Idea validation tests are often taken to an extreme, and doing so carelessly can be dangerous.
The tech startup industry rarely reaches a consensus on the best practices that would allow a founder to build a successful startup project. The reason is that the startup field is a complex system, and the right culture, behavior, and priorities for one project don’t necessarily apply directly to another. If the environment is different, then the path a new founder has to walk will be fundamentally different from the paths that other successful founders have walked in the past.
Despite that, the rule of thumb that you need to validate your idea before you invest time and money into it is not only widely accepted as correct, but is often held up as the single most important step of running an early-stage startup project.
In his book The Lean Startup, Eric Ries was one of the first to document this mindset of approaching startup projects. Once the book was out, the lean mindset took on a life of its own, and nowadays the standard practice is to push the framework to new extremes.
For example, instead of developing a minimum viable product in order to test it against a minimum viable market, a standard practice nowadays is to validate through presales before building anything tangible at all. All you need to do is outline what you intend to build, put it on a landing page, and see whether customers respond positively or negatively to your offering.
If they are responding negatively – great, you just saved yourself a lot of time, money, and effort, and you’ll be able to iterate and even pivot extremely cheaply because you haven’t built anything yet.
It’s hard to criticize this method, as the practical benefits are enormous. The only drawback is that the founder has to swallow their ego and try to convince people to pay for a product that doesn’t exist yet.
However, is it possible that taking the lean startup methodology to an extreme could backfire and produce unintended consequences?
Let us explore a hypothetical situation. You are trying to validate and A/B test an online video game idea. You run Facebook ads for two different landing pages – one for your idea in mobile game form, and one for your idea on PC. The validation test shows that the mobile version is getting more interest, which leads you to choose the mobile version for your MVP.
While this validation test showcased that there are more people interested in mobile games in the market segments you tested, it doesn’t reveal many other important factors.
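One thing such a test can hide is whether the difference in interest is real at all. As a rough illustration (all traffic and click numbers below are hypothetical, and the helper function is a sketch, not part of any ads platform), a two-proportion z-test on the landing-page conversion rates can show that a raw "mobile wins" result may not be statistically significant:

```python
import math

def conversion_significance(clicks_a, visits_a, clicks_b, visits_b):
    """Two-proportion z-test comparing two landing-page conversion rates.

    Returns both rates and a two-sided p-value; a large p-value means
    the observed difference is plausibly just noise.
    """
    rate_a = clicks_a / visits_a
    rate_b = clicks_b / visits_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rate_a, rate_b, p_value

# Hypothetical campaign: mobile "wins" on raw conversion rate...
rate_mobile, rate_pc, p = conversion_significance(48, 1000, 39, 1000)
print(f"mobile {rate_mobile:.1%} vs PC {rate_pc:.1%}, p = {p:.2f}")
# ...but with a p-value well above 0.05, the gap could easily be noise.
```

In other words, even before asking what the test fails to measure, it is worth asking whether it measured anything at all.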
For example, the customer acquisition cost for mobile games is astronomical due to the aggressive marketing standards and overwhelming competition in that industry. For a PC game, this is not necessarily the case. A PC title is more likely to gain organic traction thanks to the active and engaged indie game community on that platform.
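To see how acquisition costs can flip the conclusion, consider a back-of-the-envelope comparison. All figures here are invented for illustration: assume both versions earn the same revenue per user, but the mobile version must buy most of its installs at a high cost per acquisition, while the PC version picks up a larger share of users organically through the indie community:

```python
def net_value_per_user(revenue_per_user, paid_share, cac):
    """Revenue per user minus the blended acquisition cost.

    Organic users cost nothing to acquire; only the `paid_share`
    fraction of users carries the cost-per-acquisition `cac`.
    """
    blended_cac = paid_share * cac
    return revenue_per_user - blended_cac

# Hypothetical unit economics, same revenue and ad prices for both options:
mobile = net_value_per_user(revenue_per_user=3.00, paid_share=0.9, cac=4.00)
pc = net_value_per_user(revenue_per_user=3.00, paid_share=0.4, cac=4.00)
print(f"mobile: ${mobile:.2f} per user, PC: ${pc:.2f} per user")
# With these assumptions, mobile loses money per user while PC is profitable,
# even though the landing-page test favored mobile.
```

The specific numbers don’t matter; the point is that "more interest" and "better business" are different questions, and a landing-page test only answers the first.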
Moreover, this validation test assumes that you can produce a similar quality product for both options. This is rarely the case – developers have their own expertise and domain knowledge, and even if the test showcases the right choice from an idea-market fit standpoint, this might not translate to real product-market fit because of a lack of ability to deliver.
A higher-quality product in a market with lower barriers to entry might be a much safer bet, even though the market is smaller.
In summary, this example shows that validation tests don’t exist in a vacuum. Running a test for its own sake can lead to confusion or, even worse, to deceptive results. You need to carefully consider your circumstances and test only for carefully chosen variables that give a clear answer to the main question – is your main value proposition received well by your target market or not?
Source: www.forbes.com