Last year, a potential client approached me with an exciting opportunity: build a two-sided marketplace platform. The budget was substantial, the technical challenge was interesting, and it would have been one of my biggest projects to date.
I said no.
Now, you might think I'm crazy for turning down that kind of money. But here's what I learned from that conversation—and from watching dozens of startups waste months building products nobody wants: your first MVP shouldn't be a product at all.
This client came to me excited about no-code tools and AI platforms, thinking they could quickly validate their marketplace idea. They weren't wrong about the technology—you can build complex platforms faster than ever. But they were asking the wrong question entirely.
In this playbook, you'll discover:
Why "lovable" MVPs often become expensive validation traps
The critical difference between testing an idea and testing demand
My 4-step framework for deciding when to build vs. when to validate manually
Real examples of when building early makes sense (and when it absolutely doesn't)
How to validate a complex platform idea in days, not months
If you're considering building an MVP—especially a "lovable" one—this might save you from an expensive mistake. Check out more startup validation strategies in our growth playbooks and product-market fit guides.
Walk into any startup accelerator or scroll through Product Hunt, and you'll hear the same advice over and over: "Build a lovable MVP." The logic seems sound—create something so compelling that early users can't help but fall in love with it.
Here's what every startup guru tells you about lovable MVPs:
Focus on user experience from day one - Make it beautiful, intuitive, and delightful
Include core features that solve the main problem - Don't ship something broken or incomplete
Get user feedback quickly - Launch fast and iterate based on real usage
Build something you'd use yourself - Passion for your own product creates better outcomes
Use modern tools to build faster - No-code platforms and AI can accelerate development
This advice exists because it worked for some famous companies. Slack's initial product was genuinely lovable. Airbnb's early website created emotional connection. Instagram's simple photo filters delighted users from day one.
The problem? These examples create survivorship bias. For every lovable MVP that succeeded, hundreds failed not because they weren't lovable enough, but because they were solving problems that didn't exist or targeting markets that weren't ready.
The conventional wisdom treats "lovable" as the solution to low user adoption. But what if the real issue isn't your product's lovability—it's whether anyone actually wants what you're building in the first place?
Who am I
7 years of freelance experience working with SaaS and ecommerce brands.
The client who contacted me had done their homework. They'd identified a gap in the market, researched the competition, and even mapped out user personas. Their marketplace idea would connect service providers with businesses in a specific niche—think Upwork, but for a specialized industry.
"We want to see if our idea is worth pursuing," they told me. "We've heard these new AI tools can build anything quickly and cheaply."
They weren't wrong about the technology. I could have built their two-sided marketplace using platforms like Bubble or even newer AI-powered tools. The technical execution would have been straightforward—user registration, matching algorithms, payment processing, review systems.
But their core statement revealed the fundamental problem: they wanted to build something in order to find out whether people wanted it. That's backwards, and it's exactly how validation gets expensive.
I dug deeper into their plan. They had:
No existing audience on either side of the marketplace
No validated customer base proving demand
No evidence that their target market would use a platform solution
Just enthusiasm and a logical-sounding hypothesis
This reminded me of another client situation I'd encountered before—a SaaS founder who spent months building a "perfect" onboarding flow, only to discover their core value proposition was wrong. The product was beautifully designed and technically solid, but it solved a problem people didn't actually have.
That's when I realized this marketplace client was about to make the same expensive mistake. They were treating their MVP like a product validation tool when what they needed was a market validation process.
My experiments
What I ended up doing and the results.
Instead of taking their money and building what they asked for, I shared a completely different approach. Here's exactly what I told them, and the framework I use to decide when building makes sense versus when manual validation is the smarter path.
Step 1: The One-Day Validation Test
I suggested they spend one day—not three months—testing their core hypothesis. Create a simple landing page explaining their marketplace concept. Write copy that clearly articulates the value proposition for both sides. Then manually drive traffic through their existing networks and measure genuine interest.
"If you can't get 100 people to sign up for updates about your marketplace in one day of focused effort," I told them, "you definitely can't build a successful platform."
Step 2: Manual Matchmaking Week
Instead of building matching algorithms, I recommended they spend one week manually connecting service providers with potential customers via email and phone calls. This would test whether the value proposition actually worked in practice—whether people would pay, whether service quality met expectations, whether the business model made sense.
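The only "system" that week needs is a log of every manual introduction and what happened to it. A minimal sketch of that record-keeping, with hypothetical field names and sample entries, could look like this:

```python
# Log manual provider/customer introductions and summarize outcomes.
# Field names and the sample entries are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Match:
    provider: str
    customer: str
    intro_made: bool      # did we actually connect the two sides?
    paid: bool            # did the customer pay for the service?
    amount: float         # revenue from this match, if any
    notes: str            # quality issues, objections, follow-ups

matches = [
    Match("Provider A", "Customer 1", True, True, 500.0, "Happy, asked for repeat work"),
    Match("Provider B", "Customer 2", True, False, 0.0, "Wanted a vetted referral, not options"),
]

intros = [m for m in matches if m.intro_made]
paid = [m for m in intros if m.paid]
print(f"{len(intros)} intros, {len(paid)} paid, ${sum(m.amount for m in paid):.0f} revenue")
print(f"Paid conversion: {len(paid) / len(intros):.0%}" if intros else "No intros yet")
```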
Step 3: Demand Validation Before Supply Building
Based on my experience with marketplaces, the hardest part isn't the technology—it's achieving network effects. I suggested they focus first on proving they could consistently generate demand (customers looking for services), then worry about building supply (service providers).
Step 4: Build Only After Manual Process Breaks
My rule: only start building when manual processes become the limiting factor. If they could manually match 10 customers with service providers monthly and prove the economics worked, then they'd have a foundation for building technology to scale beyond what humans could handle.
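Before any build decision, the economics of those manual matches have to hold up, and you need to know where the manual ceiling actually is. A minimal back-of-envelope sketch of that check, where every number is a hypothetical placeholder rather than a benchmark, might look like this:

```python
# Back-of-envelope check: do manually brokered matches make money,
# and at what volume does manual effort become the limiting factor?
# All numbers below are hypothetical placeholders.

MATCHES_PER_MONTH = 10        # what the team can broker by hand right now
TAKE_RATE = 0.20              # share of each transaction kept as revenue
AVG_TRANSACTION = 1500.0      # average value of a brokered job
HOURS_PER_MATCH = 4.0         # sourcing, vetting, intros, follow-up
HOURLY_COST = 40.0            # loaded cost of the person doing the work
MONTHLY_CAPACITY_HOURS = 80   # hours available for manual matching

revenue = MATCHES_PER_MONTH * TAKE_RATE * AVG_TRANSACTION
cost = MATCHES_PER_MONTH * HOURS_PER_MATCH * HOURLY_COST
margin = revenue - cost
manual_ceiling = MONTHLY_CAPACITY_HOURS / HOURS_PER_MATCH

print(f"Monthly revenue ${revenue:.0f}, manual cost ${cost:.0f}, margin ${margin:.0f}")
print(f"Manual ceiling: ~{manual_ceiling:.0f} matches/month")
if margin > 0 and MATCHES_PER_MONTH >= manual_ceiling:
    print("Economics work and manual capacity is the bottleneck: time to build")
else:
    print("Keep validating manually; building won't fix this yet")
```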
This approach flips the conventional MVP wisdom. Instead of building to test, you validate manually first, then build to scale what already works.
The client initially pushed back on my recommendation. "But won't manual validation take forever? These AI tools can build things so quickly now."
That's exactly the trap. Yes, you can build faster than ever. But building the wrong thing faster doesn't make it right—it just makes the failure more expensive.
After our conversation, they decided to follow the manual validation approach. Within two weeks, they discovered something crucial: their target market preferred working with vetted individual service providers rather than choosing from a marketplace of options. The "choice and convenience" they thought people wanted actually created decision paralysis.
By month two, they'd pivoted to a completely different model—a curated service where they personally matched clients with pre-screened providers. No platform needed. No complex technology required. Just a simple CRM and a proven process for quality control.
This validation approach saved them months of development time and thousands in building costs. More importantly, it led them to a business model that actually worked for their market.
Learnings
Sharing so you don't make them.
Here are the key lessons I've learned about when to build versus when to validate manually:
Build for scale, not for validation - If you're testing whether people want something, manual processes always give clearer, faster feedback than software
Technical feasibility isn't the bottleneck anymore - In 2025, almost anything can be built quickly. The constraint is understanding what should be built
Marketplace ideas are especially dangerous to build early - Network effects can't be coded; they have to be grown through real relationship building
"Lovable" often masks fundamental business model problems - A beautifully designed product that solves the wrong problem is still a failure
Manual validation reveals operational challenges - You'll discover workflow issues, customer service needs, and business model flaws that no amount of user research can uncover
Speed of building isn't the competitive advantage - Understanding your market and executing on the right solution is what matters
The best time to build is when you're overwhelmed by demand - Technology should solve growth problems, not create them
The marketplace client taught me that the question isn't "who should build a lovable MVP?" It's "who should build an MVP at all?" In most cases, the answer is: only after you've manually proven the business model works.
My playbook, condensed for your use case.
For SaaS startups:
Start with manual onboarding and customer success before building self-service features
Validate pricing and feature demand through consultative sales first
Build automation only when manual processes become the growth bottleneck
For ecommerce businesses:
Test product demand through manual fulfillment and customer service first
Validate marketplace concepts by manually matching buyers and sellers
Build platform features only after proving unit economics work manually
What I've learned