I Built a SaaS That AI Could Clone in an Afternoon — Then I Figured Out Why
I built a SaaS and realized AI could replicate it in an afternoon. The problem wasn't the features — I was playing on the wrong layer. This forced me to map out a "Stack Ownership Spectrum," and led to a counterintuitive discovery: going lighter isn't safe either.
I built a SaaS called Traction. It helps indie developers track product growth metrics.
Technically, it worked fine — the dashboard was functional, the data pipeline ran smoothly. But after launch, I realized something: another developer could spin up a working MVP with Cursor in a single afternoon. Nothing was stopping them. My UI? AI-generated. My business logic? No proprietary data. My user experience? One prompt away from being cloned.
To be fair, dashboard products were never high-barrier. But AI turned “not high-barrier” into “zero barrier” — and that difference is qualitative, not quantitative.
I started asking: where exactly is the problem? It’s not that the features are bad or the design is lacking. I was playing on the wrong layer. I’d poured all my effort into the most replicable part of the stack — UI and business flow.
This forced me to rethink a fundamental question: in the age of AI, which layer of the tech stack should you actually own?
Stripe Isn't a Dashboard, But Nobody Can Clone It
Story one: Stripe. You've used Stripe's payment page, but you've probably never thought of Stripe as a product you log into to do your daily work. Stripe's core bet wasn't a pretty dashboard for merchants to open every morning; it chose to be a payment API. Merchants keep their own interfaces and call Stripe's capabilities underneath.
Measured by surface area, Stripe's core product is far smaller than Traction: the payment API itself needs no frontend, no user system, no responsive design. But the payment network and compliance moat behind it mean nobody can use AI to replicate a Stripe. Less surface, far more irreplaceable.
Story two: Notion. Notion is a complete product — UI, editor, collaboration features, hosting. By that logic, AI should easily be able to clone a Notion.
But try it: you can use AI to clone Notion’s editor, but you can’t clone the 3 years of notes someone has in Notion. The data users accumulate is Notion’s real moat. And Notion smartly added an API that lets AI tools access its data — the AI ecosystem actually reinforces its defenses.
Both stories point to the same insight: a product’s defensibility isn’t about how many features you build — it’s about which layer of the stack you own.
The Stack Ownership Spectrum
I organized this thinking into a framework I call the Stack Ownership Spectrum — ranging from “you provide everything” to “you provide just a piece of callable code”:
- L1 Full-Stack Delivery: you provide UI + logic + data + hosting -> Notion, Figma, my Traction
- L2 With Interface: you provide UI + logic; users may bring their own model -> Slack Bot, embedded Widget
- L3 No Interface: you provide logic + data, no UI -> Stripe API, Twilio, OpenAI API
- L4 Tool Interface: you define tools; AI handles orchestration and invocation -> MCP Server
- L5 Pure Capability: you provide just a piece of callable code -> npm package, CLI tool, Agent Skill
My Traction sits at L1 — I provided the entire stack, bore all the costs, and every layer is replicable. Stripe sits at L3 — no UI, but the core value is irreplaceable. Notion is also L1, but data lock-in keeps it rock-solid.
Where you sit on the spectrum doesn’t matter. What matters is whether you have something at that position that nobody can take away.
So why has this spectrum suddenly become important?
Three Things Happening Right Now
AI Can Build UIs, So UIs Are Losing Value
This is something I felt firsthand with Traction. When AI can instantly generate a custom dashboard, your carefully designed UI transforms from a selling point into a cost center — you’re still paying development and maintenance costs for something that no longer provides competitive advantage.
So “SaaS is dead”? No. SaaS built around UI as core value is losing its moat; SaaS built around data as core value is actually getting stronger. If you have a data-driven SaaS, don’t panic — add an API layer or an MCP interface so AI can access your data.
The Consumer of Software Is Changing: From Humans to AI
The old value chain looked like this:
You build a product -> User visits your website -> User operates the UI -> Value delivered. Every step has friction, every step loses users.
MCP (Model Context Protocol) points toward a new value chain:
You define capabilities -> Register with an MCP registry -> User’s AI automatically discovers and calls them -> Value delivered. Auto-discovery and registration mechanisms are still early, but the direction is clear: no UI friction, no signup flow, users might not even know they’re using your service.
How is this different from APIs? APIs target human developers — they need to read docs and write integration code. MCP targets AI agents — they just need to read the schema to use it. APIs are consumed by humans; MCP is consumed by AI.
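Concretely, "reading the schema" means something like this: a tool declaration in the shape MCP uses when an agent lists available tools. The tool name and fields below are hypothetical, but the structure (a name, a description, and a JSON Schema for inputs) is all an AI agent needs to call it — no docs, no integration code:

```json
{
  "name": "get_growth_metrics",
  "description": "Return weekly active users for a product over a given period",
  "inputSchema": {
    "type": "object",
    "properties": {
      "product_id": { "type": "string", "description": "Product identifier" },
      "period": { "type": "string", "enum": ["7d", "30d", "90d"] }
    },
    "required": ["product_id"]
  }
}
```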
And this trend goes even deeper. Imagine: you have your own local AI — it knows your files, your preferences, your work habits — and it discovers and calls various tools on its own. You no longer need to install apps one by one, create accounts, or learn UIs. Your AI does all of that for you.
This isn’t science fiction. Projects like OpenClaw are already exploring a “Chat-native Skill” model — users load capabilities within their own chat environment, bring their own AI model, and developers don’t even bear inference costs. OpenAI is reportedly exploring a similar Skills system, letting users load modular capabilities via slash commands. This direction means: your product may not need to be “opened” by users — it just needs to be “loaded” by their AI.
MCP is a toolbox for AI; Skills are an operating manual for AI — they’re complementary, not competing. But they point to the same conclusion: the consumer of software is shifting from humans to AI, and your product needs to be ready for that.
And that’s just the first step. Think about Dependabot: you configure it once, and it automatically checks dependency updates daily, opens PRs, and tells you about risks. You never “log into Dependabot” or “use Dependabot’s UI.” That’s the autonomous agent model — the user sets it up once, the AI runs continuously, pushing results on demand. It doesn’t need to compete for user attention or require repeated logins. This enables more natural value-based pricing: you pay for outcomes, not for “seats.”
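That "configure once" is literal. Dependabot's entire user-facing surface is a small config file checked into the repo; everything after that runs without you:

```yaml
# .github/dependabot.yml — set it up once, the agent runs daily on its own
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
```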
If autonomous agents go mainstream, users will have less and less patience for “logging into a website and operating a UI.” This won’t kill all products, but it will continuously push value from “interface” toward “capability” — so does that mean going lighter is always the right move?
That said, let me be clear: both the MCP and Skills ecosystems are still very early, and monetization models are immature. The prudent strategy is to test the waters at low cost (an MCP Server might be just a few hundred lines of code, a Skill might be just a markdown file) rather than going all in.
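To show how low that cost really is, here is a stdlib-only sketch of the core of an MCP-style server: a tool registry plus a dispatcher. This is a conceptual illustration, not the official SDK (which handles the actual protocol transport for you), and every name in it is made up:

```python
import json

# Registry of tools exposed to the AI: name -> spec + handler
TOOLS = {}

def tool(name, description, input_schema):
    """Decorator that registers a function as a callable tool."""
    def decorator(fn):
        TOOLS[name] = {"description": description,
                       "inputSchema": input_schema,
                       "handler": fn}
        return fn
    return decorator

@tool("get_growth_metrics",
      "Weekly active users for a product (hypothetical example)",
      {"type": "object",
       "properties": {"product_id": {"type": "string"}},
       "required": ["product_id"]})
def get_growth_metrics(product_id):
    # A real server would query your proprietary data source here --
    # that query, not this plumbing, is where the value lives.
    return {"product_id": product_id, "wau": 1234}

def handle(request_json):
    """Dispatch a tools/list or tools/call request (simplified shapes)."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        return [{"name": n, "description": t["description"],
                 "inputSchema": t["inputSchema"]} for n, t in TOOLS.items()]
    if req["method"] == "tools/call":
        t = TOOLS[req["params"]["name"]]
        return t["handler"](**req["params"]["arguments"])
```

Note how little of this is defensible: the registry and dispatcher are generic. Everything that matters sits behind the one line that touches your data.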
But Let Me Pour Some Cold Water
By this point, you might think the conclusion is obvious — go light, build APIs, MCP Servers, CLI tools, and you’re set.
I thought so too at first. Then I asked myself an uncomfortable question:
If an MCP Server only takes a few hundred lines of code, how is that fundamentally different from “Cursor cloning a dashboard in an afternoon”?
The answer: It isn’t.
npm has over 2 million packages, and the vast majority go unused. A lightweight tool — whether MCP Server or CLI — without proprietary resources behind it is just as fragile as a generic SaaS. Maybe even more fragile, since it doesn’t even get brand exposure.
This is why I keep saying where you sit on the spectrum doesn’t matter — what matters is whether you have something at that position that nobody can take away:
- Notion survives at L1 because users’ 3 years of notes can’t be migrated easily. Data lock-in is stronger than any UI
- Stripe survives at L3 because the payment network isn’t a code problem — it’s a business relationship problem
- An industry-data MCP Server survives at L4 because the value is in the data, not in those few hundred lines of code
- But a pure-algorithm npm package at L5 is the most dangerous — today’s hot package could be tomorrow’s built-in Claude capability
Going right isn’t the answer. Building proprietary resources at whatever layer you choose — that’s the answer.
So How Should You Choose?
Don’t start from “what form should I build?” Start from “what do I have?”
If you have proprietary data (user-accumulated content, industry-specific datasets) — build an API or data service (L3), and add an MCP interface so AI can call it. Your data is your moat.
If you have proprietary technical capability (algorithms others can’t match) — build an MCP Server or CLI tool (L4). But be honest about how “proprietary” it really is — if the open-source community can catch up in three months, that’s not proprietary.
If you have taste and audience (personal brand, content ability, a group of people who trust you) — do content + capability output. Taste is the thing AI is worst at replicating.
If you don’t have any of the above yet — build an L1 product first, accumulate data or audience through the process, then consider migrating. “Start light” sounds appealing, but a lightweight product without proprietary resources is just as fragile as a heavyweight product without them.
Back to My Own Situation
I analyzed myself using this framework.
Honestly: I don’t have proprietary data, and I don’t have proprietary algorithms. What I have is sustained thinking about AI product strategy and the ability to turn that thinking into content — taste and expression.
So my core strategy is blogging and newsletters (which you’re reading right now): building brand, building SEO authority, accumulating trust. Taste and perspective are what AI is worst at replicating — AI can generate any opinion, but it can’t generate the perception that “this person has been in the trenches, has shipped products, and is worth listening to.” Around this core engine, I distribute the same set of capabilities through different forms: open-source CLI + MCP Server for the developer community, background AI pipelines (demand mining, trend monitoring) running continuously to feed the content machine.
What Traction taught me isn’t “SaaS is dead.” It’s a more specific lesson: building an L1 product without proprietary data is the most vulnerable position you can be in. Next time I build a product, my first question won’t be “what features should I build?” — it’ll be “at this layer, what do I have that nobody can take away?”
FAQ
Q: Is it worth building an MCP Server right now?
Worth testing, not worth going all in. The risk is extremely low — we’re talking a few hundred lines of code. The most promising direction right now is MCP Servers that connect to proprietary data sources (industry databases, internal enterprise systems, APIs requiring authorization), because pure-logic MCP Servers are too easy to replicate. There are three monetization paths: a free open-source version that builds brand and feeds a paid hosted Pro tier; usage-based billing via API keys; or licensing to larger platforms. Ship a free version first to validate demand, then decide whether to double down.
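The accounting core of the usage-based path is equally small: a per-key call counter you settle against at the end of the billing period. A stdlib sketch, with made-up keys and a made-up price:

```python
from collections import Counter

PRICE_PER_CALL = 0.002  # hypothetical: $0.002 per tool call
VALID_KEYS = {"key_live_abc123"}  # keys issued to paying users

usage = Counter()  # api_key -> number of calls this billing period

def metered_call(api_key, tool_fn, *args, **kwargs):
    """Reject unknown keys, count the call, then run the tool."""
    if api_key not in VALID_KEYS:
        raise PermissionError("unknown API key")
    usage[api_key] += 1
    return tool_fn(*args, **kwargs)

def invoice(api_key):
    """Amount owed for the period, in dollars."""
    return usage[api_key] * PRICE_PER_CALL
```

In production you would persist the counter and delegate key issuance to a payments provider, but the point stands: metering is cheap; the tool being metered is what has to be worth paying for.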
Q: Is AI’s impact on product forms being overhyped?
Possibly. Regulation might constrain autonomous AI agents, user privacy anxiety might outweigh convenience, and model capabilities could plateau. But even if the impact is only half of what’s predicted, “value migrating from UI to data” and “AI becoming the primary software consumer” still hold true. Building things that are “useful if AI gets stronger, still useful if AI stalls” is the best hedge — proprietary data has value in any scenario, and APIs work regardless of whether the caller is human or AI.
Q: How do I figure out which layer my product is on?
Ask yourself three questions: (1) Can users bypass your UI and get the core value directly? If yes, your UI isn’t a moat. (2) Is your core value logic or data? Logic is easy to replicate; data isn’t. (3) Can an AI agent call your service directly? If not, you’re missing a growing distribution channel. Connect the three answers and your position — along with your migration direction — becomes clear.
If you found this useful, I share ongoing thoughts on AI product strategy and indie development on Twitter.