Learning Capitalism - How AI is separating thinkers from followers

There is a strange reality around us. Artificial intelligence is becoming the most powerful tool humanity has ever created. It can write, code, design, plan, summarise and even think ahead for us. As Sam Altman said, "it's like carrying a bunch of PhD-level experts in your pocket". Everyone talks about how AI will make humans smarter, faster, more capable. But when I look closely, something different is happening. It's a strange effect I never imagined.

I always knew apps were altering human behaviour and cognition, but not once did I think AI would make humans dumber. There is an evident paradox here, one that is pushing the limits of human intelligence while creating a sharp divide between the thinkers and the followers. As humans, our brains need consistent activation of their cognitive muscles to get smarter. Smarter does not simply mean a higher IQ. It means the ability to learn and evolve with our surroundings. This builds brain plasticity, one of the most important things that sets humans apart from other creatures.

But with AI, people have started to outsource thinking. A similar phenomenon occurred when calculators were invented. Most people stopped doing basic mental math and lost some of that cognitive ability, while a small group focussed on more demanding mathematical problem solving. Another interesting example is GPS, where people stopped remembering routes. In all these inventions the common thread was that the majority of humans outsourced their thinking and leaned on the invention to reach an outcome. But AI is different: it isn't doing just one thing, it is doing many things in cognitive synergy, approaching the general thinking ability of a human brain. While the top 1% of people use AI to enhance their cognitive ability, the majority are falling into the trap of using AI for any question that crosses their mind.

Over time, this creates two very different types of people. A small group who use AI as leverage to push themselves harder, and the large majority who let AI replace their effort completely.

This is why I call it learning capitalism. Just like in economic capitalism, where resources accumulate with those who know how to use money well, intellectual power will accumulate with those who know how to use AI well. The fittest brain workers will grow sharper because they combine their own thinking with AI's power. The rest will slowly fall behind, becoming dependent consumers of intelligence rather than producers of it.

The majority of people offload their primary cognitive skills, while a minority builds new levels of mastery on top. This magnifies the divide between average learners and top learners a hundredfold.

The paradox is that AI should have been a great equaliser for the masses: it gives everyone the ability to learn, create and build. Yet the more powerful it becomes, the more it risks creating an intelligence inequality unlike anything we have seen. A few people will become cognitive athletes, training their minds every day with AI as a sparring partner. Everyone else will become sheep, blindly following whatever output the machine gives them.

The consequences of this will compound. Work will no longer be divided only between the skilled and the unskilled, but between those who can think and those who cannot. Entire careers will disappear for people who are unable to add original thinking on top of AI's answers. Meanwhile, those who continue to think deeply, to challenge themselves, to use AI as an amplifier rather than a crutch, will build the next generation of companies, products and ideas.

I believe the future of personal growth depends on how you relate to AI. If you treat it like a sofa, it will make you comfortable and weak. If you treat it like a gym, it will make you strong. Asking it to do your work is easy. Using it to sharpen your own thinking is hard. The difference between the two will define who thrives in the next decades.

Intelligence is no longer evenly distributed because of access to books or schools or the internet. Intelligence is now a choice. It is the decision to keep exercising your brain in the age of effortless answers. AI will not kill thinking. People will kill their own thinking by choosing convenience over struggle.

So the question is not whether AI will make humanity smarter or dumber, but whether we choose to be thinkers or followers, whether we choose to become Learning Capitalists in our lives and careers.

Why most AI startups won't survive

Every week there’s a new AI startup on X, LinkedIn or Product Hunt. Another wrapper around an LLM. Another shiny demo that calls someone else’s API. Founders raise a seed round, go viral for a day, make some quick revenue, and then vanish.
The truth is uncomfortable: if your company is just an API call to OpenAI or Anthropic, you don’t own anything. You don’t own the moat, the data, the infrastructure, or the economics. You are just a middleman, and middlemen don’t survive long.
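
To make "just an API call" concrete, here is a minimal sketch of what many wrapper products reduce to. The product idea, prompt and model choice are hypothetical; it assumes the OpenAI Python SDK and an API key in the environment.

```python
# A minimal sketch of a "wrapper" product: one hard-coded prompt around
# someone else's model. The contract-summary idea, prompt and model name
# are hypothetical examples, not a real product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_contract(contract_text: str) -> str:
    """The entire 'product': one system prompt plus one API call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a legal assistant. Summarise contracts in plain English."},
            {"role": "user", "content": contract_text},
        ],
    )
    return response.choices[0].message.content
```

Everything of value in that function, from the model to the pricing to the rate limits, belongs to the provider; whatever moat you have has to live outside it.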

The platform always eats the layer above it

History repeats.
  • If you built a business on top of Facebook pages, Facebook killed your reach.
  • If you built on top of Twitter bots, Twitter closed the API.
  • If you built on top of Shopify plugins, Shopify cloned you.
Why would AI be any different? Right now, startups are building clever UIs around APIs. But the model creators are not your partners; they are your landlords. At some point they raise the rent, change the rules, or release the same feature natively.

Who actually wins?

Four kinds of companies survive this wave:
  • The model builders. The people training foundation models with compute, data, and talent. This is capital intensive and brutally hard, but it’s where the real moat lives.
  • The infrastructure players. Cloud, GPUs, fine-tuning platforms, data labeling, orchestration layers. These companies sell the picks and shovels for the AI gold rush.
  • The distribution moguls. Startups that have cracked the global distribution game through virality, networks or capital will make it big.
  • The real problem solvers. There is no replacement for true customer-focused problem solving. If you build a solution that customers genuinely need and use AI to make it better, you will survive any hype cycle.
Everyone else? They’re features waiting to be absorbed.

But isn’t there room for applications?

Yes, but the bar is much higher than a wrapper. If you’re building an “AI startup,” you need one of:
  • Proprietary data that no one else has.
  • Distribution that no one else can match.
  • A workflow so deeply embedded that replacing you feels impossible.
If you don’t have at least one of these, you’re not a company. You’re a thin UI on someone else’s infra.

The hype cycle is brutal

Investors are learning fast. They funded clones, wrappers, gimmicks. Now they want defensibility. They want to know why the platform won’t kill you in 12 months. Most AI startups can’t answer that. 

Model builders will always say wrappers are fine, because that’s how they make their money.

The uncomfortable advice

If you’re building in AI today, ask yourself honestly:

  • What do I control that the API provider can’t take away?
  • What can I defend if the model costs drop to zero or features become free?
  • Am I building something with a moat, or just a feature?

Because here’s the reality: when the dust settles, the only survivors will be the ones who own the model, own the infrastructure everyone depends on, own distribution, or solve a problem customers genuinely need solved. Everyone else will be a footnote in the history of another hype cycle.