36,000 Custom GPTs? The New Standard.
One GPT Per Project
I used to think “having an AI” meant one chat window and a bunch of copy-pasted context. Then I read this creator’s breakdown and felt a little behind. Because the era of using a single, generic AI chat for all your work is officially dead. What replaces it isn’t one smarter bot. It’s a crowd of specialized bots, each built for a narrow job, with the right context already loaded. And once you see why, it’s hard to unsee.
We often picture AI as one assistant we talk to, but the biggest firms are treating it more like a workforce. I came across an insightful Reddit contributor who dug into Boston Consulting Group’s decision to deploy 36,000 custom GPTs for roughly 32,000 consultants. That’s more than one assistant per person, which sounds absurd until you run the math. McKinsey is reportedly moving in the same direction, targeting tens of thousands of agents as well.
The Logic Behind the Numbers
At first, “36,000 bots” sounds like a tech flex. But the author makes a simple point: a global consulting firm runs thousands of projects, and each one has different data, goals, and constraints. One general AI has to relearn the world every time you switch clients.
So instead of forcing a generic model to understand every engagement from scratch, they build a dedicated AI per project or per role within the project. If a single engagement needs a pricing expert, a market researcher, and a strategy writer, that can be three different GPTs right there. Multiply that across thousands of active and overlapping projects, and the number stops looking crazy.
The mindset shift is the real headline. AI isn’t just a browser tab anymore. It’s starting to look like the underlying infrastructure of knowledge work, like an internal operating layer that sits under the team.
Specialization Wins
This Reddit breakdown emphasizes something most teams ignore: access to a chat interface is not the same as having useful tools. BCG isn’t handing everyone the same blank prompt box and calling it transformation. They’re building role-specific GPTs that behave like specialists.
Think of it like hiring. A generalist can help in many places, but when the system is complex, you want the master electrician, not the handyman. These GPTs are loaded with internal frameworks for strategy, operations, marketing, and the way the firm actually works, so the AI doesn’t have to be “taught” the firm’s methodology every time someone opens it.
When you ground these bots in real frameworks and institutional know-how, you reduce errors and you get output that sounds like the company, not like generic AI. In other words, the IP isn’t just in slide decks anymore. It’s embedded in the assistants people use every day.
The Practical Fundamentals
There’s a lot of noise about “autonomous agents” that supposedly run everything end to end. The creator’s point is that BCG started with what actually works right now. Not sci-fi. Fundamentals.
The core value is memory and reusability. In most companies, a great AI conversation dies in one person’s chat history. Nobody else benefits, and the next person repeats the same work. A custom GPT changes that because the instructions, context, and reusable “brain” of that work get saved as an asset.
That means a team member can leave a project and a new one can jump in, and the AI still holds the context. It turns one-off chats into durable infrastructure and reduces the “knowledge silo” problem where good prompts and good thinking stay trapped with one person.
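To make the idea concrete, here is a minimal sketch of what treating a GPT’s “brain” as a saved, versioned asset might look like. Every name here (`GPTAsset`, `save_asset`, `load_asset`, the file layout) is a hypothetical illustration, not a real product API; the point is simply that instructions and context live in a shared file, not in one person’s chat history.

```python
# Hypothetical sketch: a custom GPT's instructions and context as a
# versioned asset on disk, so a new teammate inherits the full setup.
import json
from dataclasses import dataclass, asdict, field
from pathlib import Path

@dataclass
class GPTAsset:
    name: str                              # e.g. "pricing-expert"
    instructions: str                      # the system prompt / methodology
    context_files: list = field(default_factory=list)  # grounding docs
    version: int = 1

def save_asset(asset: GPTAsset, directory: Path) -> Path:
    """Persist the asset so it outlives any single conversation."""
    path = directory / f"{asset.name}-v{asset.version}.json"
    path.write_text(json.dumps(asdict(asset), indent=2))
    return path

def load_asset(path: Path) -> GPTAsset:
    """Rehydrate the asset for the next person on the project."""
    return GPTAsset(**json.loads(path.read_text()))
```

A plain JSON file is enough to show the shift: the “great AI conversation” stops being ephemeral and becomes something you can review, version, and hand off.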
Scaling Is the Hard Part
Once you accept that you may need dozens, then hundreds, then thousands of bots, the challenge shifts. It’s no longer “how do I prompt well?” It becomes “how do I manage a fleet?”
Creating, updating, assigning, and version-controlling thousands of GPTs is a logistical mess if you do it manually. The Reddit contributor argues that tools designed for cloning and deployment (they mention options like GPT Generator Premium) become valuable simply because they help you replicate what works. You want to clone a successful GPT, tweak it for a new client or team, and ship it fast.
This is the factory mindset. You’re not experimenting. You’re building a system that produces intelligence on demand, and the bottleneck becomes operations, not model quality.
How to Replicate the BCG Model
You probably don’t need 36,000 bots tomorrow. But you can steal the logic immediately.
Start small: pick one critical role or one repetitive project type.
Build one bot: create a robust custom GPT for that task with your best instructions and frameworks.
Test and clone: run it for a few weeks, then clone and refine it for a new team, client, or workflow.
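The “test and clone” step can be sketched in a few lines. This is a hypothetical illustration, assuming the GPT config is a plain dictionary like the one above; `clone_gpt` and its fields are made up for the example, not a real tool’s API.

```python
# Hypothetical sketch of "clone and refine": copy a proven config,
# override only what changes for the new client, and bump the version.
from copy import deepcopy

def clone_gpt(base: dict, **overrides) -> dict:
    """Derive a new GPT config from a proven base without mutating it."""
    new = deepcopy(base)
    new.update(overrides)
    new["version"] = base.get("version", 1) + 1   # track the lineage
    new["cloned_from"] = base.get("name")
    return new

# A bot that already works on one engagement...
pricing_bot = {
    "name": "pricing-expert",
    "instructions": "Apply the firm's pricing framework step by step.",
    "version": 3,
}

# ...tweaked for a new client instead of rebuilt from scratch.
client_bot = clone_gpt(
    pricing_bot,
    name="pricing-expert-acme",
    instructions=pricing_bot["instructions"] + " Focus on SaaS pricing.",
)
```

The design choice worth noting is that the base stays untouched: the original bot keeps working while the clone gets refined, which is what lets a fleet grow without breaking what already works.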
The goal isn’t a big number. It’s organizational capability. The winners won’t be the ones with the flashiest tech later. They’ll be the ones who started packaging their knowledge into reusable AI tools early.
Credits to Adam Biddlecombe
