Ilia Badeev
Contributor

Giant AI models and the shift to specialized AI

Turns out, you don’t need a trillion-parameter model to get things done. Smaller, focused AIs are stealing the spotlight for real business impact.


When it comes to artificial intelligence, bigger often looks better. Tech giants tout models with billions — or even trillions — of parameters, promising that these digital behemoths can do everything from solving equations to writing code to producing near-scientific research. The idea behind the promise is clear: If you want cutting-edge results, reach for the cannon.

But bigger is not always better (no pun intended…): bigger usually means more complexity and less flexibility. Slowly, companies are realizing that a trillion-parameter model isn’t always the right solution for their business; not every AI solution needs a giant LLM. A more focused approach often delivers better results.

Small, specialized models tuned for specific tasks on relevant data are gaining traction. Fewer resources, better customization and tighter control: what’s not to love? Yet there is a growing gap between what the giants promise and the outcomes that actually benefit the business.

The giant issue of the giants

Since the release of ChatGPT in November 2022, models have only gotten bigger. Despite the training, development and inference costs that come with scale, bigger models still deliver. The math is simple: take a model with more parameters, put it on more GPUs, give it time and the results follow. To simplify, “load” more money into it and get better output; the more money, the better the results. All the tech giants (OpenAI, Google, Anthropic, Meta) have been playing this game for the last five to seven years:

  • 2018: GPT-1 and BERT — both <1B parameters
  • 2019: GPT-2 — 1.5B parameters
  • 2020: GPT-3 — 175B parameters
  • 2023: GPT-4, Claude, Gemini Ultra — all massive models
  • 2024–2025: Llama — 405B, DeepSeek — 671B

The trend is clear. And it works. Research from the Australian Institute for Machine Learning shows that “increasing the parameters is 3x more important than expanding the training set size to train larger models.”

But there is a big problem with this approach.

Let’s be clear: LLMs are generalists. And while big models give good results, small models can give the same or slightly better results on specific tasks for a fraction of the time and cost.

Worse, LLMs are slow. Every token has to pass through far more parameters, which means longer inference times and infrastructure costs that not everyone can afford.

But what’s great about big models? They’re exactly like a Swiss Army knife: they can perform nearly any task and they give you results. But many businesses simply can’t afford this at scale. Plus, the daily grind of business isn’t about scientific discovery; it’s about repetitive, mid-level tasks such as summarizing meetings, analyzing Jira tickets or drafting reports.

Most companies have a real business process to simplify and a business problem to solve, and for that you do not need a Swiss Army knife; you need a surgeon’s scalpel: a sharp, well-defined tool that performs one task, and only one, with maximum precision. Not a one-size-fits-all.

Unlike their hulking counterparts, small language models (SLMs) are lean, precise and domain-focused. They’re cheaper, faster and accurate within their niche. For example, a compliance firm might deploy a lightweight model trained on regulations and internal policies. A healthcare provider could fine-tune a smaller system to interpret lab results and patient notes with pinpoint accuracy.

OpenAI provides a great real-world example in its official documentation, which states that by fine-tuning GPT-4o-mini for a very specific task on 1,000 examples, you can reach 91.5% accuracy (on par with the larger GPT-4o) for only 2% of the price. And inference will also be much faster.
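As a rough illustration, here is a minimal sketch of that kind of task-specific fine-tuning with the OpenAI Python SDK. The training file name, its contents and the exact model snapshot are assumptions for the example, not values taken from OpenAI’s documentation.

```python
# Minimal fine-tuning sketch using the OpenAI Python SDK (pip install openai).
# Assumes reviews.jsonl already holds ~1,000 chat-formatted examples, e.g.
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the training data.
training_file = client.files.create(
    file=open("reviews.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on the small base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative snapshot name; check current docs
)
print(job.id, job.status)

# 3. When the job finishes, job.fine_tuned_model can be passed to
#    client.chat.completions.create() like any other model id.
```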

Running massive models for everyday business tasks, like monitoring customer reviews across Amazon, Reddit, YouTube or X, quickly proves inefficient. Why use a billion-parameter Swiss Army knife to summarize simple comments when a leaner, task-specific model can do it faster, more reliably and at a fraction of the cost?
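To make the comparison concrete, here is a minimal sketch of the “leaner” option using a small open-weight summarizer via Hugging Face transformers. The specific model and the sample review are illustrative assumptions, not a recommendation.

```python
# Sketch: summarizing customer comments with a small, task-specific model
# using Hugging Face transformers (pip install transformers torch).
from transformers import pipeline

# DistilBART is a distilled summarization model, orders of magnitude smaller
# than a frontier LLM; it runs comfortably on a single CPU or small GPU.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

review = (
    "The checkout flow kept timing out on mobile, support took two days to reply, "
    "and the refund only arrived after I escalated the ticket twice."
)

summary = summarizer(review, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```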

Blinded by the buzz

So why are organizations still flocking to LLMs? Two reasons: marketing hype and human psychology.

Marketing hype

Tech giants compete in the race for AGI (artificial general intelligence), and by definition AGI won’t be an SLM. The stakes are high and the reward is even higher. They push their largest and flashiest products, selling the dream of a universal brain to attract more attention, investment and talent to their products. They are building a digital Albert Einstein. But you, as a customer, don’t hire Albert Einstein to solve your 5th-grade math problems, right?

Human psychology

We anthropomorphize intelligence and tend to humanize AI. Just as most people assume that a genuinely smart person is good at everything, we assume the smartest model will be the best for any job. It isn’t true. Properly trained small models often get better results within the domains they were trained on. Take Microsoft’s Phi-4 as an example: it excels at math reasoning with “only” 14B parameters. Another example: Med-PaLM, a domain-specialized model, scores above 60% on US Medical Licensing Examination (USMLE)-style questions and is highly applicable in real-world medical settings.

Tech giants are in the business of spectacle: the bigger and flashier their models, the more media coverage they attract. The more headlines they dominate, the more prestige they accumulate. For executives who don’t live in the weeds of AI development, that kind of status-symbol promise is intoxicating. It feels safe (for all the wrong reasons) to invest in the loudest, best-known voice, especially when every competitor is doing more or less the same. Follow the crowd; get lost in it.

I have to admit, the combination of marketing hype and human bias creates a powerful illusion: that a universal AI brain is suddenly at your fingertips. In practice, it often leads to overspending and underperformance. When a hand-held nutcracker will crack your few walnuts, reaching for a sledgehammer is neither the best nor the wisest decision. It is not just wasteful; it’s poor strategy.

Particularly since AI doesn’t actually fix problems. Quite the contrary — it magnifies them.

AI as amplifier, not savior

If you have bad processes, AI will make them 10 times worse. If you have good processes, it will make them 10 times better, faster and more efficient. Consider customer support: companies rush to plug LLMs into bots, only to discover poor results. The real culprit? Outdated, incomplete or human-dependent knowledge bases.

Banks and insurance firms are opting for smaller models hosted on private clouds, prioritizing security and regulatory compliance. Retailers are using mid-size AI to scan product reviews and social chatter for trends, cutting costs dramatically compared to running GPT-scale systems.

I am not saying you should abandon large models entirely. They’re valuable for broad reasoning and innovation. But in my experience, companies love to start with optimization. First, achieve the result, whatever the cost. Learn how to do it right. Then optimize. In other words, start big, then go small. Test your task on a large model first, not for long-term use, but to establish what success looks like for you and your business. Once you’ve defined clear prompts, outputs and expectations, transition to a smaller model and fine-tune it; a minimal sketch of that workflow follows below. It is also worth checking AIMultiple’s analysis of specialized language models.
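The sketch below assumes an OpenAI-style API purely for illustration; the evaluation set, the keyword check and the model names are placeholders, not a prescribed setup. The point is the shape of the workflow: define success with a large model, then hold the smaller one to the same bar.

```python
# Sketch of the "start big, then go small" workflow using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()

# A handful of representative tasks with the outputs you consider "good enough".
eval_set = [
    {"prompt": "Summarize this Jira ticket: ...", "expected_keywords": ["login", "timeout"]},
    {"prompt": "Draft a two-sentence status update from these notes: ...", "expected_keywords": ["release"]},
]

def run(model: str) -> float:
    """Return the fraction of eval cases whose output contains all expected keywords."""
    passed = 0
    for case in eval_set:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": case["prompt"]}],
        )
        text = response.choices[0].message.content.lower()
        if all(kw in text for kw in case["expected_keywords"]):
            passed += 1
    return passed / len(eval_set)

# 1. Establish the baseline with a large model to define what "success" looks like.
baseline = run("gpt-4o")
# 2. Re-run the same evaluation against the smaller (optionally fine-tuned) model.
candidate = run("gpt-4o-mini")  # or your fine-tuned model id
print(f"large: {baseline:.0%}, small: {candidate:.0%}")
```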

To put it in the simplest terms: large foundation models are for broad exploration and context (which you need to have in place first); smaller, domain-specific models are for execution (once you’ve narrowed down your problem). The future of industrial (practical) AI isn’t a single giant brain in the cloud. It’s an ecosystem of specialized minds working together.

None of this means the giants will vanish. They still matter — for exploration, cutting-edge analysis and creative problem-solving. But they won’t be the workhorses of business. AI is just the cherry on top. It can be a cherry on top of a cake — or a cherry on a pile of crap. Without proper processes and data governance, even the most advanced model won’t magically solve your problems.

The takeaway? Before buying the cannon, ask yourself: “Is my organization truly facing a frontier problem, or just trying to summarize a meeting? Do I really need Einstein to solve my problems?”

Disclaimer: This article is for informational purposes only and does not constitute professional advice. Organizations should consult with legal and technical experts before implementing AI systems. Trevolution Group makes no warranties about the completeness, reliability, or accuracy of this information.

This article is published as part of the Foundry Expert Contributor Network.

Ilia Badeev

Ilia Badeev is the head of data science at Trevolution Group — one of the world’s largest travel groups behind brands like ASAP Tickets, Skylux Travel, Dreamport, Triplicity, Oojo and others. He spearheads the group’s global AI strategy, driving innovation across airline ticketing and travel services. With advanced expertise in Python, TensorFlow, AWS and Kubernetes, Ilia transforms complex data into high-impact, real-world solutions.
