Beyond enterprise SaaS: Agentic AI and the dawn of autonomous enterprise

by Sergii Gorpynich

A 2025 MIT Sloan Management Review survey found that 35% of organizations are already using agentic AI, with another 44% planning deployments, yet roughly half still lack a clear enterprise AI strategy for what they are actually trying to achieve. That gap between rapid adoption and intentional direction should feel familiar.

Everything in business moves in cycles: we have already lived through the shift from analog to digital, from on-premise to cloud. For young people today, computers and mobile phones are simply the default. AI is on the same trajectory. What feels disruptive and experimental today will quickly become the baseline expectation for how enterprises operate, shaping the future of SaaS.

The SaaSpocalypse and the concentration of value creation

The February “SaaSpocalypse” led many to predict the end of SaaS, but that prediction misreads how AI in enterprise software actually works. AI is not going to replace enterprise software and industrial automation platforms anytime soon (there’s a great blog from Andreessen Horowitz on this); those systems remain the backbone of transactions, data, and compliance. What changes is how value is extracted from them, and with it the overall pricing model.

For example, instead of 2,000 salespeople each holding a license to a CRM platform, a smaller number of specialized agents will read and write to that CRM via APIs, orchestrate workflows, and surface next-best actions; the salespeople simply review and act on their output. Enterprises still need Salesforce or SAP as databases and workflow engines, but they may need far fewer human licenses and far more agent-facing, usage-based access, compressing traditional per-seat monetization even as overall utilization of the underlying platforms goes up.
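The division of labor described above can be pictured in miniature. The sketch below uses an in-memory stand-in for a CRM API (the record fields, the idle-days threshold, and the scoring rule are all illustrative assumptions, not any vendor's schema): an agent reads opportunities, writes a note back, and surfaces a next-best action for a human to review.

```python
# Minimal sketch of an agent working a CRM through an API-style interface.
# The CRM here is an in-memory stub; a real deployment would call REST endpoints.

CRM = {
    "opp-1": {"account": "Acme", "stage": "proposal", "days_idle": 12, "notes": []},
    "opp-2": {"account": "Globex", "stage": "discovery", "days_idle": 2, "notes": []},
}

def read_opportunities():
    """Agent-facing read: fetch all open opportunities."""
    return dict(CRM)

def write_note(opp_id, note):
    """Agent-facing write: append a note to the record."""
    CRM[opp_id]["notes"].append(note)

def next_best_actions(opps, idle_threshold=7):
    """Surface stalled deals for human review (threshold is an assumption)."""
    actions = []
    for opp_id, opp in opps.items():
        if opp["days_idle"] > idle_threshold:
            actions.append((opp_id, f"Re-engage {opp['account']}: idle {opp['days_idle']} days"))
            write_note(opp_id, "agent: flagged for re-engagement")
    return actions

# A salesperson reviews the agent's output rather than clicking through the CRM.
for opp_id, action in next_best_actions(read_opportunities()):
    print(action)
```

The point of the pattern is that the CRM keeps its role as the system of record while the human interface shifts from per-seat screens to a short review queue.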

That said, we cannot ignore the extreme concentration of value in the technology and AI industry: a handful of hyperscalers control compute and storage, a small number of model providers (most of them private companies) control the most capable foundation models, and thousands of SaaS vendors compete for thinner slices of workflow logic on top.

The "winner takes most" nature of platform-based business models has never been more pronounced, and today's model holders have a structural advantage that previous platform waves did not. Wrapping AI onto an existing solution is now remarkably easy, especially as AI-native engineering makes it dramatically cheaper and faster to build tailored capabilities on top of established data and distribution. Incumbents with scale, proprietary data, and existing customer relationships can extend their moats faster than challengers can build them.

But this dynamic also signals something deeper than a competitive reshuffling. It marks a fundamental shift in where value actually lives. We are moving from tools that humans use to AI automation that delivers outcomes directly, with humans supervising at the edge rather than executing in the middle.

The implication for enterprise software is profound: the application layer that was once the primary surface of human work becomes, increasingly, the operating layer of autonomous agents. Instead of users clicking through workflows, agents perform them. Instead of software enabling people to act, software becomes the environment in which agents act on behalf of people.

This reframes the entire logic of the ‘buy vs build’ software dilemma. The strategic decision is now about which platforms are architected to support agents operating at scale, and whether the outcomes those agents deliver are ones you can govern, monetize and build a business model around.

The risk most enterprise technology strategies have not yet reckoned with is this: the same vendors that own your systems of record – your data, your workflows, your compliance logic – are now building the agent orchestration layer on top of those same foundations. If that consolidation succeeds, you will not just have a software dependency. You will have outsourced the architecture of your entire future operating model to a small number of platforms you do not control.

Domain-Specific Language Models (DSLMs) will displace general LLMs as the default enterprise intelligence layer


As SaaS evolves, the intelligence layer powering enterprise software is changing too. The assumption that a handful of giant general-purpose LLMs will sit at the center of every workflow is already giving way to something more practical: an ecosystem of domain-specific language models trained on the data, processes, and decision patterns of a particular industry, function, or firm.

This shift is not only about specialization. It is about fit. Recent research from NVIDIA on agentic AI argues that most agentic tasks are not broad conversational challenges but repetitive, scoped, and tightly structured operations, which makes general-purpose LLMs both operationally excessive and economically inefficient for many enterprise uses. In these settings, smaller specialized models are often sufficient and increasingly preferable because they deliver lower latency, lower compute cost, and more predictable behavior while still achieving strong performance on tool calling, instruction following, constrained reasoning, and code generation.

The economics are becoming hard to ignore. The research finds that serving a 7B model can be 10 to 30 times cheaper than serving a 70B to 175B model, while fine-tuning smaller models can take only a few GPU-hours, making it feasible to adapt domain behavior overnight rather than over weeks. It also shows that recent small models now match or exceed much larger systems on several agent-relevant benchmarks, suggesting that for enterprise workflows, capability is no longer defined by parameter count alone.
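As a back-of-envelope check on that gap, consider what it compounds to at enterprise volumes. The per-million-token costs and the monthly traffic figure below are illustrative assumptions chosen to sit inside the 10 to 30 times range the research cites, not numbers from the research itself.

```python
# Illustrative serving-cost comparison between a small and a large model.
# All dollar figures and volumes are assumptions for the sake of arithmetic.
cost_7b_per_m_tokens = 0.10    # assumed $ per 1M tokens for a 7B model
cost_70b_per_m_tokens = 2.00   # assumed $ per 1M tokens for a 70B model (20x)

monthly_tokens_m = 5_000       # assumed 5B tokens/month of agent traffic

small = cost_7b_per_m_tokens * monthly_tokens_m
large = cost_70b_per_m_tokens * monthly_tokens_m
print(f"7B:  ${small:,.0f}/month")
print(f"70B: ${large:,.0f}/month ({large / small:.0f}x)")
```

Even at these modest assumed rates, the absolute difference per workflow per month is large enough to decide make-or-buy questions on its own.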

Anthropic’s legal agent is an early signal of this direction. Rather than forcing enterprises to engineer around the limitations of a general model, the market is moving toward packaged intelligence designed to interpret domain-specific content, reason within bounded processes, and operate with the precision that high-stakes environments demand.

The strategic implication is significant. In the AI era, the moat will likely come from proprietary workflow data combined with the operational discipline to fine-tune, govern, and continuously improve models that reflect how the business actually works. Agentic systems generate exactly the kind of structured usage data needed to do this. Over time, enterprises that invest in curating their data and training specialist models around core workflows will build compounding advantages in cost, speed, and accuracy that generic models will struggle to match.

Specialized models make intelligence operational. DSLMs turn intelligence from a generic capability into a repeatable operating mechanism. And once that happens across enough core processes, the question is no longer how AI supports the enterprise, but how the enterprise itself begins to run as an adaptive system.

A new strategic imperative: autonomous enterprise 

The future of enterprise is autonomous, and we can already see its contours in how leading businesses are redesigning around agents. These are structural signals of a system that learns, adapts, and corrects in production. Put differently, an autonomous enterprise exhibits three properties at the operating model level:

  • Self-learning: the enterprise systematically treats its own operations as training data, logging agent interactions, outcomes, and exceptions and feeding them back into models, prompts, and workflows. The system improves from use rather than waiting for the next transformation program.
  • Self-adapting: workflows can be reconfigured dynamically at the system level (a new agent, a new policy, new routing logic) without re-engineering everything from scratch each time the environment shifts.
  • Self-correcting: robust feedback loops are built into everyday work so that the impact of actions is regularly surfaced, interrogated, and used to adjust decisions iteratively, shrinking the gap between learning and delivering concrete outcomes over time.
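One hedged way to picture the loop behind these properties: log every agent action with its outcome, then use the aggregate to decide when a workflow needs correction. The event shape, workflow names, and failure threshold below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AgentEvent:
    """One logged agent interaction: what it did and how it turned out."""
    workflow: str
    action: str
    outcome: str   # e.g. "success", "failure", "escalated" (assumed taxonomy)

@dataclass
class FeedbackLoop:
    events: list = field(default_factory=list)

    def log(self, event: AgentEvent):
        # Self-learning: everyday operations become data.
        self.events.append(event)

    def failure_rate(self, workflow: str) -> float:
        evs = [e for e in self.events if e.workflow == workflow]
        fails = [e for e in evs if e.outcome == "failure"]
        return len(fails) / len(evs) if evs else 0.0

    def needs_correction(self, workflow: str, threshold: float = 0.2) -> bool:
        # Self-correcting: surface workflows whose outcomes drift past a bound.
        return self.failure_rate(workflow) > threshold

loop = FeedbackLoop()
loop.log(AgentEvent("invoice-matching", "auto-approve", "success"))
loop.log(AgentEvent("invoice-matching", "auto-approve", "failure"))
loop.log(AgentEvent("invoice-matching", "auto-approve", "failure"))
print(loop.needs_correction("invoice-matching"))
```

In a real system the same logs would also feed fine-tuning sets and prompt revisions; the sketch shows only the shortest version of the loop.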

This may sound conceptual but we have seen similar shifts before. Take Amazon. In its early years, technology simply supported an online retail business. Over time, the logistics algorithms, personalization capabilities and supply chain automation became so central that the system itself became the competitive advantage. Today Amazon is a technology system that runs commerce at global scale. The system is effectively the business.

In an autonomous enterprise, systems are no longer just tools that support the organization. They become an organizational capability in their own right. The system learns, adapts, and corrects alongside the people who use it. Over time, the boundary between “the business” and “the system” collapses. The system becomes the business.

How to go from static to autonomous

For most organizations, the journey to an autonomous enterprise is an architectural and cultural shift. To navigate it successfully, autonomy should be designed backward from your strategic goal. There are four practical moves I would recommend.

1. Define your autonomy endgame

Before deploying more agents or models, leadership must answer a harder question: what is our strategic endgame in an autonomous future? Which decisions should the organization increasingly make for itself? Where must human judgment remain central? How do speed, resilience and adaptability become measurable competitive advantages?

2. Start with measurable impact

Once you are clear on where you want to go, identify one or two high-volume, well-instrumented domains where outcomes are clearly measurable so employees, customers and the Board can tangibly see the benefits. This is less about showcasing sophisticated AI and more about building organizational trust in autonomous behavior, proving that it can be safe, measurable and extended over time. Implement agentic workflows in these areas that can plan, execute and self‑evaluate within clearly defined policy boundaries.
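The "plan, execute and self-evaluate within clearly defined policy boundaries" pattern can be sketched as a small loop. Everything concrete here, the refund scenario, the policy limit, and the evaluation rule, is an illustrative assumption.

```python
# Sketch of an agentic workflow: plan -> policy check -> execute -> self-evaluate.

POLICY = {"max_refund": 100}   # assumed policy boundary

def plan(ticket):
    """Propose an action for a support ticket (toy planner)."""
    return {"action": "refund", "amount": ticket["amount"]}

def within_policy(step):
    return step["action"] != "refund" or step["amount"] <= POLICY["max_refund"]

def execute(step):
    return {"status": "done", **step}

def self_evaluate(result, ticket):
    """Did the executed action match what the ticket required? (toy check)"""
    return result["status"] == "done" and result["amount"] == ticket["amount"]

def run_workflow(ticket):
    step = plan(ticket)
    if not within_policy(step):
        return {"status": "escalated_to_human", "reason": "policy boundary"}
    result = execute(step)
    assert self_evaluate(result, ticket), "self-evaluation failed"
    return result

print(run_workflow({"amount": 40}))    # handled autonomously within the boundary
print(run_workflow({"amount": 500}))   # exceeds the boundary, escalated
```

The escalation branch is the trust-building mechanism: every action outside the boundary lands with a human, which is exactly what makes the measurable domains a safe place to start.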

3. Collapse the gap between orchestration and execution

To move toward autonomy, you must close the gap between orchestration and execution. Start by allowing agents to take constrained, reversible actions under human approval, then gradually expand their scope as confidence grows. As with any previous technological shift, the biggest barrier to becoming an autonomous enterprise is internal resistance. This is where many organizations will feel the biggest mindset shift: accepting that “the system” is not just reporting on what humans do but acting on behalf of the organization under carefully designed constraints.
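A minimal sketch of the "constrained, reversible, human-approved" starting point: every agent action carries its own undo, and nothing is applied until an approval hook says yes. The inventory example and the approval callback are illustrative assumptions.

```python
# Sketch of constrained, reversible agent actions gated by human approval.
# The action, data, and approval hook are all illustrative assumptions.

class ReversibleAction:
    def __init__(self, apply_fn, undo_fn, description):
        self.apply_fn, self.undo_fn = apply_fn, undo_fn
        self.description = description

def run_with_approval(action, approve):
    """Act only after a human approves; the undo handle survives either way."""
    if not approve(action.description):
        return "rejected"
    action.apply_fn()
    return "applied"   # undo_fn stays available for rollback

inventory = {"sku-1": 10}

action = ReversibleAction(
    apply_fn=lambda: inventory.update({"sku-1": inventory["sku-1"] - 2}),
    undo_fn=lambda: inventory.update({"sku-1": inventory["sku-1"] + 2}),
    description="Reserve 2 units of sku-1",
)

# A human sits in the loop today; a policy can replace the lambda tomorrow.
status = run_with_approval(action, approve=lambda desc: True)
print(status, inventory)
action.undo_fn()           # reversibility: roll back if the outcome is wrong
print(inventory)
```

Expanding scope then means widening what the approval callback auto-approves, not rewriting the execution path, which is how confidence grows without re-architecting.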

4. Design for continuous evolution

Autonomous enterprises are, by definition, always in motion. Treat your platform as a living ecosystem: open interfaces, modular components, and a clear way for teams across the business to contribute new agents and tools. You are designing a system that can keep redesigning itself faster than your competitors can redesign theirs.


Sergii Gorpynich
CTO & co-founder at Star

With MS degrees in engineering and business, Sergii is a lifelong student of the ways technologies enrich human lives. Sergii is a co-founding Chief Technology Officer at Star, leading our Engineering organization across the full domain of industry, technology, and service delivery offerings.
