
The Production Function Is Changing — And Most Operating Models Haven't Caught Up

April 06, 2026

There is a useful question to ask about any company, whatever it does, that very few leaders ask explicitly: what is your production function, and how is it changing?

A production function is the system that converts inputs into outputs. Information becomes decisions. Labour becomes products and services. Capital becomes growth. Every company has one. Every company has been built around assumptions about how it works. And right now, for the first time in several decades, the underlying assumptions are starting to break.

This is not a hot take. It is the central observation that determines whether your AI enablement strategy is actually a strategy or just a tools roadmap. Most enterprises are operating with a production function that was designed for a world in which humans drove every meaningful decision and software was an enabler. AI is now starting to take over parts of that loop in a way that previous technology shifts did not — and the operating models, processes, role designs, governance structures, and incentives that worked under the old assumptions stop working under the new ones. The institutions that internalise this earliest will design themselves around the new production function. The ones that don't will keep retrofitting the new technology onto the old shape and wondering why the gains don't compound.

This post is about why the production function is changing, why previous tech shifts didn't change it but this one does, and what that means for operating model design over the next decade. It is the structural argument behind our AI Enablement service and the foundation for everything else we publish on this topic.

Every company is a production function

Start with the basic framing. A company, viewed from a sufficient distance, is a system that takes inputs and turns them into outputs.

  • Inputs: customer signals, regulatory requirements, capital, labour, raw materials, market data, internal information.
  • Outputs: decisions, transactions, products, services, customer experiences, risk positions, financial results.

The company exists because the conversion is more efficient inside an organised structure than outside one. This is the basic Coase insight, and it has been the foundation of how leaders think about competitive advantage for half a century. Different companies are good at different parts of this conversion. Some are good at converting customer signals into product decisions. Some are good at converting capital into operational scale. Some are good at converting regulatory requirements into compliance posture. The shape of the production function — the way the conversion happens — is a substantial source of strategic differentiation.

For the better part of the last several decades, the shape of this production function has been human-centric. Software has played a supporting role: it stored data, moved data around, automated narrowly-defined tasks. The actual interpretation of information, the judgment about what to do, the execution of the decisions — the load-bearing parts of the production function — sat with people. This was true at the largest banks and at the smallest retailers, in different ways but with the same fundamental shape.

Even the major technological shifts of the last 30 years preserved this basic structure. The internet expanded access to information, customers, and markets — but it did not change who interpreted the information or who made the decisions. Mobile expanded the locations from which work could be done — but it did not change who was doing the work. Cloud expanded the scale at which software could run — but it did not change the underlying assumption that software was an enabler of human decision-making rather than a substitute for it.

Each of these shifts was transformative. Each of them produced winners and losers at the scale of entire industries. None of them changed the fundamental shape of the production function. Humans were still at the centre of every meaningful loop. Software was still the supporting infrastructure.

AI is the first thing in this sequence that begins to break the pattern.

What AI does that previous tech shifts didn't

The reason AI is structurally different is not that it is "smarter" than previous software. It is that it does three things previous software fundamentally did not.

It generates information, rather than just storing or retrieving it. A model can produce a draft, a summary, a recommendation, an analysis, a classification — new artefacts, on demand, that did not exist before the model produced them. Spreadsheets calculate what they're told to calculate. Databases retrieve what's been stored. Search engines find what already exists. None of them generate. AI does.

It proposes actions, rather than waiting for instructions. An AI system can look at incoming data and decide that something should happen — a case should be escalated, a customer should be reached, a parameter should be adjusted, a transaction should be flagged. That is a fundamental change from "the user clicks a button" to "the system knows what to do next."

It participates in decisions, rather than just supporting them. Risk scoring, lead prioritisation, customer routing, fraud flagging, regulatory filtering — these are increasingly model-driven decisions, not human-judgment-with-supporting-data. The decision moves from the human into the system, with the human moving toward an oversight or exception-handling role.

These three shifts — generation, proposal, participation — are why AI is structurally different. They change who or what sits at the centre of the production function. And once you change that, everything around it has to change too.

This is the part most enterprise AI strategies miss. They treat AI like it is cloud, or mobile, or the early internet — as a faster, cheaper, better way to do what they already do. So they layer it onto existing workflows, measure local efficiency gains, and conclude after 18 months that "AI is more incremental than people thought." This is wrong. The technology is fine. The problem is that it is being applied to a system that was never designed for it.

The operating model lag

Here is the core problem in a single sentence: the technology is moving faster than the operating model can absorb it, and most enterprises are pretending otherwise.

When the production function changes, the operating model needs to change to fit. Roles need to be redesigned. Decision rights need to be redrawn. Governance needs to be restructured. Performance metrics need to be recalibrated. Career paths need to be rebuilt. Hiring profiles need to be updated. The data infrastructure that feeds the new production function needs to be re-architected for action rather than reporting. Almost none of this is happening at the speed the technology is moving.

The result is a widening gap between what AI is capable of doing and what enterprise operating models are organisationally able to receive. You can have a perfect model deployed into a workflow whose decision rights, governance, data layer, and role design were built for a different era — and the model will produce local efficiency, then plateau, because everything around it is fighting the new pattern.

The institutions that close this gap fastest will operate AI-native production functions in 2030. The institutions that don't will operate the same legacy production function with AI bolted on the edges, indistinguishable from one another, competing on the same dimensions as before. The first group will have structural advantage. The second group will have augmentation gains and a flat cost-to-income ratio.

What an AI-native production function actually looks like

When the production function is rebuilt around AI as a native capability, the shape changes in recognisable ways:

Information flows continuously rather than in batches. The system processes inputs as they arrive, not in nightly cycles. Data is captured at the point of action, standardised at capture, and available for downstream decisions in seconds.

Decisions are generated by default. The system produces the recommended action for most cases. Humans intervene where their judgment, accountability, or context adds something the system can't.

Hand-offs become unnecessary. Many traditional workflow hand-offs exist because one team has information another team needs. When the system can route information automatically, the hand-off can be eliminated entirely.

Sequential steps become parallel. Work that used to be serialised because each step needed human attention can now run in parallel because the system isn't waiting.

Human work concentrates at exception points. The operator role shifts from "do the work" to "handle the cases the system passed up and curate the feedback that improves the system." The work is harder and more consequential, but there is less of it per unit of throughput.

Roles change shape. Team leads become system supervisors. Operators become exception handlers. New roles emerge for workflow design, feedback curation, and embedded second-line risk partnership.

The operating loop becomes continuous. Instead of quarterly reviews of how the function is performing, you have real-time monitoring of how the system is performing — with the human and the system in a continuous feedback relationship.
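The pattern running through the list above (decisions generated by default, humans concentrated at exception points, human decisions captured as feedback) can be sketched in a few lines. This is an illustrative toy, not a reference implementation: the scoring rule, the 0.8 confidence threshold, and the case fields are all invented stand-ins for a real model, a calibrated cut-off, and real case data.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Case:
    case_id: str
    features: dict
    outcome: Optional[str] = None

@dataclass
class OperatingLoop:
    """Decide-by-default loop: the system proposes an action for every
    case and routes only low-confidence cases to a human operator."""
    score: Callable[[dict], float]                # stand-in for a real model
    threshold: float = 0.8                        # hypothetical confidence cut-off
    feedback: list = field(default_factory=list)  # human decisions, kept as training signal

    def process(self, case: Case, human_review: Callable[[Case], str]) -> Case:
        confidence = self.score(case.features)
        if confidence >= self.threshold:
            case.outcome = "auto_approved"         # system acts by default
        else:
            case.outcome = human_review(case)      # exception goes to a person
            self.feedback.append((case.case_id, case.outcome))  # curate feedback
        return case

# Toy scoring rule standing in for a real model.
loop = OperatingLoop(score=lambda f: 0.95 if f["amount"] < 1000 else 0.4)

routine = loop.process(Case("c1", {"amount": 250}), human_review=lambda c: "approved")
exception = loop.process(Case("c2", {"amount": 50_000}), human_review=lambda c: "escalated")
```

The design choice worth noting is the `feedback` list: every human decision on an exception is captured as a labelled example, which is what turns the quarterly review cycle into the continuous operating loop described above.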

This is recognisably different from the augmented version of the same workflow. If you removed the AI tomorrow, this production function would not simply run slower with humans doing the same steps; it would have to be rebuilt from scratch around human operators. That is the test for whether a production function has actually been redesigned.

Why this is harder than it looks

If the playbook is becoming clearer, the obvious question is why more enterprises aren't further along. The answer is that this work runs into the hardest parts of running a company.

Most processes exist for a reason. They've been optimised over time, shaped by constraints, embedded into how teams operate. Rewriting them is not just a technical exercise — it's an organisational one. It requires aligning multiple stakeholders, redefining roles, undoing years of incremental optimisation. Augmentation, by contrast, requires almost none of that. So most enterprises default to augmentation.

The data layer is the binding constraint. Most enterprise data is captured for reports, not for action. Rebuilding it for action is unglamorous, expensive, politically thankless, and produces benefits that are not immediately visible. So it gets delayed, deprioritised, or scoped down.

The talent shift is structural. The roles that AI-enabled organisations need don't exist in most companies yet. Workflow designers, system supervisors, exception handlers, feedback curators, embedded second-line risk partners. These have to be designed, hired or grown, and integrated into the operating model — and most enterprises haven't started.

Culture is the underestimated variable. Companies that make real progress here operate with disciplined cadence, run production experiments under clear governance and control, accept that early versions will be imperfect, and optimise for learning with full evidence and audit trail. Companies that struggle split into two camps: pilots in isolated environments that never touch real production risk (comfortable for risk-averse cultures but almost never structurally transformative), or production experimentation without the governance scaffolding to defend the work under supervisory review. Neither compounds. The firms that compound treat governance as the infrastructure that makes disciplined production learning possible.

These frictions are addressable. None of them is a reason to wait. They are reasons why the institutions that start this work earliest will build a compounding advantage their competitors cannot close.

The startup angle

The hardest thing for incumbent leaders to hear is that AI enablement is structurally easier for startups than for them. AI-native fintechs, insurtechs, and platform companies can design workflows, data layers, decision rights, and operating models from scratch. They don't pay the organisational tax of unwinding decades of optimisation around a different production function. Their cost structures, cycle times, and data flywheels will compound faster than incumbents can match.

The implication is not that incumbents should give up. The implication is that incumbents should start the structural work earlier to compensate for the disadvantage. Every quarter spent in augmentation mode is a quarter the gap widens. Every quarter spent in serious enablement work is a quarter the gap closes. For most large institutions, the realistic frame is: we cannot and should not try to match a fintech's pace of change — we are operating under PRA, FCA, and customer obligations that a challenger does not yet carry — but we can be substantially ahead of our incumbent peers, and that is the competitive comparison that actually matters.

The institutions which start now will have a 24-month head start over their peers in two years. That head start compounds.

Where to start

Three concrete starting points if this argument lands:

1. Take the AI Enablement Maturity Diagnostic. The diagnostic scores your organisation across the five pillars of AI enablement — production function, data layer, decision systems, operating model, and governance — in 25 questions, and produces a per-pillar breakdown plus prioritised recommendations.

2. Audit one specific AI initiative against the compounding test. Pick any pilot in your portfolio and run the AI Pilot Compounding Audit — 10 questions, 90 seconds, no email required. The verdict (Kill / Rebuild / Hold / Scale) will tell you whether that specific initiative is structurally positioned to compound.

3. If you lead operations, take the AI Enablement for Operations Leaders course. Module 1 of the course goes deep on the production function shift with worked examples. The full seven modules walk through the workflow redesign framework, decision rights, the data layer, talent and operating model design, and the 12-36 month sequenced roadmap.

For a comprehensive view, see the pillar essay on what AI enablement actually means, the FS sector playbook, and our AI Enablement service.

The companies that will lead their industries in 2030 are the ones rebuilding their production functions around AI in 2026. The work is hard, the timeline is long, and the political muscle takes years to build — which is exactly why the best time to start is now.

Ready to do the structural work?

Our AI Enablement engagements are built around the five pillars in this article. We start with a focused diagnostic, then redesign one priority workflow end-to-end as proof — including the data layer, decision rights, and governance machinery.

Explore the AI Enablement service
