Module 1 of 7

The Production Function Is Changing

Why AI is the first technological shift in decades that genuinely changes how companies produce output — and what that means for your operating model.

Module 1 — 90-second video overview

Why this module matters

If you take only one idea from this course, take this one: AI is the first technological shift in decades that changes the production function itself. Not the tools inside it. Not the speed at which it runs. The production function — the system that converts your inputs into your outputs.

Almost every mistake we see organisations make with AI traces back to missing this distinction. They treat AI like cloud, or like mobile, or like the early internet — as a faster, cheaper, better way to do what they already do. And so they end up with a chatbot here, a copilot there, a few promising pilots — and an operating model that is fundamentally unchanged. They then conclude, often within 18 months, that AI is "more incremental than people thought." This is wrong. The technology is fine. The problem is the implementation: bolting AI onto a system that was never designed for it.

This module gives you the framing you'll use for the rest of the course.

Every company is a production function

Every company, no matter what it does, is a system that converts inputs into outputs.

  • Inputs: information, labour, capital, raw materials, customer signals, regulatory requirements.
  • Outputs: decisions, transactions, products, services, customer experiences, risk positions.

This is a deliberately simple framing because it forces you to look past the org chart, past the technology stack, past the brand — and ask: what does the actual conversion machine look like? Where does information enter? Where does it become a decision? Where does that decision become an action? Who does what, and why?

For the better part of the last several decades, this conversion machine has been human-centric. Software has played a supporting role: it stored data, moved data around, automated narrowly-defined tasks. The actual interpretation, judgment, and execution — the load-bearing parts of the production function — sat with people. Even the major technological shifts of the last 30 years preserved this. The internet expanded access. Mobile expanded reach. Cloud expanded scale. None of them changed the underlying loop: humans at the centre, software as enabler.

AI is the first thing that begins to break this pattern.

What AI does that previous tech didn't

The reason AI is different is not that it is "smarter" than previous software. It is that it does three things that previous software fundamentally did not.

It generates information, rather than just storing or retrieving it. A model can produce a draft, a summary, a recommendation, an analysis — new artefacts, on demand, that didn't exist before the model produced them. Previous software couldn't do this. Spreadsheets calculate; databases retrieve; search engines find. None of them generate.

It proposes actions, rather than waiting for instructions. An AI system can look at incoming data and decide that something should happen — a case should be escalated, a customer should be reached, a parameter should be adjusted. That's a fundamental change from "the user clicks a button" to "the system knows what to do next."

It participates in decisions, rather than just supporting them. Risk scoring, lead prioritisation, customer routing, fraud flagging, regulatory filtering — these are increasingly model-driven decisions, not human-judgement-with-supporting-data. The decision moves from the human into the system, with the human moving toward an oversight or exception-handling role.

These three shifts — generation, proposal, participation — are why AI is structurally different. They change who or what sits at the centre of the production function. And once you change that, everything around it has to change too.

The augmentation trap

When organisations first encounter AI, they almost always start in augmentation mode.

A support agent gets suggested replies. A salesperson gets AI-prepared call notes. An engineer gets a coding copilot. A compliance officer gets pre-screened alerts. These are real gains. They are also misleading, because they make the change feel incremental — and they let the rest of the organisation continue to operate as if nothing structural has happened.

The trouble with augmentation is that it doesn't compound. A 30% efficiency gain on the augmented task is real, but it stays local. It doesn't change the workflow that the task sits inside. It doesn't change the data the workflow depends on. It doesn't change the decisions that depend on the workflow's outputs. The system stays the same shape; one of its components got faster.

This is the trap most enterprise AI initiatives are stuck in today. They are full of useful pilots and modest efficiency gains, and they will continue to be full of useful pilots and modest efficiency gains for as long as the underlying production function remains untouched.

What changes when the production function changes

Now consider the alternative. Imagine that instead of adding AI to the existing workflow, you redesign the workflow on the assumption that AI is a native capability. Information arrives. The system processes it continuously. Outputs are generated by default. Humans intervene where their judgment, accountability, or context adds something the system can't.

In this version of the world:

  • Sequential steps become parallel, because the system isn't waiting for humans to advance the work.
  • Hand-offs disappear, because the data the next step needs is already where it needs to be.
  • Routine decisions become defaults, generated by the system, with human review compressed to a fast approval gesture.
  • Human attention concentrates at exception points and judgment-bound decisions, where it actually adds value.
  • Roles change shape. The "operator" job becomes something more like "exception handler and feedback curator." The "manager" job becomes something more like "system supervisor."

This is what we mean by an AI-native production function. It is not faster human work. It is different work, done by a different combination of humans and systems.

Why this is harder than it sounds

If the playbook is so clear, why isn't every enterprise already doing this?

The honest answer is that this work runs into the hardest parts of running a company. It requires rewriting processes that exist for real reasons. It requires aligning multiple stakeholders. It requires undoing years of incremental optimisation. It requires changing roles, decision rights, performance metrics, and career paths. It requires investing in data infrastructure that nobody outside the immediate team will see for months. And it requires a level of executive sponsorship that most organisations do not have.

Augmentation, by contrast, requires almost none of that. You buy a tool, you give it to a team, you measure the local gain, you move on. So most organisations are doing augmentation. And most organisations are wondering why their AI initiatives are not adding up to anything structural.

This is the gap between the companies that will compound advantage over the next decade and the ones that won't. The constraint is not the capability of the models. The constraint is whether the organisation is willing to redesign its production function around them.

The rest of this course is about how to do that work — concretely, in financial services and other regulated environments — without losing your stakeholders along the way.

What's next

In Module 2, we will get specific about what an AI-native workflow looks like in practice — using real banking and insurance examples — and you will learn the diagnostic question we use to tell the difference between augmentation and genuine redesign in any workflow.

Module Quiz

5 questions — Pass mark: 60%

Q1. What is meant by a company's 'production function' in the context of AI enablement?

Q2. Why did previous technology shifts (cloud, mobile, internet) NOT fundamentally change the production function?

Q3. What is the most common failure mode of enterprise AI today?

Q4. In an AI-native operating model, what happens to the role of humans?

Q5. Why is it harder for incumbents to adopt AI enablement than for AI-native challengers?