Beyond the ‘AI-Centric Imperative’: Why the Next Frontier Is Orchestration

From AI-Centric to Orchestration-Native | PART 1 / 8


If you work in enterprise software or supply chain, you’ve probably read some version of the same message by now:

AI agents are coming. Products, pricing, go-to-market, operations, infrastructure, and talent will all have to adapt.

McKinsey’s recent “AI-centric imperative” paper captures that moment well. It argues that intelligent agents will reshape how software is built and sold, how work is done, and how value is captured. It also surfaces the uncomfortable side of the story: rising inference costs, governance and risk challenges, and a widening gap between pilot experiments and production reality.

At INDUSTRY 5, we agree with the urgency – but we think “AI-centric” is only a half-step.

The real destination is what we call an orchestration-native enterprise: a system where coordinated agents continuously simulate, negotiate, and execute decisions across your demand, supply, production, logistics, and procurement networks. Intelligence doesn’t just live inside individual tools; it lives between them, as a fabric that keeps the whole system in balance.

This first article introduces that idea in plain language. The rest of the series will unpack how to get there safely.


AI inside apps vs intelligence between apps

Most AI conversations still sound like feature roadmaps:

  • “We’re adding a copilot to our planning suite.”
  • “We have a smart assistant in the TMS now.”
  • “Our ERP can generate next-best actions.”

These are useful advances. They make today’s tools easier to use and faster to navigate. McKinsey groups many of them under the banner of “AI-centric” products: applications that embed intelligence directly in workflows and interfaces.

But if each tool gets “smarter” in isolation, something else gets harder: coordination.

  • Sales has its AI.
  • Supply has its AI.
  • Logistics has its AI.
  • Finance has its AI.

Who makes sure they don’t work at cross-purposes?

An orchestration-native system starts from that question. Instead of just making every app more intelligent, it introduces an intelligent coordination layer that sits across them all and answers a different set of questions:

  • What is the best way to align all of these decisions across time, geography, and constraints?
  • How do we see the ripple effects of a choice before we commit to it?
  • How do we keep humans in control without asking them to manually reconcile everything?

In other words: not another application, but a decision fabric.


A simple idea: every decision is a flow

Under the hood, the orchestration-native model is built on a very simple insight:

Every operational decision can be expressed as a flow of something from somewhere to somewhere else in a certain window of time.

At i5, we codify that as the i5 Transactional Grammar:

Product : Quantity : Place : Time

For example:

“500 units of Component A from Plant X to Warehouse Y between May 5 and May 10.”

That single sentence contains everything an intelligent system needs to coordinate work:

  • Product – what is moving?
  • Quantity – how much?
  • Place – from where to where?
  • Time – within which window?

When every agent – whether it represents production, inventory, transport, or procurement – speaks this same grammar, coordination stops being an integration problem and becomes a reasoning problem. Agents can read the same flows, propose adjustments, and settle on a plan without brittle custom mappings between tools.
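To make the grammar concrete, here is a minimal sketch of what a shared flow record could look like. This is illustrative only – the class name, fields, and values are assumptions, not i5's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Flow:
    """One decision in the Product : Quantity : Place : Time grammar."""
    product: str       # what is moving
    quantity: int      # how much
    origin: str        # from where
    destination: str   # to where
    window_start: date # window opens
    window_end: date   # window closes

# The example sentence from the text, expressed as a structured flow:
flow = Flow("Component A", 500, "Plant X", "Warehouse Y",
            date(2025, 5, 5), date(2025, 5, 10))
```

Because every agent reads and writes the same record shape, a production agent and a transport agent can compare proposals field by field instead of translating between tool-specific formats.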

This is the first step toward orchestration-native.


Time as windows, not timestamps

The second step is to treat time the way the real world does: not as a single static date field, but as a set of possibilities.

A shipment is rarely just “due on June 10.” In reality it’s:

  • cannot leave before June 5 (no sooner than),
  • must arrive by June 10 (no later than),
  • has a set of conditions under which exceptions are allowed.

i5’s Temporal Logic models time as nested windows like these, not just fixed dates. Every change – supplier delay, port closure, demand spike – ripples through these windows. Agents can then recalculate what is still possible before anything breaks.

That alone is a big shift from traditional systems, which mostly record what has already happened. Orchestration-native systems continuously calculate what can happen next and what should happen next under changing constraints.
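One way to picture window-based time is as interval intersection: every new constraint tightens the feasible window, and the system can check whether anything is still possible before a deadline is actually missed. A minimal sketch, with hypothetical names and dates (this is not i5's Temporal Logic implementation):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Window:
    """A feasible time window: no sooner than `earliest`, no later than `latest`."""
    earliest: date
    latest: date

    @property
    def feasible(self) -> bool:
        return self.earliest <= self.latest

    def tighten(self, other: "Window") -> "Window":
        """Intersect two windows, e.g. to propagate a supplier delay downstream."""
        return Window(max(self.earliest, other.earliest),
                      min(self.latest, other.latest))

ship = Window(date(2025, 6, 5), date(2025, 6, 10))   # no sooner than / no later than
delay = Window(date(2025, 6, 8), date(2025, 6, 30))  # supplier slips three days
remaining = ship.tighten(delay)
print(remaining.feasible)  # the shipment is still possible, June 8 to June 10
```

If a later disruption pushed `earliest` past `latest`, `feasible` would flip to false – which is exactly the moment an orchestration layer should surface options rather than wait for the miss to be recorded.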


A living graph of supply, demand, and movement

Now add the third piece: a Dynamic Negotiation Graph.

Imagine your entire network – suppliers, plants, DCs, carriers, customers – as a living marketplace where:

  • Demand agents publish what they need (again, as Product : Quantity : Place : Time).
  • Supply agents publish what they can offer.
  • Movement agents publish how capacity can connect the two.

This negotiation doesn’t happen in emails or spreadsheets. It happens inside the orchestration layer itself. Agents propose options, compare cost, lead time, and carbon impact, then commit via Smart Agreements – digital commitments that can flex within clearly defined limits while keeping a full audit trail.
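The matching step inside such a negotiation can be sketched as a simple scoring problem: filter offers to those that fit the published need, then rank the survivors on cost and carbon. All names, weights, and numbers below are illustrative assumptions, not the actual negotiation protocol:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    supplier: str
    quantity: int
    cost: float
    lead_time_days: int
    carbon_kg: float

def select_offer(offers, need_qty, max_lead_time, cost_w=1.0, carbon_w=0.1):
    """Pick the best feasible offer by a weighted cost-plus-carbon score."""
    feasible = [o for o in offers
                if o.quantity >= need_qty and o.lead_time_days <= max_lead_time]
    if not feasible:
        return None  # no agreement possible within limits: escalate to a human
    return min(feasible, key=lambda o: cost_w * o.cost + carbon_w * o.carbon_kg)

offers = [Offer("Supplier 1", 500, 900.0, 4, 120.0),
          Offer("Supplier 2", 500, 850.0, 9, 300.0),
          Offer("Supplier 3", 300, 400.0, 2, 50.0)]
best = select_offer(offers, need_qty=500, max_lead_time=7)
print(best.supplier)  # Supplier 1: Supplier 2 is too slow, Supplier 3 too small
```

A real negotiation graph would iterate – agents counter-propose, split quantities, and commit via Smart Agreements – but the core move is the same: compare structured options against explicit limits, with every choice auditable.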

From the outside, you don’t see any of this math. What you see is:

  • fewer “fire drills,”
  • fewer expediting surprises,
  • and a system that offers options instead of problems.


Simulation before integration

There’s still a practical concern that McKinsey calls out clearly: the economics and risk of AI at scale. AI inference is expensive; experimentation on live data can be risky and politically sensitive.

The orchestration-native answer is to simulate first, integrate later.

At i5, that’s what our synthetic data engine, i5-SDG, is for: it generates realistic, end-to-end supply chain scenarios with no real company data at all. Teams can watch agents negotiate across demand, supply, production, and transport in a completely safe sandbox – proving value and tuning behavior before touching production.

Only after those simulations demonstrate value does the orchestration layer connect to live systems in “mirror mode,” observing flows through ERP, WMS, TMS, and CRM without changing anything. From there, it can gradually begin to drive decisions within clearly defined policies.

This is how you move beyond AI-centric safely.


From AI-centric to orchestration-native: a shift in questions

“AI-centric” thinking asks:

  • Where can we add intelligence to our existing tools?
  • How do we improve productivity in this team, this function, this workflow?

“Orchestration-native” thinking asks:

  • How do all of these decisions interact across our network?
  • What happens if we treat the enterprise as one living system, not a stack of apps?
  • How do we see, simulate, and steer that system in real time?

The building blocks are already here: a shared grammar for decisions, a richer model of time, and a negotiation graph that agents can use to work together. The difference is that, instead of scattering AI inside every tool, we give it a place to coordinate.

In the next article, we’ll go deeper into what an orchestration-native system actually looks like – without the jargon. We’ll walk through how it sits on top of your existing stack, how agents collaborate, and what it feels like for your teams to work in a world where the system is not just smart, but in tune.


NEXT: What an Orchestration-Native System Actually Looks Like (in Plain Language)