Pieter Van Schalkwyk
CEO at XMPRO
Two seemingly contradictory perspectives on AI transformation landed this week. Understanding how they fit together unlocks something neither captures alone.
David Shapiro argues in his Substack piece that we're building "mechanical horses" when we should be building cars. We're forcing AI into human-shaped workflows rather than rebuilding workflows to suit AI capabilities. The automobile required asphalt roads, not dirt paths. Similarly, AI transformation requires new operational infrastructure.
Geoffrey Moore, writing today on LinkedIn, offers what appears to be the opposite advice: AI overlays onto existing infrastructure; don't rewrite foundational systems. Systems of record and systems of engagement are mission-critical context. There is no prize for doing them wonderfully, but there are severe penalties for making a mistake.
So which is it? Rebuild everything around AI capabilities? Or preserve existing infrastructure and layer AI on top?
The answer is both. They're talking about different layers. And understanding this distinction is precisely what industrial operations leaders need to navigate AI transformation without either missing the opportunity or destroying what works.
Moore's Insight: The Systems Layer
Moore's framework distinguishes three system types that have accumulated over decades of enterprise IT investment:
- Systems of record automate the back end of transaction workflows, massively improving supply chain productivity
- Systems of engagement reach out to the front end, making a dent in delivery chain productivity
- Systems of intelligence (where generative and agentic AI live) overlay onto both, removing friction we previously assumed was intractable
His key insight: AI adds another layer to the stack. It does not replace the foundational layers.
This matters because systems of record and engagement are mission-critical context. They don't differentiate your operations, but they must not fail. Rewriting them carries severe penalties for mistakes and no prize for doing them wonderfully.
For industrial operations, this translates directly. Your SCADA systems, operational historians, alarm management frameworks, and safety instrumented systems represent decades of hardened infrastructure. They work. They're integrated. They're understood by your operations and IT teams.
Moore is right. Don't tear them out.
The Missing Layer: Semantic Understanding
But Moore's framework has a gap that matters enormously for industrial operations. He assumes systems of intelligence can simply overlay onto systems of record and engagement. For enterprise transactions, this works. The data in CRM and ERP systems already carries semantic meaning. A purchase order is a purchase order. A customer record is a customer record.
Industrial operational data is different. A SCADA system records that tag PI-2047 read 347.2 at 14:32:07. That's transactional data. But what does it mean? Is 347.2 normal? Concerning? Critical? The answer depends on what PI-2047 measures, what equipment it's attached to, what operational mode the unit is in, what happened in the previous shift, and what the downstream implications might be.
Human operators spend years building this semantic understanding. They learn that "when PI-2047 trends above 340 while FIC-2103 is in manual and the ambient temperature exceeds 95°F, watch for cavitation in P-2047A." This contextual knowledge transforms raw transactional data into operational meaning.
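The heuristic above could, in principle, be captured as an explicit rule an agent evaluates against live context. A minimal sketch, assuming a simple dictionary-based context snapshot (tag names come from the example; the thresholds, data structure, and rule format are illustrative, not XMPro's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class ProcessContext:
    """Snapshot of the context a human operator implicitly holds."""
    tag_values: dict[str, float]   # current readings, e.g. {"PI-2047": 347.2}
    control_modes: dict[str, str]  # loop modes, e.g. {"FIC-2103": "manual"}
    ambient_temp_f: float          # ambient temperature in Fahrenheit

def cavitation_risk(ctx: ProcessContext) -> bool:
    """Encodes the operator rule: PI-2047 above 340 while FIC-2103 is in
    manual and ambient temperature exceeds 95°F means watch P-2047A."""
    return (
        ctx.tag_values.get("PI-2047", 0.0) > 340.0
        and ctx.control_modes.get("FIC-2103") == "manual"
        and ctx.ambient_temp_f > 95.0
    )

ctx = ProcessContext(
    tag_values={"PI-2047": 347.2},
    control_modes={"FIC-2103": "manual"},
    ambient_temp_f=97.0,
)
print(cavitation_risk(ctx))  # True: all three conditions hold
```

The point is not the rule itself but what it depends on: the raw tag value alone cannot answer the question. The answer requires control mode, ambient conditions, and equipment relationships, which is exactly the context a semantic layer must supply.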
For Agentic Operations (the capability XMPro enables), we need to add a semantic layer to Moore's stack. Agent networks cannot reason about operational situations if they only have access to raw tag values. They need data infrastructure that provides:
- Contextual relationships between data points and physical equipment
- Operational meaning that connects values to conditions and consequences
- Historical patterns that inform what "normal" looks like across different operating modes
- Causal understanding of how variables interact and influence outcomes
This semantic layer is part of the "asphalt" that Shapiro's framework demands. Raw data access is the dirt path. Semantic understanding is the paved road that enables agent networks to reason at speed.
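One way to picture the dirt path versus the paved road: a raw historian reading next to the same reading enriched with the context an agent needs to reason. A hypothetical sketch under stated assumptions (the field names and `assessment` logic are illustrative, not XMPro's data model):

```python
from dataclasses import dataclass, field

# Dirt path: what the historian actually stores.
raw_reading = {"tag": "PI-2047", "value": 347.2, "timestamp": "2025-06-01T14:32:07Z"}

@dataclass
class SemanticReading:
    """Paved road: the same reading plus the context agents reason over."""
    tag: str
    value: float
    timestamp: str
    equipment: str                      # physical asset the tag belongs to
    measurement: str                    # what the value actually measures
    operating_mode: str                 # current mode of the unit
    normal_range: tuple[float, float]   # "normal" for this mode, from history
    downstream_assets: list[str] = field(default_factory=list)

    def assessment(self) -> str:
        lo, hi = self.normal_range
        if not lo <= self.value <= hi:
            return (f"{self.tag} ({self.measurement} on {self.equipment}) "
                    f"is outside its {self.operating_mode} norm [{lo}, {hi}]")
        return f"{self.tag} is within its {self.operating_mode} norm"

reading = SemanticReading(
    **raw_reading,
    equipment="P-2047A",
    measurement="discharge pressure",
    operating_mode="steady-state",
    normal_range=(280.0, 340.0),
    downstream_assets=["E-2110"],
)
print(reading.assessment())
```

The raw record answers "what was the value?"; the semantic record answers "what does the value mean here, now, for this equipment?", which is the question agent reasoning actually depends on.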
The XMPro Agentic Operations Platform with DataStreams provides this semantic infrastructure. It doesn't replace SCADA systems or historians. It adds the contextual layer that transforms transactional data into operational intelligence agents can actually use. The foundational systems remain stable. The semantic layer makes them meaningful to agents.
Shapiro's Insight: The Operational Model Layer
But Shapiro is addressing something different. He's not talking about technical infrastructure. He's talking about how work gets organized around that infrastructure.
When automobiles arrived, people didn't need mechanical horses that walked on four legs. They needed cars. But cars required asphalt roads, not dirt paths. The disruption wasn't the vehicle alone. It was the vehicle plus the infrastructure reorganized around vehicle capabilities.
Shapiro identifies the "Mechanical Horse Fallacy" in current AI implementations. We're forcing AI into human-shaped workflows ("jobs") rather than rebuilding the workflows to suit AI. We expect "drop-in remote workers" that can be onboarded and culturally assimilated like humans. When AI can't seamlessly replace a human employee in one-to-one fashion, we conclude the technology has stalled.
This view misses the forest for the trees. As Shapiro points out, "A job is essentially a bundle of tasks, context, and responsibilities aggregated for a single human. AI does not replace jobs. It unbundles tasks."
The transformation requires breaking down these bundles and determining which value streams can be addressed by agent networks versus human judgment. This is an operational model problem, not a technical infrastructure problem.
The Synthesis: Keep Your Foundations, Rebuild Your Operations
Here's what neither Moore nor Shapiro explicitly states, but what industrial operations leaders desperately need to understand:
You can keep your technical infrastructure stable while completely reimagining your operational model.
The "asphalt" required for industrial AI isn't about replacing SCADA systems and historians. It's about:
- Semantic layers that transform raw transactional data into operational meaning agents can reason about
- Governance frameworks that operate at machine speed rather than quarterly audit cycles
- Organizational structures where humans orchestrate agent networks rather than execute work themselves
- Workflow designs unbundled around human and agent strengths rather than traditional job definitions
- Objective functions that guide agent behavior rather than procedures that constrain human execution
This resolves the real fear executives carry into AI transformation discussions. "Do I need to rip and replace my operational technology stack?" No. "Can I achieve transformation by simply adding copilots to existing workflows?" Also no.
The mechanical operator fallacy is about operational models, not technical systems.
The Mechanical Operator in Industrial Clothing
Walk through most AI demonstrations for industrial operations and you'll see Moore's systems of intelligence being applied to Shapiro's mechanical horse problem.
An AI copilot assists human operators. A chatbot answers questions about equipment. A system monitors the same dashboards humans watch, generating alerts at roughly the same pace humans would escalate issues. The AI layers onto existing infrastructure exactly as Moore describes. But it preserves human-shaped workflows exactly as Shapiro warns against.
This is the mechanical operator. We've taken the form of human work and attempted to replicate it with artificial intelligence. The AI sits in a virtual control room. It watches the same screens. It follows the same sequential decision patterns. It operates within the same time horizons as the humans it augments.
The results are predictable: marginal productivity improvements, enthusiastic pilot programs, and disappointing scale-up economics. The technology layers cleanly onto existing systems. But the operational model remains constrained by human-shaped assumptions.
What Unbundling Actually Looks Like
Consider what a reliability engineer role actually contains. It bundles pattern recognition across equipment data, documentation of findings, coordination with maintenance and production, analysis of root causes, exception handling for novel situations, and knowledge transfer to less experienced team members.
Each element has different characteristics:
- Pattern recognition across hundreds of parameters is something agents can do continuously at scales humans cannot match
- Documentation and knowledge capture can happen automatically as agents work
- Coordination between maintenance, production, and quality can occur at machine speed without meetings
- Exception handling for truly novel situations requires human cognition that current AI cannot replicate
- Strategic trade-offs between competing objectives need human judgment about organizational priorities
The mechanical operator approach treats this bundle as indivisible. It asks: "Can AI assist the reliability engineer?" The answer leads to copilot implementations that augment rather than transform.
The unbundling approach asks different questions. Which value streams can agent networks address more effectively? Which genuinely require human judgment? How do we reorganize operations around this distinction while preserving the technical infrastructure that already works?
From Execution to Orchestration
This unbundling points toward a fundamental shift. In traditional operations, humans execute work and coordinate with each other. In agentic operations, agents execute work while humans orchestrate agent networks.
Moore describes this shift in transaction processing terms. Generative AI "eats complexity for lunch" by engaging with fuzzy front-end interactions. Agentic AI "eats latency for lunch" by chaining deterministic workflow steps faster than humans navigating inboxes and approval chains.
For industrial operations, the implications run deeper. Agent networks don't just accelerate existing work patterns. They enable operational models that human execution could never achieve.
Traditional operations require three to four operators per shift, plus engineers on call. That's 12 to 16 skilled people for 24/7 coverage of a single unit. This structure exists because humans coordinate through formal channels, process information sequentially, and require rest between shifts.
Agent networks operating on existing SCADA and historian infrastructure can coordinate continuously at machine speed. They process thousands of parameters simultaneously. They don't require shift handoffs or coordination meetings.
The human reliability team (now perhaps two to three people instead of a dozen) doesn't execute the pattern recognition, documentation, or routine coordination. They define objective functions guiding agent behavior. They monitor performance against these objectives. They adjust agent parameters when performance drifts. They handle genuinely novel situations exceeding agent capabilities.
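"Defining objective functions" rather than executing work can be made concrete with a small sketch. Here, assuming a weighted multi-objective score over normalized metrics (the objective names, weights, and normalization are hypothetical, not a MAGS construct), humans orchestrate by setting the weights; agents optimize their actions against the resulting score:

```python
# Hypothetical objectives a reliability team might define to guide agents.
# Each metric is assumed normalized to [0, 1], higher is better
# (cost-type metrics would be inverted before scoring).
OBJECTIVES = {
    "availability": 0.5,
    "energy_efficiency": 0.3,
    "maintenance_cost": 0.2,
}

def objective_score(metrics: dict[str, float]) -> float:
    """Combine normalized metrics into one score guiding agent behavior."""
    return sum(weight * metrics[name] for name, weight in OBJECTIVES.items())

# Orchestration, not execution: humans adjust weights when priorities
# shift, monitor this score, and intervene when it drifts.
score = objective_score(
    {"availability": 0.98, "energy_efficiency": 0.85, "maintenance_cost": 0.7}
)
print(round(score, 3))  # 0.885
```

This is the difference between a procedure ("do step 3 after step 2") and an objective ("maximize this score within your constraints"): the former prescribes execution, the latter delegates it.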
This is the automobile, not the mechanical horse. The technical infrastructure (Moore's systems of record and engagement) remains stable. The operational model (Shapiro's workflow organization) transforms completely.
The Governance Infrastructure Gap
Moore notes that AI overlays "remove sources of friction that we had previously assumed were intractable." For industrial operations, the most intractable friction isn't technical integration. It's governance at machine speed.
Traditional governance operates through periodic review: quarterly audits, weekly compliance checks, monthly performance assessments. This cadence assumes human execution speed. It accepts that governance lags behind operations.
Agent networks operating continuously at machine speed cannot wait for quarterly audits. The gap between action and review creates unacceptable risk. Organizations attempting to run agent networks on traditional governance frameworks face an impossible choice. Either slow agents to human speed (eliminating the transformation benefit) or accept governance gaps that security and compliance teams rightly reject.
This is the "asphalt" that industrial AI requires. Not new SCADA systems. Not rewritten historians. Governance infrastructure designed for machine-speed operations.
The Deontic Framework we've implemented in XMPro MAGS provides embedded behavioral constraints:
- Obligations that agents must fulfill
- Prohibitions that agents cannot transgress
- Permissions defining allowed actions
- Conditional duties that depend on operational context
These aren't post-hoc audits. They're structural constraints shaping agent reasoning before actions occur. The agents operate on existing operational technology infrastructure. But the governance layer enables operational models that human-paced oversight could never support.
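A constraint layer of this kind can be pictured as a gate that every proposed action must pass before execution, rather than an audit that runs afterward. A minimal illustrative sketch (the rule names, structure, and thresholds are hypothetical, not the actual MAGS Deontic Framework implementation):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    name: str
    context: dict  # operational context at decision time

@dataclass
class DeonticRule:
    kind: str          # "obligation", "prohibition", "permission", "conditional"
    description: str
    applies: Callable[[ProposedAction], bool]  # does this rule fire here?
    allows: Callable[[ProposedAction], bool]   # is the action permitted?

def check(action: ProposedAction, rules: list[DeonticRule]) -> tuple[bool, list[str]]:
    """Evaluate constraints *before* the action executes, not post hoc."""
    violations = [r.description for r in rules
                  if r.applies(action) and not r.allows(action)]
    return (not violations, violations)

rules = [
    DeonticRule(
        kind="prohibition",
        description="Never adjust setpoints while the unit is in startup",
        applies=lambda a: a.name == "adjust_setpoint",
        allows=lambda a: a.context.get("mode") != "startup",
    ),
    DeonticRule(
        kind="conditional",
        description="Escalate to a human when confidence is below 0.8",
        applies=lambda a: True,
        allows=lambda a: a.context.get("confidence", 0.0) >= 0.8,
    ),
]

action = ProposedAction("adjust_setpoint", {"mode": "startup", "confidence": 0.95})
ok, why = check(action, rules)
print(ok, why)  # False ['Never adjust setpoints while the unit is in startup']
```

Because the check runs before the action, the gate operates at the same machine speed as the agents it governs, which is what a quarterly audit cycle can never do.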
Building Roads, Not Rewriting Foundations
Industrial operations actually have an advantage Moore doesn't mention. Decades of SCADA systems, operational historians, alarm management frameworks, and safety instrumented systems have built mission-critical infrastructure that works. This foundation can be leveraged rather than replaced.
XMPro MAGS agents access operational data through XMPro DataStreams, the same integration layer already connecting 200+ enterprise and industrial systems in production environments. If a DataStream already connects to a system, MAGS agents can immediately interact with it. No custom integration work. No protocol translation. No new security vulnerabilities to manage.
This is Moore's insight applied specifically to industrial operations. The systems of record remain stable. The systems of engagement remain stable. The systems of intelligence layer on top, accessing existing infrastructure through proven integration.
But the operational model transforms completely. Agent networks coordinate at machine speed. Humans shift from execution to orchestration. Governance embeds into agent architecture rather than lagging behind in periodic reviews.
Keep your foundations. Rebuild your operations.
The Path Forward
The path forward requires honest assessment of what you're actually building. Ask yourself:
- Are you layering AI onto existing infrastructure (Moore) while preserving human-shaped workflows (the mechanical operator fallacy)?
- Are you attempting to rebuild technical infrastructure when operational model transformation would deliver the value?
- Are you measuring success against human productivity baselines, or against outcomes that were previously impossible?
- Have you built governance infrastructure for machine-speed operations, or are you constraining agent capabilities to fit human-paced oversight?
The gap between AI capability and economic impact will close. But it won't close through better mechanical operators layered onto stable infrastructure. It will close when organizations stop forcing AI into human-shaped workflows and start rebuilding operational models around what agent networks can actually do.
Moore is right: don't rewrite your foundational systems. Shapiro is right: stop building mechanical horses.
The synthesis: Build the semantic, governance, and operational infrastructure that enables fundamentally different operations on top of the technical infrastructure you already have.
The technology is ready. The foundational systems are ready. The question is whether your organization will build the roads.
Pieter van Schalkwyk is the CEO of XMPro, specializing in industrial AI agent orchestration and governance. XMPro MAGS with APEX provides the cognitive architecture and governance infrastructure for agent networks operating on existing industrial systems.
Our GitHub Repo has more technical information if you are interested. You can also contact me or Gavin Green for more information.
Read more on MAGS at The Digital Engineer
