Pieter Van Schalkwyk
CEO at XMPro
My recent article on The Disaggregation of Labor argued that technology does not eliminate work. It disaggregates integrated labor into specialized functions distributed across an expanding value chain. The farmer's journey from ox to orchestrator reveals a pattern that repeats through every major technological transition, including the current one.
The response to that article surfaced a practical question I want to address directly: "Fine, disaggregation creates new roles over time. But what do I do now with my existing team?"
The context behind this question matters. Industrial organizations are not trying to cut headcount. They cannot find enough skilled people to begin with. Experienced operators are retiring faster than replacements can be trained. The talent pipeline has thinned. Institutional knowledge walks out the door every month.
The question executives actually ask is not "How many people can we replace with AI?" It is "How do we do more with the people we have, because we cannot find the skilled workers we need?"
This reframes AI from a replacement tool to a multiplication tool. The goal is not fewer people. The goal is more capability from a constrained workforce.
The Economics of Cheaper Decisions
In 1865, William Stanley Jevons documented a paradox in English coal consumption. As steam engines grew more efficient, using less coal per unit of output, total coal consumption increased. Efficiency made steam power viable for applications that were previously uneconomical. Demand expanded faster than efficiency reduced consumption per use.
The same dynamic is playing out in industrial decision-making right now.
Every industrial operation is constrained by decision capacity. Not computational capacity. Not data capacity. Decision capacity: the ability to analyze a situation, weigh options, and commit to action.
Traditional operations allocate this scarce decision capacity carefully. Critical assets get predictive maintenance programs. Secondary assets run to failure. Process optimization happens quarterly because continuous optimization costs too much in engineering time. Quality investigations focus on major deviations because investigating every anomaly would overwhelm the team.
AI agents collapse the cost of operational decisions. Analysis that required hours happens in seconds. Pattern recognition that required specialized expertise becomes available to frontline operators. Monitoring that required dedicated attention becomes continuous and automatic.
When decision costs collapse, decision volume explodes. This is Jevons Paradox applied to industrial cognition.
The organization that previously analyzed 50 critical assets now analyzes 5,000. The process optimization that happened quarterly now happens daily. The quality investigations that sampled major deviations now examine every anomaly.
None of this reduces headcount. All of it increases the demand for people who can act on what the AI surfaces.
The Multiplication Mechanism
Multiplication is not a metaphor. It describes a specific operational dynamic.
Consider a reliability engineer in a traditional model. Their value is constrained by time: hours available for analysis, assets they can physically inspect, reports they can compile. If they work harder or smarter, they might achieve incremental improvement. But the ceiling is fixed by human cognitive and temporal limits.
Now give that engineer an agent team. The agents monitor vibration signatures across every rotating asset in the facility. They correlate temperature trends with failure mode libraries. They flag anomalies ranked by predicted severity and time to failure.
The engineer's constraint shifts. They are no longer limited by how many assets they can analyze. They are limited by how many agent-surfaced situations they can resolve. Their judgment now applies across a scope that was previously impossible.
This is multiplication: the same human capability, applied across dramatically expanded scope through agent leverage.
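The monitoring-and-ranking loop described above can be sketched in a few lines. This is an illustrative toy, not XMPro's implementation: the field names, thresholds, and severity score are all assumptions (real limits would come from failure-mode libraries and standards such as ISO 10816/20816 vibration severity zones).

```python
from dataclasses import dataclass

@dataclass
class AssetReading:
    asset_id: str
    vibration_rms: float   # mm/s, illustrative units
    temperature_c: float

# Illustrative limits; real values depend on machine class and failure modes
VIBRATION_LIMIT = 7.1
TEMPERATURE_LIMIT = 85.0

def severity(reading: AssetReading) -> float:
    """Crude severity score: how far past its limits the asset is running."""
    vib_excess = max(0.0, reading.vibration_rms / VIBRATION_LIMIT - 1.0)
    temp_excess = max(0.0, reading.temperature_c / TEMPERATURE_LIMIT - 1.0)
    return vib_excess + temp_excess

def surface_for_engineer(readings, top_n=5):
    """Agents monitor every asset; only ranked exceptions reach the human."""
    flagged = [r for r in readings if severity(r) > 0]
    flagged.sort(key=severity, reverse=True)
    return flagged[:top_n]
```

The point of the sketch is the shape of the work: the agent loop scans everything, and the engineer's queue contains only the ranked residue that needs judgment.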
The multiplication creates new work in three categories:
Resolution work. Agents surface situations requiring human judgment. More agents monitoring more processes means more situations surfaced. Someone must resolve them. The volume of resolution work scales with agent deployment.
Calibration work. Agents operate on learned models and configured thresholds. Local conditions vary. Equipment ages. Process parameters shift. Someone must tune agent behavior to maintain accuracy. This calibration work did not exist before agents existed.
Exploitation work. Agents identify optimization opportunities that were invisible before. Capturing that value requires humans to validate recommendations, coordinate implementation, and verify results. More opportunities identified means more exploitation work available.
Organizations that deploy AI without staffing these categories will leave value on the table. They will deploy agents that surface opportunities no one captures, flag exceptions no one resolves, and drift out of calibration because no one is tuning them.
The automation mindset focuses on tasks handled. The multiplication mindset focuses on value captured.
Where the Humans Go
If AI handles monitoring, analysis, and routine diagnostics, what do the humans actually do?
The work shifts from execution to orchestration. Humans stop completing tasks and start applying judgment across systems.
Multiplied workforces concentrate in four areas:
- Exception handling. Every well-designed agent system includes confidence thresholds. When situations exceed those thresholds, agents escalate. Humans resolve the cases that agents cannot. This is not residual work that will eventually be automated. It is the highest-value work because it requires judgment that agents lack.
- Context translation. Agents pattern-match against training data. Humans understand when current conditions differ from historical patterns in ways that matter. A process deviation that looks routine to an agent might signal equipment degradation that an experienced operator recognizes. This translation between agent analysis and operational reality requires human expertise.
- Teaching and refinement. Agents improve through feedback. Humans provide that feedback by validating recommendations, correcting errors, and explaining why certain situations require different responses. The more agents deployed, the more teaching work required to maintain and improve their performance.
- Value capture coordination. Agents identify opportunities. Capturing value from those opportunities requires cross-functional coordination: maintenance windows aligned with production schedules, quality improvements validated against customer requirements, process changes assessed for safety implications. Humans coordinate this complexity.
None of these categories shrink as AI capability grows. All of them expand.
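The escalation mechanic behind exception handling can be expressed as a simple routing policy. Everything here is an illustrative assumption — the threshold value, the impact labels, and the rule that high-stakes actions always escalate — sketched to show the shape of the logic, not any particular product's behavior.

```python
from typing import NamedTuple

class Recommendation(NamedTuple):
    action: str
    confidence: float   # 0.0-1.0, from the agent's own scoring
    impact: str         # "low" or "high"; illustrative stakes label

# Illustrative policy value; real thresholds are tuned per process and risk class
AUTO_APPROVE_CONFIDENCE = 0.95

def route(rec: Recommendation) -> str:
    """Decide whether an agent recommendation executes or escalates."""
    if rec.impact == "high":
        return "escalate"         # high stakes always get human judgment
    if rec.confidence >= AUTO_APPROVE_CONFIDENCE:
        return "auto_execute"     # routine, high-confidence work proceeds
    return "escalate"             # everything uncertain goes to a human
```

Note the design choice: confidence alone never authorizes a high-impact action. The humans in the loop are not a fallback for weak agents; they are the fixed owners of the decisions that matter most.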
The Mindset That Multiplies
Some workers adapt to multiplication faster than others. The difference is not technical skill. It is operational mindset.
Workers who multiply effectively share certain habits:
- They treat agent outputs as starting points. An agent recommendation is the beginning of a decision process, not the end. Workers who expect agents to deliver final answers will be frustrated. Workers who expect useful first drafts will compound their productivity through rapid refinement.
- They calibrate trust appropriately. Blind trust in agent outputs creates risk. Excessive skepticism wastes agent value. Effective workers develop intuition for where agents are reliable and where verification matters. They verify high-stakes recommendations and accept low-stakes ones without friction.
- They define their value through judgment, not tasks. Some workers anchor their identity to specific tasks. When agents handle those tasks, they feel displaced. Other workers anchor their identity to outcomes and judgment. They welcome agents that free them for higher-leverage work. The second group multiplies. The first group resists.
- They iterate rather than perfect. Agent collaboration is conversational. You refine outputs through interaction, not through crafting a single perfect prompt. Workers comfortable with rapid iteration adapt quickly. Workers who need to get it right the first time struggle with the interaction pattern.
These habits can be developed. They are not fixed personality traits. But they require organizational context that supports them: leadership that models iterative AI use, permission to experiment, and reward systems that value judgment over task completion.
The cultural challenge is real. Most industrial organizations were built around task execution, not judgment application. The structures, metrics, and career paths all assume integrated labor where individual workers complete defined tasks. Multiplication requires different structures where workers orchestrate agent teams toward outcomes.
Starting the Multiplication
If you accept the multiplication premise, the implementation path changes at every step.
Audit for leverage, not reduction. Examine current work and ask: where would ten times more decision capacity create business value? The reliability program limited to critical assets. The optimization cycle constrained to quarterly reviews. The quality process that samples rather than investigates comprehensively. These constraints represent multiplication opportunities.
Deploy agents to expand scope, not reduce headcount. When agents take over monitoring or analysis tasks, redeploy the humans to resolution, calibration, and value capture work. Expect the total volume of work to increase as agents surface opportunities that were previously invisible.
Measure value creation, not cost avoidance. Track operational outcomes: maintenance decisions executed, optimization opportunities captured, quality improvements implemented, exceptions resolved. These metrics reveal multiplication. Headcount metrics obscure it.
Develop orchestration capabilities internally. Multiplication requires people skilled in exception handling, context translation, and agent calibration. You probably cannot hire these people. Develop them from your existing workforce. The experienced operators who understand your processes are your best candidates for orchestration roles.
Build the cultural foundation. Model iterative AI collaboration at the leadership level. Create permission for experimentation. Shift reward systems toward outcomes and judgment rather than task completion. The technical deployment will fail without the cultural readiness.
The Competitive Reality
The workforce constraint is not temporary. Retirements will continue. The talent pipeline will remain thin. Skilled workers will stay scarce.
Organizations that treat AI as task automation will achieve modest efficiency gains. They will handle the same workload with slightly fewer people and call it success.
Organizations that treat AI as multiplication will do something more valuable: they will codify decision logic into organizational IP.
When an experienced operator works with an agent team, their judgment gets captured. The reasoning behind exception handling, the pattern recognition that took decades to develop, the contextual knowledge that informs good decisions: all of this becomes embedded in agent configurations, decision traces, and operational memory. The knowledge stops living exclusively in human heads.
This creates two advantages that compound over time.
First, the codified decision logic scales on demand. An experienced operator can only be in one place. Their judgment, once captured in agent systems, can operate across every asset in the facility simultaneously. The multiplication effect applies to institutional knowledge, not just individual productivity.
Second, the knowledge stays when people leave. Every retiring expert who works with agent systems transfers part of their expertise into the organization's operational memory. The IP does not walk out the door.
But this only works if the knowledge is captured in a form the organization controls. I wrote previously about parametric agents versus coded agents. The distinction matters here. When operational knowledge gets embedded in code written by developers, it becomes fragile, hard to update, and difficult to govern. When it gets captured in parameters that domain experts can configure directly, it becomes organizational IP that compounds rather than depreciates.
The organizations that figure this out will capture their experts' decision intelligence before retirement, embed it in systems that scale, and multiply operational capacity while competitors struggle to backfill roles they cannot staff.
They will have the answer to "How do we do more with the people we have, because we cannot find the skilled workers we need?"
Pieter van Schalkwyk is the CEO of XMPro, specializing in industrial AI agent orchestration and governance. XMPro MAGS with APEX provides cognitive architecture and DecisionGraph capabilities for agent networks operating on existing industrial systems.
Our GitHub repo has more technical detail. You can also contact me or Gavin Green for more information.
Read more on MAGS at The Digital Engineer
