SYSTEM: OPERATIONAL | OT/IT CONNECTORS: 150+ | AUTONOMOUS OPERATION: 15+ DAYS | GOVERNED AUTONOMY: ENFORCED | AUDIT TRAIL: IMMUTABLE | INDUSTRIES: MINING · OIL & GAS · ENERGY | DEPLOYMENT: 3-6 MONTHS VIA APEX | CONTROL LOOPS: 3,400+


Beyond OpenClaw: Why Industrial Agents Need Bounded Actuation


Pieter van Schalkwyk

CEO at XMPRO

This article originally appeared on the XMPro CEO's LinkedIn blog, The Digital Engineer

I've worked on autonomous operations for years, back when agents weren't sexy or interesting. People asked why I thought this would happen. Then OpenClaw appeared and moved from weekend project to 145K GitHub stars to OpenAI acquisition in weeks, not years. The shift from LLM-as-API to agent runtime was suddenly real.

Visionaries and early adopters love to experiment with emerging capabilities. OpenClaw proved they exist in large numbers. But taking agents from interesting to useful is the real challenge, and that's the problem we keep working to address.

We built MAGS to be the cognitive runtime. OpenClaw showed us what happens when you add actuation without guardrails.

MAGS: Building the Brain

We built MAGS as a cognitive agent runtime that functions first and foremost as the brain. We obsess over how to make it trustworthy for industrial applications. That means ORPA cognitive cycles where agents observe, reflect, plan, and act. It means separation of control, where thinking doesn't equal doing. It means bounded autonomy with explicit limits, and multi-agent consensus for critical decisions.
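In sketch form, one cognitive cycle might look like the following. This is a minimal, illustrative ORPA loop: the class, field names, and vibration threshold are hypothetical stand-ins, not XMPro's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class OrpaAgent:
    """Illustrative ORPA-style agent: thinking is separated from doing."""
    memory: list = field(default_factory=list)  # persistent observations

    def observe(self, telemetry: dict) -> dict:
        self.memory.append(telemetry)           # remember raw signals
        return telemetry

    def reflect(self, observation: dict) -> str:
        # Characterize the situation against a simple limit (stand-in value).
        return "anomaly" if observation["vibration_mm_s"] > 7.1 else "normal"

    def plan(self, assessment: str) -> dict:
        # Produce a recommendation; the agent decides, it does not actuate.
        if assessment == "anomaly":
            return {"action": "recommend_inspection", "requires_approval": True}
        return {"action": "none", "requires_approval": False}

    def act(self, plan: dict) -> str:
        # "Act" here means emitting the recommendation downstream,
        # never touching the physical system directly.
        return f"emitted:{plan['action']}"

agent = OrpaAgent()
obs = agent.observe({"vibration_mm_s": 9.4})
result = agent.act(agent.plan(agent.reflect(obs)))
```

Note that `act` only emits a recommendation: the separation of control described above is structural, not a matter of prompting.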

While others were building chatbots, we were building decision-making systems that could be trusted to run plants.

Industrial applications are safety-critical by design. Wrong decisions cause equipment damage, environmental releases, safety incidents. You can't iterate through failures in production. Trust must be structurally guaranteed, not probabilistically predicted. That's why MAGS implements deontic logic for obligations, prohibitions, and permissions. That's why we built separation of control where agents recommend but don't execute. That's why we require multi-agent consensus before critical decisions.
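A deontic check of the kind described above can be reduced to a few set operations: prohibitions veto outright, obligations must be discharged, and anything not explicitly permitted is denied. The action names here are hypothetical examples, not XMPro's rule set.

```python
# Hypothetical deontic constraint tables (illustrative names only).
PROHIBITED = {"open_relief_valve_remotely"}
OBLIGATED = {"log_decision"}          # must accompany every action set
PERMITTED = {"recommend_setpoint_change", "log_decision"}

def evaluate(actions: set) -> tuple:
    """Return (allowed, reason) for a proposed set of actions."""
    if actions & PROHIBITED:
        return False, "prohibited action requested"
    if not OBLIGATED <= actions:
        return False, "obligation not discharged"
    if not actions <= PERMITTED:
        return False, "action lacks explicit permission"
    return True, "allowed"
```

Because these tables live in the agent design rather than in a prompt, no amount of clever wording can move an action from one set to another at runtime.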

This architecture reflects what we've learned from years of working with mining operations, oil and gas facilities, utilities, and manufacturing plants. The domain teaches you quickly that "good enough most of the time" is not good enough at all.

A brain can reason and recommend. OpenClaw showed us what happens when you give agents arms and legs to execute.

The OpenClaw Pattern: Actuation Without Guardrails

OpenClaw has now carried actuation into the mainstream, giving cognitive agents arms and legs to execute. This is both valuable and risky.

The UX breakthrough is real. Users want agents that do things, not agents that endlessly discuss things. OpenClaw demonstrated execution-first design with persistent memory across sessions, real integration with filesystem and terminal, and an agent runtime pattern with pluggable models. The market validated this immediately. The speed from concept to acquisition proved the demand.

But MoltBook proved what happens without validation layers: 1.5M API keys exposed through prompt injection, a 66.8% attack success rate in security research testing, and nothing standing between decision and execution. "Vibe coding" works for experiments but fails catastrophically in production.

Unbounded actuation in personal computing is risky. You might lose some data or expose some credentials. Unbounded actuation in industrial operations is catastrophic. You might rupture pipelines, damage million-dollar equipment, cause environmental releases, or create safety incidents.

Safety-critical systems cannot tolerate "try it and see what happens." Production requires governance that cannot be overridden by clever prompts or user requests.

We've introduced bounded autonomy for decision-making. Now we need bounded actuation for execution.

Bounded Actuation: Extending the Principle

Just as we introduced the concept of bounded autonomy, I think the principle needs to be extended to bounded actuation.

If we constrain what cognitive agents can decide, we must also constrain what execution agents can do. The principle extends naturally from cognitive to physical layers. Bounded autonomy limits what agents can decide. Bounded actuation limits what agents can execute. Both are required for industrial safety.

Bounded actuation means explicit capabilities instead of emergent behaviors. It means multi-layer validation instead of single-point trust. It means separation between the agent that recommends and the agent that executes. It means audit trails for every action and rollback capability when things go wrong.

The architecture requires five validation layers between cognitive agent and physical system:

  • First layer: Agent constraints. Deontic logic in MAGS defines what each agent is obligated to do, prohibited from doing, and permitted to do. These constraints are built into the agent design, not enforced through prompts. This means an agent can't be "convinced" through clever prompting to exceed its authority.
  • Second layer: MAGS consensus. Multi-agent agreement before recommendations reach the actuation layer. No single agent can push through a decision that other agents consider risky or incorrect. This catches errors that individual agents might miss and prevents single points of failure.
  • Third layer: Actuation agent capabilities. The execution agent has explicit, enumerated capabilities. It can't discover new functions or interpret ambiguous instructions. If a capability isn't explicitly defined, the agent cannot perform it. This prevents the "emergent behavior" problem that makes unbounded agents unpredictable.
  • Fourth layer: DataStream validation. Business rules and structural validation applied before any action reaches physical systems. This is where domain knowledge and operational constraints are enforced. An action might be technically possible but operationally inappropriate—this layer catches that distinction.
  • Fifth layer: System safety interlocks. Hardware-level protections that exist independent of software. These cannot be overridden by any agent, cognitive or actuation. This is the ultimate safety boundary, operating regardless of what the software layers do.

Five layers of defense in depth. Each layer validates the layer above it. Mistakes get caught before they reach physical systems. The cost is measured in microseconds of latency. The value is measured in incidents prevented.
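The layered flow above can be sketched as a short pipeline in which a proposed action must pass every layer, in order, before it may reach a physical system. The layer checks, field names, and limits below are illustrative stand-ins, not XMPro's actual rules.

```python
# Hypothetical defense-in-depth pipeline (all checks are stand-ins).
def agent_constraints(a):      return a["type"] == "setpoint_change"   # deontic bounds
def mags_consensus(a):         return a["approvals"] >= 2              # multi-agent agreement
def actuation_capabilities(a): return a["capability"] in {"adjust_flow"}
def datastream_validation(a):  return 0 <= a["target"] <= 100          # business rule
def safety_interlock(a):       return not a.get("interlock_tripped", False)

LAYERS = [agent_constraints, mags_consensus, actuation_capabilities,
          datastream_validation, safety_interlock]

def validate(action: dict):
    """Return (passed, index of first failing layer, or -1 if all pass)."""
    for i, layer in enumerate(LAYERS):
        if not layer(action):
            return False, i     # caught before reaching the plant
    return True, -1
```

Returning the index of the first failing layer is what makes the audit trail useful: every rejection records exactly which boundary stopped the action.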

This is the domain where we can learn from industrial automation and control systems.

Learning from Industrial Automation

We have done this effectively for many years. This is the domain that we know well.

Industrial automation separated HMI from PLC from actuator for good reason. Each layer validates the layer above it. Safety interlocks cannot be overridden, even by operators with good intentions. Audit trails capture every decision for regulatory compliance and incident investigation. The patterns work. They've proven reliable over decades in hazardous environments.

The mapping to agentic AI is direct, and it reveals something important about what makes industrial AI fundamentally different from enterprise AI:

  • Cognitive agents (MAGS) = HMI/SCADA layer. They analyze data, identify patterns, suggest actions. But they don't execute directly. This separation means you can deploy increasingly sophisticated AI for decision-making without increasing execution risk.
  • Bounded actuation agents = PLC logic. They validate that proposed actions are safe, within defined parameters, and compliant with business rules. This layer is where AI capabilities meet operational constraints—where the system says "I understand what you want, but here's what's actually safe to do."
  • DataStreams = Safety interlocks. They enforce business rules and operational constraints that cannot be bypassed. This is the governance layer that prevents "governance drift"—where good intentions at deployment time erode under operational pressure.
  • Physical systems = Actuators. Motors, valves, equipment. These remain unchanged, protected by multiple layers of validation above them.

What this mapping enables is progressive AI deployment without progressive risk accumulation. You can make the cognitive layer smarter, more adaptive, more capable—and the validation layers below it continue enforcing the same safety boundaries. The risk profile doesn't grow with the capability profile.

Industrial automation learned this lesson through decades of operation in hazardous environments. We don't need to invent new safety paradigms. We need to apply existing ones to agentic AI.

This leads to a vision for the future of industrial control systems.

The IndustrialClaw Vision: Agentic Characteristics at the PLC Level

I can see a future, not too far off, where we have IndustrialClaw instead of OpenClaw, replacing traditional PLCs. Not in their entirety, since we will still build the guardrails in. But we will see many more characteristics of the agentic approach at the PLC level, supported by software-defined automation.

This represents a strategic inflection point. For the first time, we have three things converging simultaneously: cognitive AI that can reason about operations, execution patterns that users actually want, and industrial automation architectures mature enough to provide safety guarantees. Previous generations had one or two of these pieces, but not all three.

What this means in practice:

  • Adaptive logic that learns from operational patterns. PLCs that get smarter over time, not just faster. They identify inefficiencies, suggest optimizations, and implement approved changes—all within bounds that cannot be overridden.
  • Natural language interfaces for configuration and programming. Maintenance technicians describe problems in plain language. The system translates this into diagnostic routines, validates them against safety constraints, and executes only what passes all validation layers.
  • Real-time optimization based on changing conditions. The system doesn't just execute predetermined logic. It adapts to variations in feedstock quality, equipment performance, ambient conditions—optimizing within the safety envelope defined by bounded actuation.
  • Learning from anomalies and near-misses. Every deviation from normal operation becomes data for improving future decisions. But the learning happens in the cognitive layer, not the execution layer. The guardrails don't learn to be more permissive.

What it doesn't mean: removing PLCs entirely (proven hardware stays), eliminating safety interlocks (non-negotiable), or "AI does whatever it wants" (bounded by design). This is augmentation of proven systems, not replacement of them.

The timeline is not science fiction. Not decades away. We're experimenting with this in our labs now.

Think about what becomes possible. A maintenance technician describes a problem in natural language. The IndustrialClaw system translates this into diagnostic routines, identifies likely causes, suggests corrective actions. The bounded actuation layer validates these actions against safety constraints, operational limits, and business rules. The system executes only what passes all validation layers. Every action is logged with full provenance.

An operations engineer notices an efficiency opportunity. Describes the optimization objective in plain language. The system generates candidate control strategies, simulates them against the digital twin, evaluates them through multi-agent consensus, and implements the safest, most effective approach. Within the bounds of what the engineer has authorized. With rollback if results don't match predictions.

This is adaptive, learning, natural-language-configurable industrial control. With safety interlocks that cannot be bypassed. With audit trails that satisfy regulators. With governance frameworks that earn trust through demonstrated reliability.
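The simulate-first, rollback-ready flow the engineer scenario describes can be sketched in a few lines. The scoring function below is a crude stand-in for a digital-twin run, and all names are hypothetical.

```python
# Hypothetical simulate-then-commit flow with a retained rollback target.
def simulate(strategy: dict) -> float:
    # Stand-in for a digital-twin evaluation: net predicted benefit.
    return strategy["gain"] - strategy["risk"]

def deploy(current: dict, candidates: list) -> dict:
    """Pick the best candidate; keep the baseline as the rollback target."""
    best = max(candidates, key=simulate)
    if simulate(best) <= simulate(current):
        return current                  # no real improvement: keep the baseline
    best["rollback_to"] = current       # rollback kept in case results diverge
    return best
```

The key design choice mirrors the text: nothing replaces the running strategy unless it beats it in simulation, and the previous strategy is never discarded.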

Getting from experiment to production requires crossing the chasm.

Crossing the Chasm: Governance That Cannot Be Overridden

Moving this through the chasm from experiments to real production will require a governance framework that cannot be overridden.

The chasm between experimentation and production is wide. On the experiment side: OpenClaw, early adopters, personal risk tolerance, rapid iteration, "move fast and break things." On the production side: industrial operations, safety-critical environments, regulatory compliance, demonstrated reliability, "move deliberately and earn trust."

The gap is filled by governance frameworks, formal verification where possible, defense in depth everywhere else, complete audit trails, and progressive capability expansion. That is what we are experimenting with in our labs at the moment:

  • Phase 1: Read-only observation. The bounded actuation agent watches operations, builds understanding of normal patterns, identifies what's within scope of its capabilities. No actions. Just observation and learning. Establish the baseline before attempting any changes.
  • Phase 2: Notification-only actuation. The agent can send alerts and recommendations. It can notify operators of conditions that might require action. It validates communication paths and confirms its understanding matches operational reality. Still no direct execution. This phase tests whether the agent's recommendations align with what experienced operators would do.
  • Phase 3: Write operations on non-production systems. The agent can make changes to test environments. Every action goes through the five-layer validation architecture. We stress-test the validation layers, try to find edge cases, run red team attacks. Learn where the boundaries need adjustment before risking production impact.
  • Phase 4: Production pilot with restricted scope. One specific function, tightly bounded, carefully monitored. Full audit trails. Rollback capability ready. Human oversight at every step. Success is measured not by speed but by demonstrated safety. This phase proves the system can be trusted with real operations before expanding its authority.
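The four phases above amount to a capability gate that only widens after safety is demonstrated. A minimal sketch, with illustrative phase tables and capability names:

```python
# Hypothetical phase-gated capability expansion (Phases 1-4 from the text).
PHASES = {
    1: {"observe"},
    2: {"observe", "notify"},
    3: {"observe", "notify", "write_test"},
    4: {"observe", "notify", "write_test", "write_production_pilot"},
}

class GatedAgent:
    def __init__(self):
        self.phase = 1  # every agent starts read-only

    def allowed(self, capability: str) -> bool:
        return capability in PHASES[self.phase]

    def advance(self, safety_validated: bool) -> int:
        # Authority only expands after the current phase is proven safe.
        if safety_validated and self.phase < 4:
            self.phase += 1
        return self.phase
```

Note that `advance` is the only path to new capabilities, and it requires an explicit safety validation flag: there is no shortcut from observation to production writes.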

Then we expand gradually. Each phase validates safety before expanding capabilities. Governance first, then additional capabilities. Trust is earned through demonstrated reliability.

Not "move fast and break things." Move deliberately and build trust. Each phase proves the system can be trusted before we grant it additional authority. This is how you cross the chasm from interesting experiments to production systems that run critical operations.

What We're Building Now

We're at the intersection of three major trends that were developing independently and are now converging on bounded actuation.

  • Cognitive agent runtimes (MAGS, OpenAI Agents) provide multi-agent coordination, persistent memory, and the reasoning capability for complex operational decisions. We've built this. It's in production with industrial customers today.
  • Execution-first UX (the OpenClaw pattern) proved the market exists. Users want agents that act, not agents that just advise. The shift in expectations from chat to execution is real. OpenAI's acquisition validated the demand.
  • Industrial automation evolution through software-defined automation, digital twins, and edge intelligence proved we can deploy intelligent systems safely in hazardous environments. These patterns have decades of operational history.

The convergence of these three trends creates the opportunity for trustworthy autonomous operations at scale. But getting there requires more than combining existing pieces. It requires the bounded actuation layer that doesn't exist yet in any production system we've seen.

Our current lab work focuses on building that missing layer:

  • Bounded actuation agent framework with explicit capability enumeration. No emergent behaviors. No function discovery. If it's not explicitly defined, the agent cannot perform it.
  • Multi-layer validation architecture that maps directly to industrial automation principles. Five layers of defense in depth, each validating the layer above it.
  • Secrets management that prevents MoltBook-style breaches. No credentials stored in prompts. No direct database access. Validation layers between every decision and execution.
  • Progressive capability expansion following the Phase 1-4 pattern. Read-only observation, then notifications, then test system writes, then restricted production pilot. Safety validation at each gate before expanding authority.
  • Red team testing to find vulnerabilities before production deployment. Adversarial prompting, edge case generation, chaos engineering. Learn where the boundaries fail under stress.
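Capability enumeration of the kind listed above reduces to a strict registry: anything not registered simply cannot run, and there is no dynamic function discovery to fall back on. The registry class and capability names here are hypothetical.

```python
# Hypothetical bounded-actuation registry: unenumerated capabilities
# cannot be executed, full stop.
class CapabilityRegistry:
    def __init__(self):
        self._caps = {}

    def register(self, name: str, fn):
        self._caps[name] = fn           # explicit enumeration at design time

    def execute(self, name: str, **kwargs):
        if name not in self._caps:
            # No interpretation, no discovery: undefined means denied.
            raise PermissionError(f"capability not enumerated: {name}")
        return self._caps[name](**kwargs)

registry = CapabilityRegistry()
registry.register("read_sensor", lambda tag: {"tag": tag, "value": 42.0})
```

Raising on unknown names, rather than attempting a best-effort interpretation, is what removes the "emergent behavior" failure mode described earlier.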

The questions you need to answer for your organization are the same ones we're working through: Can you trust it in production? Can you explain it to regulators? Can you prove it won't make catastrophic decisions? The answers aren't found in buying smarter AI. They're found in building the right architecture.

What are you working on?


Pieter van Schalkwyk is the CEO of XMPro, specializing in industrial AI agent orchestration and governance. XMPro MAGS with APEX provides cognitive architecture and DecisionGraph capabilities for agent networks operating on existing industrial systems.

Our GitHub Repo has more technical detail. You can also contact me or Gavin Green for more information.

Read more on MAGS at The Digital Engineer