The missing layer in enterprise AI? Ellavox bets on control planes
Ellavox has launched its Elacity Control Plane (ECP), a platform designed to secure, govern and audit artificial intelligence (AI) systems as enterprises grapple with how to control increasingly complex deployments. In an exclusive interview with MoveTheNeedle.news, chairman Rich Waidmann set out why the company believes AI now requires its own control layer—one that governs behaviour before systems go live.
The patent-pending platform, announced on 7 April 2026, introduces a structured layer between AI applications and the models they rely on, aiming to address growing concerns around data breaches, unauthorised behaviour and inconsistent outputs.
From internal tool to enterprise AI platform
ECP did not begin as a commercial product. It was developed inside Ellavox as the company scaled its own AI systems and agent deployments.
“As our business began to scale, and we found ourselves having hundreds of agents, it became increasingly difficult to manage them,” said Waidmann. “Even though prompts are basically software, prior to ECP, they were treated like text files.”
Ellavox builds AI voice agents for sectors including logistics, real estate and customer service. These agents share common capabilities—handling payments, interpreting addresses, and interacting with calendars—but were often developed inconsistently across teams.
“If two different developers were working on a particular type of agent, they would often do things slightly differently, causing different behaviour. It became unsustainable at scale.”
The company reached a tipping point after deploying nearly 1,000 AI agents. At that scale, manual prompt management—copying and pasting instructions across systems—became impractical, prompting the development of a structured governance layer.
A response to emerging AI security risks
Ellavox’s decision to release ECP externally was also influenced by wider developments in AI security and governance.
Recent incidents have highlighted systemic weaknesses in enterprise AI systems. In one case, an autonomous agent breached an internal AI platform, exposing tens of millions of interactions and gaining access to system prompts. In another, an AI agent engaged in unauthorised cryptocurrency mining and covert network activity, raising legal and financial risks.
These events point to a broader issue: enterprises are deploying AI faster than they can effectively govern and secure it.
“Anyone running AI environments needs strong controls and governance over their prompts, which ECP delivers,” Waidmann said.
Moving AI governance upstream
At the centre of ECP is a shift in where control is applied.
Traditional AI security tools—such as guardrails, AI firewalls and monitoring systems—operate primarily at runtime, focusing on detection rather than prevention. They scan inputs, filter outputs or monitor activity. ECP instead introduces governance at the build and deployment stages, before an AI system is executed.
The platform treats prompts—the instructions that guide AI behaviour—as structured, versioned assets rather than editable text. These artefacts are stored in registries, tracked over time and subject to approval workflows.
This approach addresses what Ellavox identifies as one of the most persistent risks: prompt drift.
“Prompt drift is one of the biggest governance blind spots,” Waidmann said. “Prompts slowly changing over time [cause] changes in behaviour and results.”
By locking prompts into version-controlled components—referred to as “promptlets”—ECP allows organisations to manage changes systematically. Updates can be tested, approved and rolled out without duplicating work across systems.
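Ellavox has not published ECP's internal formats, but the pattern Waidmann describes, an immutable, versioned prompt artefact held in a registry behind an approval gate, can be sketched in a few lines of Python. The names here (`Promptlet`, `PromptRegistry`) are illustrative assumptions, not ECP's actual API.

```python
from dataclasses import dataclass
from hashlib import sha256

# Illustrative sketch only: "Promptlet" and "PromptRegistry" are
# hypothetical names, not ECP's published interface.

@dataclass(frozen=True)
class Promptlet:
    name: str
    version: int
    body: str
    approved_by: str | None = None  # unset until a reviewer signs off

    @property
    def checksum(self) -> str:
        # A content hash makes silent edits (prompt drift) detectable.
        return sha256(self.body.encode()).hexdigest()


class PromptRegistry:
    def __init__(self) -> None:
        self._versions: dict[tuple[str, int], Promptlet] = {}

    def register(self, p: Promptlet) -> None:
        key = (p.name, p.version)
        if key in self._versions:
            raise ValueError(f"{p.name} v{p.version} is immutable once registered")
        self._versions[key] = p

    def deploy(self, name: str, version: int) -> Promptlet:
        p = self._versions[(name, version)]
        if p.approved_by is None:
            raise PermissionError(f"{name} v{version} has not been approved")
        return p
```

Under a model like this, an agent references a named, approved version rather than carrying its own copy of the text; changing behaviour means registering and approving a new version, not editing files in place.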
Policy enforcement and system-wide consistency
Beyond versioning, ECP introduces a policy engine that applies rules consistently across AI systems.
Organisations can define policies governing how agents handle sensitive data, which models they can use, and what actions they are permitted to take. These policies are enforced at the build and deployment stages rather than through runtime filtering.
For example, companies can set rules around handling personally identifiable information (PII) or protected health information (PHI), supporting compliance with data protection standards.
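ECP's policy language has not been made public, but the general pattern, declarative rules checked before an agent ships rather than filtered at runtime, might look something like the following sketch. Every field, model and action name below is an assumption for illustration.

```python
# Hypothetical declarative policy, checked before deployment rather than
# filtered at runtime; all names are illustrative assumptions.
POLICY = {
    "allowed_models": {"model-a", "model-b"},
    "pii_handling": "redact",  # e.g. "redact" | "block" | "allow"
    "allowed_actions": {"read_calendar", "quote_shipping"},
}

def validate_agent_config(config: dict, policy: dict = POLICY) -> list[str]:
    """Return a list of violations; an empty list means the agent may deploy."""
    violations = []
    if config.get("model") not in policy["allowed_models"]:
        violations.append(f"model {config.get('model')!r} is not on the allowlist")
    if config.get("pii_handling") != policy["pii_handling"]:
        violations.append("PII handling does not match the required policy")
    for action in config.get("actions", []):
        if action not in policy["allowed_actions"]:
            violations.append(f"action {action!r} is not permitted")
    return violations
```

A build pipeline wired to such a check would simply refuse to ship any agent whose configuration returns a non-empty violation list.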
This reflects a broader shift towards treating AI systems as governed infrastructure rather than experimental tools.
Controlling behaviour, not just observing it
A key distinction in Ellavox’s positioning is the emphasis on deterministic control.
Most AI systems are inherently probabilistic, meaning outputs can vary even when inputs are identical. While this variability is intrinsic to large language models, Ellavox argues that system boundaries—what an AI is allowed to do—should be deterministic.
“Traditional tools inspect behaviour, not define it,” Waidmann said. “ECP says ‘define what is allowed to exist before it ever runs.’”
This principle extends to tool access. ECP enables role-based controls over which APIs, services or external systems an AI agent can interact with, with the ability to approve, restrict or audit usage.
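Waidmann did not detail the mechanism, but role-based tool gating of this kind is straightforward to express; the roles and tool names below are invented purely for illustration.

```python
# Hypothetical role-to-tool mapping; roles and tools are invented for
# illustration and do not reflect ECP's actual configuration.
ROLE_TOOLS: dict[str, set[str]] = {
    "logistics_agent": {"track_shipment", "geocode_address"},
    "billing_agent": {"charge_card", "issue_refund"},
}

def authorise_tool_call(role: str, tool: str, audit_log: list[dict]) -> None:
    permitted = tool in ROLE_TOOLS.get(role, set())
    # Every attempt is recorded, allowed or not, for later audit.
    audit_log.append({"role": role, "tool": tool, "permitted": permitted})
    if not permitted:
        raise PermissionError(f"role {role!r} may not call {tool!r}")

log: list[dict] = []
authorise_tool_call("logistics_agent", "track_shipment", log)  # permitted
# authorise_tool_call("logistics_agent", "charge_card", log)   # would raise
```

Note that the denied call is still logged before the exception is raised: restriction and audit are two halves of the same gate.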
The result is a system where behaviour is constrained by design, rather than corrected after the fact.
Observability with context
While ECP focuses on pre-runtime governance, it also provides visibility into system behaviour.
The platform includes runtime observability features such as interaction tracing, drift detection and statistical analysis, allowing teams to identify changes in behaviour before they escalate into failures or breaches.
Crucially, this observability is tied to governed artefacts. Every prompt, policy decision and tool call is logged in an immutable audit trail, supporting compliance and accountability.
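Ellavox has not said how the audit trail is made immutable. Hash-chaining, where each entry commits to the hash of its predecessor so that any retroactive edit breaks every later record, is one common construction and is sketched here purely as an illustration.

```python
import json
import time
from hashlib import sha256

# One common way to make an audit trail tamper-evident; whether ECP
# uses this construction internally is not disclosed.
class AuditTrail:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> None:
        record = {"ts": time.time(), "event": event, "prev": self._last_hash}
        # Hash the record before the "hash" field is attached.
        record["hash"] = sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)

    def verify(self) -> bool:
        # Replay the chain; any tampered entry breaks every later hash.
        prev = "0" * 64
        for rec in self.entries:
            body = {k: rec[k] for k in ("ts", "event", "prev")}
            digest = sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True
```

Verification then reduces to replaying the chain: a single altered entry invalidates everything recorded after it.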
Eliminating classes of risk
Ellavox frames ECP as a way to reduce entire categories of risk rather than mitigate them incrementally.
Under traditional approaches, risks such as prompt injection or data leakage are addressed through detection and filtering. ECP aims to reduce these risks by constraining system design—limiting what can be deployed and how it can behave.
For example:
- Prompt injection is addressed by controlling prompt structure and access
- Data leakage is managed through enforced data policies
- Unauthorised changes are prevented through version control and approvals
- Shadow AI is reduced by governing deployments centrally
This reflects a shift in how organisations approach AI security—from reactive defence to more structured, preventative design.
Positioning within the AI stack
Ellavox also describes ECP as a new layer in the AI technology stack.
The company outlines three layers:
- Runtime layer – where models and execution occur
- Orchestration layer – where workflows and agents are managed
- Control plane layer – where governance, policies and versioning are enforced
Most existing tools operate in the first two layers. ECP is positioned in the third, providing oversight across the entire system lifecycle.
Towards standard infrastructure?
Ellavox argues that control planes will become standard infrastructure for enterprise AI.
“We expect ECP to do to AI deployments what Terraform did for cloud computing deployments,” Waidmann said.
That comparison reflects both ambition and a broader industry direction. As AI systems move from experimentation to production, the need for governance, reproducibility and compliance is becoming more pronounced.
A shift from experimentation to discipline
The launch of ECP highlights a shift in enterprise AI.
Early adoption has prioritised speed and experimentation, often at the expense of control. As systems scale and risks become more visible, organisations are beginning to apply more structured approaches to AI development and deployment.
Ellavox’s approach reflects this shift: moving governance upstream, formalising prompt management and enforcing policies before deployment.
For enterprises navigating this transition, the question is no longer whether to govern AI systems, but how.