Let’s be real for a second—Agentic AI isn’t just a buzzword floating around research labs anymore. It’s actively making its way into our daily operational workflows. We are officially moving past the “what if” phase and dealing directly with what autonomous systems are already doing inside live enterprise environments.
But while the excitement is off the charts, there’s a catch.
To understand how organizations are navigating this massive transition, we recently surveyed senior IT and security leaders from across Cisco’s customer base [1]. The results confirmed that the momentum is very real: a massive 85% of these organizations are already experimenting with, piloting, or deploying agentic AI.
Yet, when you look at who has actually crossed the finish line, the numbers drop off a cliff. Only a tiny 5% of respondents report having AI agents in broad production.
Why such a wide gap between piloting and production? It comes down to trust. Business leaders are thrilled about the productivity gains agentic AI promises; projects that were previously shelved due to resource constraints are suddenly back on the table. But as organizations move from experimentation to execution, maturity is becoming the ultimate differentiator. The ones that successfully scale AI won't be those that move the fastest; they'll be those that establish solid guardrails early and enforce them consistently. Analysts widely agree that the lack of security controls for AI agents is fast becoming the most significant, and fastest-growing, security blind spot.
Security: The Ultimate Frenemy of AI Adoption
So, what exactly is causing this 85%-to-5% chasm? Security is playing a fascinating dual role in the agentic era. It's not an afterthought; it is simultaneously the biggest roadblock and a top strategic priority.
Our research shows that nearly 60% of security leaders view security concerns as the primary barrier to broader agentic AI adoption. At the exact same time, 29% rank securing agentic AI among their top three priorities for the coming year.
When we asked leaders what specifically keeps them up at night regarding AI agents, the answers were incredibly consistent:
- Agent access control
- Data exfiltration
- Agent autonomy and behavior
These aren’t just edge cases; they are structural risks. When you grant an AI system autonomy without clearly defined constraints, things can go sideways quickly. You aren’t just worried about what data the agent can access; you are worried about what actions the agent might take.
The Reality Check: Who is Actually Adopting Agentic AI?
Now that we know what's holding people back, let's look at where successful adoption is actually happening. According to our research, North America is currently leading the charge, with 61% of respondents in the region piloting or running agentic AI in production, followed by APJC (53%) and EMEA (48%).
When we break it down by industry, Financial Services, Technology, Manufacturing, and Healthcare are adopting it the fastest. These are highly regulated, complex industries that have a lot to gain from operational efficiency, but they also have the most to lose if things go wrong. This perfectly sets the stage for why that 5% production number is so low.
The Internal vs. External Divide: Why Customer-facing Agents are Stuck in Pilot
If we zoom in on that 5% of organizations that have actually pushed agentic AI into broad production, a fascinating pattern emerges. These successful deployments are almost entirely internal-facing. We’re talking about IT operations, Security Operations (SecOps), internal financial analysis, and R&D.
So, what about customer support?
In a lot of early industry chatter, customer support was touted as the ultimate use case for AI agents. But our survey data tells a slightly different story. While businesses were quick to adopt chatbots for customer-facing interactions, the leap from a chatbot to a fully autonomous, customer-facing AI agent has given them pause. As our survey suggests, those customer-facing projects currently remain in the pilot phase. Very few have made it to full production.
Why the hesitation? Because the stakes are simply too high. Leaders are deeply worried about an agent’s non-deterministic behavior. When an AI agent is public-facing, it is exposed to the wild. Much like a public-facing web service, an external AI agent can be attacked, exploited, or “poisoned” by malicious inputs. Until organizations can guarantee that their customer-facing agents won’t be tricked into doing something harmful or off-brand, those external projects will remain in the sandbox.
Evolving Zero Trust: From “Who You Are” to “What You Do”
Let's break down those top concerns using some of the insights from our advisory board. When we talk about access control and data exfiltration, identity management is the foundation.
But here is where things get interesting. Because agents act autonomously and non-deterministically, traditional IAM isn't enough anymore. Zero Trust access principles must evolve. We need to mature beyond a human-centric, identity-based access model toward an action-based access model.
It’s a two-way street. Think of it like this:
- You have to protect the agent from the world. You need to defend against supply chain risks, prevent the agent from being tricked, and ensure it doesn’t act beyond its intended scope. In fact, our survey explicitly highlighted these exact fears: respondents cited “agents acting beyond intended scope,” “agents being tricked,” and “agent supply chain risk” as major secondary barriers to adoption.
- You have to protect the world from the agent. You need strict security controls to consistently enforce access boundaries, ensuring your sensitive data, your customers, and your critical resources are safe from an agent that goes rogue or makes a costly business-process error.
You need these autonomous systems to work for you, not against you. And that requires security infrastructure that is embedded into identity, access, and behavioral controls from day one.
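To make the shift from "who you are" to "what you do" concrete, here is a minimal sketch of an action-based policy check. All names (the agent ID, resources, and actions) are hypothetical illustrations, not part of any Cisco product: the point is that permissions are granted per action on a resource, with deny-by-default, rather than as blanket access tied to the agent's identity alone.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentPolicy:
    """Action-based policy: an agent's identity alone grants nothing."""
    agent_id: str
    # Map of resource -> set of explicitly permitted actions.
    allowed_actions: dict = field(default_factory=dict)

    def authorize(self, resource: str, action: str) -> bool:
        # Deny by default; permit only explicitly granted (resource, action) pairs.
        return action in self.allowed_actions.get(resource, set())

# Hypothetical IT-ops agent: it may read the ticket queue and restart one
# service, and nothing more.
policy = AgentPolicy(
    agent_id="itops-agent-01",
    allowed_actions={
        "ticket-queue": {"read"},
        "service/web-frontend": {"restart"},
    },
)

print(policy.authorize("ticket-queue", "read"))        # in scope
print(policy.authorize("ticket-queue", "delete"))      # action outside scope
print(policy.authorize("db/customers", "read"))        # resource outside scope
```

The key design choice is that the check takes the requested action and target resource as first-class inputs, so an agent that is tricked into attempting something out of scope is stopped at the authorization layer, not merely at login.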
The “Who Owns This?” Problem
As agentic AI expands into operational systems, who is actually responsible for securing it? Right now, the answer is: everyone and no one.
Our research shows that decision-making and ownership are highly fragmented across the enterprise:
- 29% say the CISO owns it.
- 27% say the CIO or IT organization owns it.
- 24% point to a central AI committee.
- 11% admit there is no clear ownership at all.
This spread makes sense for the experimentation phase, but it’s a nightmare for production. When autonomy touches identity systems, operational infrastructure, and security workflows all at once, fragmented ownership dilutes accountability. The security team might manage access controls, IT oversees the infrastructure, and the AI team governs the models. Without unified oversight, your policies and your actual enforcement are going to be completely out of sync.
As agentic AI scales, clear accountability isn’t just a nice-to-have; it’s a structural requirement.
Closing the Control Gap Without Killing Innovation
Agentic AI is an operational reality. It is actively being woven into workflows that dictate IT uptime, security incident response, and internal financial health. The momentum is undeniable, but the control is still wildly uneven.
Closing this “control gap” doesn’t mean you have to hit the brakes on innovation. It just means you need to structure it.
Securing agentic AI means clearly defining non-human identities, enforcing least-privilege access, constraining behavior within approved boundaries, and continuously monitoring activity to contain the blast radius. When these elements work together, autonomy becomes manageable. When they operate in silos, your risk compounds.
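A minimal sketch of how those elements reinforce each other: route every action an agent attempts through a single chokepoint that enforces the approved boundary and records the outcome, so out-of-scope attempts are blocked and every decision leaves an audit trail. The agent ID, resources, and actions below are hypothetical examples, not a real API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

# Approved boundary for a hypothetical sales agent: (resource, action) pairs.
APPROVED = {("crm", "read"), ("crm", "update_note")}

def guarded_execute(agent_id: str, resource: str, action: str, execute):
    """Enforce the boundary, log the decision, then (and only then) act."""
    if (resource, action) not in APPROVED:
        log.warning("BLOCKED %s: %s on %s", agent_id, action, resource)
        raise PermissionError(f"{action} on {resource} is outside approved boundaries")
    log.info("ALLOWED %s: %s on %s", agent_id, action, resource)
    return execute()

# In-scope call runs; out-of-scope call is blocked and logged.
result = guarded_execute("sales-agent", "crm", "read", lambda: "42 open deals")
try:
    guarded_execute("sales-agent", "crm", "delete_account", lambda: None)
except PermissionError:
    pass  # blast radius contained: the destructive action never executed
```

Because enforcement and monitoring live in the same chokepoint, the policy you wrote and the behavior you observe cannot silently drift apart, which is exactly the gap fragmented ownership tends to create.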
The organizations that win in the agentic era won’t be the ones that rush to deploy the fastest. They will be the ones that embed dynamic guardrails early, ensuring their autonomous systems operate strictly within defined, observable, and safe limits.
Ready to safely scale your AI agents from pilot to production?
Don’t let the Agent Trust Gap slow down your innovation. Security shouldn’t be a barrier—it should be your foundation for autonomy. Discover how Cisco can help you build dynamic guardrails and enforce action-based Zero Trust for your autonomous systems.
Explore our Agentic AI Security Solution to learn more, or view the full survey results infographic.
We’d love to hear what you think! Ask a question and stay connected with Cisco Security on social media.
