AI adoption is forcing enterprises into a critical trade-off between speed and identity security, and identity controls are emerging as the primary casualty, according to a recent report from Delinea, a provider of identity security solutions for both human and AI agent identities. The 2026 Identity Security Report, which surveyed over 2,000 IT decision-makers actively using or piloting AI, paints a stark picture: 90% of organizations are pressuring their security teams to relax identity controls for AI initiatives. This prioritization of rapid deployment over robust security is driven by leadership's push to accelerate AI adoption for productivity gains and competitive advantage, and it is inadvertently exposing organizations to unprecedented levels of cyber risk.
The Unfolding Crisis: Speed Over Security
The core of the problem lies in this aggressive fast-tracking of AI initiatives without commensurate advancements in security infrastructure. Enterprises are forging ahead with AI deployments despite significant, acknowledged gaps in their ability to discover, monitor, and control the privileges associated with AI identities. This creates a fertile ground for security vulnerabilities, turning the promise of AI into a potential Achilles’ heel for many organizations. As Art Gilliland, CEO of Delinea, succinctly put it, "The pressure to move fast on AI is real, but identity governance has not kept pace, which exposes enterprises to significant risk." His statement underscores a growing concern within the cybersecurity community: the rapid technological leap of AI is outpacing the defensive capabilities designed for a different era of computing.
The report’s findings are particularly alarming given the current cybersecurity landscape, which already grapples with an ever-expanding attack surface from traditional human and machine identities. The introduction of AI agents, often with broad permissions and complex interactions, further exacerbates this challenge. The survey revealed that 90% of respondents identified at least one identity visibility gap within their organizations. Critically, the largest and most concerning of these gaps was directly tied to machine and non-human identities (NHIs), a category that prominently includes accounts utilized by AI agents.

The Rise of Non-Human Identities: A New Frontier for Risk
The proliferation of non-human identities, encompassing everything from IoT devices and robotic process automation (RPA) bots to sophisticated AI algorithms and large language models, represents a paradigm shift in identity management. Unlike human users, NHIs often operate autonomously, at machine speed, and can access vast repositories of data and systems without direct human supervision for extended periods. This makes their identities particularly attractive targets for malicious actors.
"As AI agents multiply across enterprise environments, these identities often have the least oversight," Gilliland noted, highlighting a critical vulnerability. The inherent complexity and sheer volume of these non-human identities make traditional, human-centric identity and access management (IAM) frameworks inadequate. Managing a few hundred human employees is vastly different from overseeing thousands, or even millions, of interconnected AI agents and machine identities, each with unique access requirements and potential vulnerabilities. Without proper oversight, these identities can become backdoor entry points for attackers, enabling lateral movement within networks, data exfiltration, or even the manipulation of critical business processes. The report advocates that "the organizations that will succeed in the AI era will be the ones that enforce real-time, contextual access across every human, machine, and agentic AI identity," emphasizing the urgent need for a holistic and adaptive approach to identity security.
Historical Context: From Human-Centric to Hybrid Identity Management

The evolution of identity and access management has historically centered on human users. Early IAM systems focused on managing employee accounts, passwords, and permissions within defined network perimeters. With the advent of cloud computing, mobile devices, and the Internet of Things (IoT), the concept of "identity" began to broaden, encompassing external users, partners, and an increasing number of connected devices. Machine identities, in the form of service accounts, APIs, and digital certificates, have steadily grown in prominence, introducing new layers of complexity.
However, the current wave of AI adoption is not merely an incremental increase in machine identities; it represents a fundamental shift. AI agents are not static machines; they are dynamic, often learning, and capable of initiating actions and making decisions. This demands a level of identity governance that goes beyond simple authentication and authorization. It requires continuous monitoring of behavior, dynamic adjustment of privileges based on context (e.g., time of day, location, data sensitivity, task being performed), and the ability to detect and respond to anomalous activity in real time. The 2026 Identity Security Report serves as a warning that, if current trends persist, this challenge will soon reach a critical inflection point.
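To make the idea of context-dependent privileges concrete, here is a minimal illustrative sketch in Python. It is not drawn from the report or any specific product; the agent names, sensitivity labels, and the business-hours rule are all hypothetical assumptions chosen for demonstration. A real policy engine would evaluate far richer context from a central policy store.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    """A hypothetical access request made by an AI agent."""
    agent_id: str
    resource_sensitivity: str  # "public", "internal", or "restricted"
    task: str
    request_time: time

# Illustrative baseline grants per agent. A production system would
# load these from a governed policy store, not a hard-coded dict.
BASELINE_GRANTS = {
    "report-summarizer": {"public", "internal"},
}

def decide(request: AccessRequest) -> bool:
    """Grant access only if the agent's baseline covers the data's
    sensitivity AND the contextual condition (here, business hours
    for anything above public data) is satisfied."""
    allowed = BASELINE_GRANTS.get(request.agent_id, set())
    if request.resource_sensitivity not in allowed:
        return False
    in_business_hours = time(8, 0) <= request.request_time <= time(18, 0)
    if request.resource_sensitivity != "public" and not in_business_hours:
        return False
    return True
```

The key design point is that the decision is re-evaluated on every request against current context, rather than granted once and trusted indefinitely.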
The Broader Implications: An Exponentially Larger Attack Surface
The cumulative effect of loosened identity controls, widespread visibility gaps, and the proliferation of unsupervised AI agents is an exponentially larger and more complex attack surface. Cybercriminals are constantly seeking the path of least resistance, and the current rush to deploy AI without adequate security provides them with numerous new avenues for exploitation.

Consider the potential scenarios:
- Data Breaches: An AI agent with overly broad access permissions, if compromised, could be used to exfiltrate sensitive customer data, intellectual property, or financial records with unprecedented speed and volume.
- System Manipulation: Malicious actors could gain control of AI agents to disrupt critical infrastructure, manipulate financial markets, or spread disinformation.
- Supply Chain Attacks: If an AI agent belonging to a third-party vendor is compromised due to lax identity controls, it could serve as a conduit for attacks on the primary organization’s network, creating a cascading effect across an interconnected ecosystem.
- Insider Threats (Accidental or Malicious): An employee might inadvertently grant an AI agent excessive permissions, or a disgruntled insider could intentionally exploit such vulnerabilities.
The report concludes that traditional identity protections have simply not kept pace with the rapid advancement and adoption of AI. This disconnect creates a dangerous environment in which companies, by letting their security controls grow lax, create more unmanaged identities and access points ripe for exploitation.
Expert Perspectives and Industry Responses
The concerns raised by Delinea resonate deeply within the broader cybersecurity community. Leading analysts and security professionals have long advocated for robust identity governance as the cornerstone of any effective security strategy. The principle of "Zero Trust," which dictates that no user or device, whether inside or outside the network, should be implicitly trusted, gains even greater urgency in an AI-driven environment. Every AI agent, every machine identity, must be continuously authenticated, authorized, and validated.

Industry bodies like the National Institute of Standards and Technology (NIST) and the Cloud Security Alliance (CSA) are actively developing frameworks and guidelines for AI security, acknowledging the unique challenges posed by these emerging technologies. While specific "official responses" to the Delinea report from governmental bodies are not explicitly stated, the general trend indicates a growing regulatory interest in AI governance, data privacy, and accountability. Future regulations are likely to mandate stronger security controls around AI systems, potentially penalizing organizations that fail to uphold adequate safeguards. This means that companies cutting corners on security now may face significant compliance hurdles and financial penalties in the near future.
Beyond regulatory concerns, the market itself is beginning to demand more secure AI solutions. Enterprises are increasingly looking for AI platforms and services that come with built-in security features, robust identity management capabilities, and transparent auditing mechanisms. This growing demand will likely drive innovation in the cybersecurity sector, pushing vendors to develop more sophisticated tools specifically designed to manage and secure AI identities.
Navigating the AI Security Conundrum: A Path Forward
While the Delinea report highlights a critical problem, it also implicitly points towards necessary solutions. Organizations cannot afford to halt AI adoption, but they must fundamentally rethink their security posture. As Delinea emphasizes, "identity security must evolve alongside AI adoption." This evolution requires a multi-faceted approach:

- Rethinking Identity Security Frameworks: Move beyond human-centric models to embrace a universal identity framework that can manage, govern, and secure all identity types – human, machine, and AI agent – under a unified policy engine.
- Implementing AI-Specific Identity Governance: This includes:
  - Comprehensive Discovery and Inventory: Organizations must have a clear understanding of every AI agent operating within their environment, what it does, and what data it accesses.
  - Real-time Monitoring and Behavioral Analytics: Continuously monitor the behavior of AI agents to detect anomalies that might indicate compromise or misuse. Leverage AI for security to analyze the vast streams of data generated by other AI systems.
  - Contextual Access Policies: Implement dynamic access controls that grant AI agents only the minimum necessary privileges, adjusted in real-time based on the context of their operations.
  - Privileged Access Management (PAM) for AI: Treat AI agents with privileged access with the same, or even greater, scrutiny as human administrators. Securely manage their credentials and enforce strict controls over their elevated permissions.
- Integrating Security into the AI Development Lifecycle (DevSecOps): Security cannot be an afterthought. It must be embedded from the design phase through development, deployment, and ongoing operation of AI systems. This means involving security teams early and often in AI projects.
- Upskilling Security Teams: The cybersecurity workforce needs to develop specialized knowledge and skills related to AI architectures, machine learning models, and the unique security challenges they present. Continuous education and training are paramount.
- Leveraging Automation for Security: The scale and complexity of AI identities necessitate automated security solutions. AI-driven security tools can help manage, monitor, and respond to threats at a speed and scale that human teams alone cannot match.
- Adopting a Zero Trust Philosophy: Assume no identity, human or non-human, is inherently trustworthy. Verify every access request, enforce least privilege, and continuously monitor for suspicious activity.
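The behavioral-analytics recommendation above can be sketched in a few lines of Python. This is a deliberately toy example, not a technique from the report: it tracks how many actions each agent performs per interval and flags intervals that deviate sharply from that agent's own history. The class name, window size, and z-score threshold are all illustrative assumptions.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class AgentBehaviorMonitor:
    """Toy per-agent behavioral baseline for non-human identities.
    Flags an interval as anomalous when its action count sits more
    than `threshold` standard deviations above the agent's history."""

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.window = window
        self.threshold = threshold
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, agent_id: str, actions_this_interval: int) -> bool:
        """Record one interval's action count; return True if anomalous."""
        hist = self.history[agent_id]
        anomalous = False
        if len(hist) >= 5:  # require a minimal baseline before judging
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and (actions_this_interval - mu) / sigma > self.threshold:
                anomalous = True
        hist.append(actions_this_interval)
        return anomalous
```

In production this simple z-score check would be replaced by richer models, but the principle is the same: each non-human identity is judged against its own established behavior, not a one-size-fits-all rule.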
Conclusion: The Imperative for Integrated Security in the AI Era
The Delinea 2026 Identity Security Report serves as a critical wake-up call, underscoring the precarious balance organizations are attempting to strike between innovation and security. The allure of AI’s transformative potential is undeniable, but sacrificing foundational security principles, particularly identity controls, for the sake of speed is a Faustian bargain with potentially catastrophic consequences.
As AI continues to embed itself deeper into enterprise operations, breaking traditional security models, the imperative to adapt and evolve security strategies becomes unavoidable. Sustainable and responsible AI adoption hinges on a proactive, integrated security strategy that places identity protection for all entities – human, machine, and AI agent – at its core. The future success of organizations in the AI era will depend not only on their ability to harness AI’s power but, more fundamentally, on their commitment to securing it, ensuring that innovation does not come at the cost of vulnerability. The choice is clear: build security in, or face the inevitable fallout.