The rapid integration of artificial intelligence (AI) across enterprise environments is forcing organizations worldwide into a critical, and often perilous, trade-off between the imperative for speed and the foundational principles of identity security. This trend, in which identity controls are becoming the first casualty of the race to deploy AI, is starkly highlighted in a new report from Delinea, a provider of identity security solutions for both human and AI agent identities. The "2026 Identity Security Report," a comprehensive analysis of the evolving cybersecurity landscape, reveals that a staggering 90% of organizations are actively pressuring their security teams to loosen identity controls specifically for AI initiatives. This prioritization of rapid AI adoption, driven by the promise of immediate productivity gains, is exposing enterprises to unprecedented vulnerability and handing malicious actors an exponentially larger attack surface.
The Delinea Report: A Clarion Call on AI Security Gaps
The findings from Delinea’s extensive survey, which gathered insights from over 2,000 IT decision-makers already utilizing or piloting AI technologies, paint a sobering picture. Organizations are fast-tracking their AI deployments despite critical, acknowledged gaps in their ability to discover, monitor, and control the privileges associated with AI identities. This strategic oversight, or perhaps calculated risk, is creating a fertile ground for future security incidents. Art Gilliland, CEO of Delinea, articulated the gravity of the situation, stating, "The pressure to move fast on AI is real, but identity governance has not kept pace, which exposes enterprises to significant risk." His comments underscore a growing chasm between technological ambition and security preparedness.

A particularly concerning revelation from the report is that 90% of respondents identified at least one identity visibility gap within their infrastructure. The most pronounced of these gaps was consistently tied to machine and non-human identities (NHIs), a category that prominently includes the accounts and processes utilized by AI agents. As AI systems become more autonomous and pervasive, operating across myriad applications and data repositories, the sheer volume and dynamic nature of these NHIs introduce complexities that traditional identity management frameworks are ill-equipped to handle. Gilliland further emphasized this point, noting, "As AI agents multiply across enterprise environments, these identities often have the least oversight." He advocated for a proactive stance, asserting that "The organizations that will succeed in the AI era will be the ones that enforce real-time, contextual access across every human, machine, and agentic AI identity." The report unequivocally concludes that traditional identity protections have not evolved at the necessary pace to counter the unique challenges posed by AI, and that relaxing these controls offers bad actors a significantly expanded attack surface. It projects that AI will continue to disrupt established security models as companies inadvertently allow their security posture to become lax in the face of burgeoning identities and access points.
The Impetus for Speed: Why Enterprises Are Rushing AI Adoption
The widespread rush to integrate AI is not arbitrary; it is a strategic imperative driven by a confluence of business and market forces. In an increasingly competitive global landscape, enterprises view AI as a transformative technology capable of delivering substantial advantages.
- Productivity and Efficiency Gains: AI promises to automate mundane tasks, optimize complex processes, and enhance decision-making through advanced analytics. From predictive maintenance in manufacturing to personalized customer service via chatbots, AI offers tangible improvements in operational efficiency, leading to significant cost reductions and accelerated output.
- Innovation and Competitive Edge: Companies are leveraging AI to develop new products and services and to uncover novel insights from vast datasets. The fear of missing out (FOMO) in the innovation race is a powerful motivator, pushing organizations to adopt AI rapidly, sometimes at the expense of thorough security vetting.
- Data-Driven Decision Making: AI’s ability to process and analyze massive amounts of data far surpasses human capabilities. This enables businesses to derive deeper insights, predict market trends, and make more informed strategic decisions, directly impacting profitability and market share.
- Customer Experience Enhancement: AI-powered tools, such as recommendation engines, virtual assistants, and sentiment analysis platforms, are revolutionizing customer interactions, leading to improved satisfaction and loyalty.
- Investor and Market Pressure: Boards of directors and investors often demand clear strategies for AI adoption, viewing it as a key indicator of a company’s future viability and growth potential. This external pressure can trickle down to IT and security teams, creating an environment where speed trumps caution.
This relentless drive for AI integration, while understandable from a business perspective, has inadvertently created a security vacuum, particularly around the management of the identities that AI systems assume and interact with.

The Expanding Attack Surface: A Critical Vulnerability Landscape
The Delinea report vividly illustrates how the loosening of identity controls for AI is directly contributing to an exponentially larger attack surface. This phenomenon can be understood through several key vectors:
- Proliferation of Machine and Non-Human Identities (NHIs): Unlike human users, who typically have one or a few accounts, AI agents, bots, microservices, and other automated processes can generate hundreds or even thousands of identities. Each of these NHIs, whether it’s an API key for a generative AI model, a service account for a machine learning pipeline, or a cryptographic certificate for an IoT device connected to an AI system, represents a distinct identity that requires robust management. These identities often operate without direct human oversight, making their compromise harder to detect.
- Lack of Visibility and Discovery: As highlighted by Delinea, a significant challenge is simply knowing what AI agents exist within an enterprise environment, what permissions they have, and what data they can access. Without comprehensive discovery and inventory, organizations cannot effectively monitor or secure these identities, leaving blind spots that attackers can exploit.
- Inadequate Privilege Management: Many organizations grant AI agents broad, overly permissive access to facilitate rapid deployment and functionality. The principle of least privilege, a cornerstone of cybersecurity, is frequently neglected for NHIs. An AI agent granted excessive privileges can, if compromised, become a powerful conduit for data exfiltration, system manipulation, or lateral movement within a network.
- Dynamic and Ephemeral Nature of AI Workloads: Modern AI applications often leverage containerization, serverless functions, and dynamic cloud environments, leading to identities that are created, used, and destroyed rapidly. Managing access for such ephemeral identities requires advanced, automated solutions that many traditional identity and access management (IAM) systems lack.
- Supply Chain Risks from AI Models: AI models are frequently built using third-party components, open-source libraries, and pre-trained models. Each integration point introduces potential vulnerabilities if the identities and access controls within the AI supply chain are not rigorously vetted. A compromised third-party model could inject malicious code or data, leveraging its identity to bypass internal controls.
- Sophistication of AI-Powered Attacks: Ironically, the very technology being deployed is also being weaponized by adversaries. AI can be used to generate highly convincing phishing attacks, automate reconnaissance, and identify vulnerabilities more efficiently. If an AI agent’s identity is compromised, it can be repurposed by attackers to launch sophisticated, targeted attacks that mimic legitimate internal activity, making detection exceedingly difficult.
These factors combine to create a landscape where the perimeter of an organization’s digital defenses is not just expanding, but also becoming more porous and complex, fundamentally challenging existing security models.
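The least-privilege and ephemeral-credential problems described above can be made concrete with a small sketch. The code below models a hypothetical in-memory credential broker that issues narrowly scoped, short-lived tokens to a non-human identity; the agent name, scope strings, and TTL are illustrative assumptions, not drawn from any specific product. A real deployment would back this with a secrets vault and cryptographically signed tokens.

```python
import time
import secrets

class CredentialBroker:
    """Hypothetical broker illustrating least-privilege, expiring access
    for a non-human identity (NHI) such as an AI agent."""

    def __init__(self):
        self._grants = {}

    def issue(self, agent_id, scopes, ttl_seconds=300):
        """Issue a narrowly scoped token that expires after ttl_seconds."""
        token = secrets.token_hex(16)
        self._grants[token] = {
            "agent": agent_id,
            "scopes": frozenset(scopes),
            "expires": time.time() + ttl_seconds,
        }
        return token

    def authorize(self, token, scope):
        """Allow access only if the token exists, has not expired,
        and explicitly covers the requested scope."""
        grant = self._grants.get(token)
        if grant is None or time.time() >= grant["expires"]:
            return False
        return scope in grant["scopes"]

broker = CredentialBroker()
tok = broker.issue("summarizer-agent", ["read:tickets"], ttl_seconds=60)
print(broker.authorize(tok, "read:tickets"))   # in-scope request allowed
print(broker.authorize(tok, "write:payroll"))  # out-of-scope request denied
```

Because every grant carries an expiry and an explicit scope set, a stolen token is useful only briefly and only for the narrow task it was minted for, which is the property broad, static service-account credentials lack.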
Supporting Data and Industry Context: The Broader Cyber Threat Landscape

The concerns raised by Delinea are not isolated; they resonate with broader trends observed across the cybersecurity industry. According to IBM’s 2023 Cost of a Data Breach Report, the global average cost of a data breach reached $4.45 million, a 15% increase over three years. Identity-related incidents, such as compromised credentials and phishing, consistently rank among the most common and costly initial attack vectors. The same report found that organizations making extensive use of security AI and automation contained breaches faster and at significantly lower cost, underscoring AI's potential to strengthen defenses even as AI systems themselves become a source of risk.
Gartner predicts that by 2026, 80% of organizations will have adopted some form of generative AI, demonstrating the pervasive nature of this technology. However, a significant portion of these deployments are occurring without adequate security frameworks. The Verizon Data Breach Investigations Report (DBIR) consistently points to stolen credentials and phishing as leading causes of breaches, underscoring the critical importance of robust identity management. As the number of non-human identities is projected to outnumber human identities by a significant margin in the coming years, the sheer scale of the identity challenge becomes apparent. Analyst firms estimate that organizations currently struggle to track even a fraction of their machine identities, leaving vast swathes of their digital infrastructure unsecured.
Furthermore, the regulatory environment is beginning to catch up to the rapid pace of AI adoption. Frameworks like the NIST AI Risk Management Framework (AI RMF) provide guidance on managing risks associated with AI, including security. The European Union’s AI Act, while primarily focused on ethical AI and safety, also touches upon security requirements for high-risk AI systems. Future regulations are likely to impose stricter requirements on the secure development and deployment of AI, particularly concerning identity and access management, making the current lax approach a potential compliance nightmare.
The Nuances of Identity Security in the AI Era: A Path Forward

Addressing the "speed vs. security" dilemma requires a nuanced and comprehensive approach that acknowledges the unique characteristics of AI identities.
- Embracing Zero Trust for NHIs: The principle of "never trust, always verify" is paramount for AI agents. Every machine identity, regardless of its location or perceived trustworthiness, must be authenticated and authorized continuously. This means moving away from static, broad permissions to dynamic, context-aware access policies that adapt in real-time based on the AI agent’s activity, role, and current security posture.
- Advanced Discovery and Inventory: Organizations must invest in tools and processes capable of continuously discovering, cataloging, and monitoring all AI agents and their associated identities across on-premises, cloud, and hybrid environments. This includes understanding what AI models are running, where they are running, what data they are accessing, and what privileges they possess.
- Privileged Access Management (PAM) for Machines: Extending PAM principles to machine identities is crucial. This involves managing secrets (API keys, tokens, certificates), enforcing just-in-time access, rotating credentials automatically, and session recording for NHIs. Dedicated machine identity management solutions are becoming indispensable.
- Identity Governance and Administration (IGA) for AI Workflows: AI governance must integrate with IGA frameworks to ensure that the lifecycle of AI identities—from provisioning to de-provisioning—is managed effectively, with appropriate approvals, audits, and compliance checks.
- Behavioral Analytics and Anomaly Detection: Leveraging AI itself to secure AI is a promising avenue. AI-powered security tools can analyze the behavior of human and machine identities, establish baselines, and detect anomalous activities indicative of a compromise, enabling proactive threat detection and response.
- Security by Design in AI Development: Integrating security considerations, including identity management, into the AI development lifecycle (MLSecOps) from the outset is far more effective and less costly than retrofitting security measures later. This includes secure coding practices for AI models, robust authentication for model APIs, and secure deployment pipelines.
- Integrated Identity Platforms: The fragmentation of identity management solutions is a major hurdle. Organizations need integrated platforms that can provide a unified view and control plane for all identities—human, machine, and AI agent—across their entire digital estate.
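The behavioral analytics step above can be sketched in a few lines. This is a minimal baseline-and-deviation check, assuming hypothetical per-agent hourly API-call counts; production tools model far richer features, but the core idea of flagging activity that departs sharply from an identity's own history is the same.

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it lies more than `threshold` standard
    deviations above the mean of the identity's historical activity."""
    if len(history) < 2:
        return False  # too little history to form a baseline
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return (current - mean) / stdev > threshold

# Illustrative hourly API-call counts for one service account:
baseline = [102, 98, 110, 95, 105, 101, 99, 104]
print(is_anomalous(baseline, 103))   # within normal variation
print(is_anomalous(baseline, 900))   # sudden spike is flagged
```

Even this crude z-score test captures the report's point: a compromised AI agent rarely behaves like its own baseline, so continuous per-identity monitoring surfaces abuse that static permission checks miss.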
Broader Implications and the Future Outlook
The implications of neglecting identity security in the pursuit of AI speed extend far beyond individual organizational breaches.
- Economic Disruption: Large-scale AI-related breaches could lead to significant economic disruption, impacting critical infrastructure, financial markets, and supply chains. The reputational damage and financial penalties from regulatory bodies could cripple businesses.
- National Security Concerns: Nation-state actors and sophisticated criminal organizations are actively targeting AI systems. Compromised AI agents in government, defense, or critical infrastructure could have catastrophic consequences, ranging from espionage to direct operational disruption.
- Erosion of Trust: A series of high-profile AI security incidents could erode public trust in AI technology itself, hindering its broader adoption and stifling innovation.
- Talent Gap Intensification: The cybersecurity talent gap is already severe. The specialized skills required to secure complex AI systems and manage diverse machine identities will further strain existing resources, necessitating significant investment in training and education.
Delinea’s "2026 Identity Security Report" serves as an urgent wake-up call. While the necessity of AI adoption is undeniable, the study unequivocally indicates that identity security cannot be an afterthought or a sacrificed component in the race to deploy. The short-term gains in speed achieved by loosening identity controls will inevitably be dwarfed by the long-term costs of breaches, reputational damage, and regulatory non-compliance. Organizations that proactively address the unique challenges of AI identity management, embracing a security-first mindset and investing in integrated, adaptive identity solutions, will be the ones best positioned to harness the full potential of AI securely and sustainably. The future of AI success hinges not just on innovation, but critically, on impenetrable identity security.
