AI Agents and Non-Human Identities Creating Critical Security Gaps, Report
Keeper Security report finds non-human identities and AI agents creating critical security gaps in the enterprise
Summary
A Keeper Security report released at RSA Conference 2026 reveals that 46% of companies grant AI-powered tools access to sensitive data without proper governance, and 76% lack consistent privileged access policies for non-human identities (service accounts, API keys, automation). Only 28% of security teams have full visibility into these identities across cloud, office, and SaaS environments, yet 40% of surveyed experts acknowledged security incidents involving machine credentials in the past year.
Full text
New research from Keeper Security reveals that non-human identities and automated system-to-system interactions are becoming the top security risk for businesses in 2026.

By Deeba Ahmed, April 7, 2026

Businesses are rushing to adopt automation, but they are leaving a significant security gap in their infrastructure: new data suggests this technological race is moving much faster than the security needed to protect it.

On 7 April 2026, password security firm Keeper Security released a report at the RSA Conference in San Francisco showing that many companies are failing to manage non-human identities (NHIs). These are software-based assets, such as service accounts, API keys, and AI-powered tools, that enable system-to-system interactions without any human involvement.

The research, shared exclusively with Hackread.com, surveyed 109 cybersecurity experts and found a worrying trend: nearly half (46%) of companies now give AI-powered tools access to their most sensitive data and critical systems, yet 76% of those organisations do not have consistent rules to govern these identities under privileged access policies. In short, software is being granted excessive privileges without any real supervision.

A Blind Spot

One of the biggest problems identified by Keeper Security's researchers is a simple lack of visibility. Only 28% of the professionals surveyed said they can actually see every non-human identity across their cloud, office, and Software as a Service (SaaS) environments. Furthermore, 53% of respondents ranked this "lack of visibility into AI, automation, and machine access" as their top security risk. Without a clear view of these connections, security teams cannot enforce least-privilege access.
This is the basic security rule that a machine is given only the minimum level of access needed to do its job. Instead, many companies are managing these digital identities with a messy, fragmented mix of different tools and teams.

Security Breaches on the Rise

These gaps aren't just theoretical; they are already causing real-world damage. The report reveals that over 40% of the experts questioned admitted their company suffered a security incident involving machine credentials or NHIs in the past year, while another 32% were not sure whether they had been hit at all. That is a massive detection gap: according to the researchers, only 26% of companies use automated detection and response to monitor what these machines are doing, and most still rely on slow, manual processes.

Darren Guccione, CEO of Keeper Security, noted that this "shift introduces new complexity around identity" and requires a unified approach in which a single software platform combines password management and secrets control to keep data safe. The findings make clear that managing AI agents and other non-human identities should now be a top priority for preventing major data breaches.

Deeba Ahmed is a veteran cybersecurity reporter at Hackread.com with over a decade of experience covering cybercrime, vulnerabilities, and security events. Her expertise and in-depth analysis make her a key contributor to the platform's trusted coverage.
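For technically minded readers, the least-privilege principle discussed in this article can be sketched as a simple audit: compare each machine identity's granted permissions against the minimum its job requires and flag the excess. This is a minimal illustrative sketch; the account names and scopes below are invented for the example and are not from the Keeper Security report.

```python
# Minimal sketch of a least-privilege audit for non-human identities (NHIs).
# All identity names and scopes here are hypothetical examples.

# The minimum scopes each identity needs to do its job.
REQUIRED_SCOPES = {
    "ci-deploy-bot":   {"repo:read", "artifacts:write"},
    "report-exporter": {"db:read"},
}

# The scopes each identity has actually been granted.
GRANTED_SCOPES = {
    "ci-deploy-bot":   {"repo:read", "artifacts:write", "admin:org"},  # over-privileged
    "report-exporter": {"db:read"},                                    # compliant
}

def excess_privileges(identity: str) -> set[str]:
    """Return scopes granted beyond what the identity's job requires."""
    return GRANTED_SCOPES.get(identity, set()) - REQUIRED_SCOPES.get(identity, set())

if __name__ == "__main__":
    for name in GRANTED_SCOPES:
        extra = excess_privileges(name)
        if extra:
            print(f"{name}: over-privileged, consider removing {sorted(extra)}")
        else:
            print(f"{name}: least privilege satisfied")
```

In practice this kind of check would run against an inventory pulled from cloud IAM, CI/CD, and SaaS admin APIs, which is exactly the visibility the report says most organisations lack.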