Everyone’s worried about AGI, but the real threat’s already here — bots with keys to the kingdom. Until we secure them, creds remain the weak spot.

AGI hype vs. today’s real risk
Artificial general intelligence (AGI) dominates the headlines. It is often painted as an existential risk: a system that sets its own objectives, operates without oversight and produces outputs we cannot explain. As a security leader, I share those concerns, but they are concerns about a future we have not reached yet.
The reality is that today’s most urgent problem looks very different. We do not yet have the tools to govern AGI, but we do have ways to build guardrails around the systems already in use. One of the most effective guardrails is controlling what AI agents can interact with: which services they communicate with, what data they access and under what conditions. That control ultimately comes down to managing their credentials, the non-human identities that underpin machine-to-machine communication.
And that is where today’s risk lies. Non-human identities — API keys, authentication tokens, certificates and cryptographic keys — already outnumber human identities. In some large-scale environments, the ratio of machine to human identities is 40,000 to 1. As someone who has led cryptography and enterprise security teams, I see this imbalance as the real battleground right now.
Why non-human identities are the weakest link
The data backs this up. According to the 2025 Verizon Data Breach Investigations Report, credential abuse is the top initial access vector, involved in 22% of breaches; in North America, credentials factored into nearly a quarter of incidents. Attackers are not breaking in; they are logging in.
Identity has become a critical extension of the security perimeter, and non-human identity is the newest, least-defended dimension of that perimeter. Recent events underscore how dangerous this blind spot is. In one widely reported incident, Lenovo’s chatbot was compromised when researchers demonstrated that a single malicious prompt could steal session cookies and access customer support systems. The incident shows how quickly things can go wrong when new technology is rolled out without the same security rigor as other enterprise systems, and why the next major breach may come from weaknesses in AI and non-human identities.
The analogy I often use with security leaders is the “hotel key” problem. When you issue a physical key to a guest, just as you issue a credential to an application or service, you immediately lose visibility and control. You do not know if the key has been copied, where it is being used or by whom. If a thief — the attacker — presents the same key, they are indistinguishable from the legitimate guest or trusted system.
And when you finally discover the problem, remediation is painful: you need to change the locks on every door, just as you would have to rotate thousands of credentials after a breach. That is exactly what it looks like when machine credentials are compromised.
Speedy AI adoption can be risky
At the same time, organizations are under pressure to accelerate AI adoption. According to a recent report, the number of S&P 500 companies disclosing board-level AI oversight increased by more than 84% between 2023 and 2024. Boards are paying attention and pushing for faster deployment.
But speed often comes at the expense of security. I have seen organizations strip away long-established controls to feed AI models more data. Tools to manage non-human identities are still maturing, which means many enterprises are running blind. And in security, a blind spot is not just a vulnerability; it is an open invitation for attackers.
The risks compound when you consider the scale. According to the SandboxAQ AI Security Benchmark Report 2025, only 6% of organizations have reached an AI-native security posture, with protections integrated across both IT and AI systems. That means very few have effective controls in place for governing the credentials their AI agents rely on, creating a massive and growing attack surface without guardrails.
Part of the problem is that regulations and frameworks have not kept pace with advances in AI. There are still no widely accepted standards for managing AI agents or machine identities, and basic questions remain unresolved. If an AI agent causes harm, who is responsible: the agent, the developer or the person who gave the prompt?
Without governance and identity management working hand in hand, enterprises are essentially gambling. We have seen this before: the 2017 Equifax breach was tied to a missed patch, and the more recent Storm-0558 attack exploited a stolen key from a crash dump. The lesson is consistent: credentials are a weak link, and yet we continue to treat them as an afterthought.
What security leaders must do now
Get complete visibility
Build a real-time inventory of every key, certificate and secret. You cannot protect what you cannot see. Many organizations underestimate how many non-human identities they have and those hidden identities often become the attacker’s entry point.
Visibility should cover not just the assets themselves but also the connections between them — which applications rely on which keys and where those secrets are stored. Without this understanding, security teams remain reactive instead of proactive.
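As a concrete starting point, inventory-building can begin with automated scanning of configuration and code for secret-like material. The sketch below is a minimal illustration, not a production scanner: the two patterns and the sample config are assumptions for demonstration, and real secret-scanning tools use far larger rule sets and never rely on regexes alone.

```python
import re
from datetime import datetime, timezone

# Hypothetical patterns for illustration only; real scanners
# ship with hundreds of rules and entropy checks.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9]{20,}"),
}

def scan_text(source: str, text: str) -> list[dict]:
    """Return inventory records for every secret-like match in a text blob."""
    findings = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({
                "source": source,
                "kind": kind,
                # Record only a prefix: never log the full secret.
                "match": match.group(0)[:8] + "...",
                "seen_at": datetime.now(timezone.utc).isoformat(),
            })
    return findings

# Hypothetical config file contents for demonstration.
config = 'db_host=10.0.0.5\napi_key = "Zx9Qk3LmNp7Rt2Vw8Yb4"\n'
for finding in scan_text("app.conf", config):
    print(finding["source"], finding["kind"], finding["match"])
```

A real inventory would feed findings like these into a central registry, along with which applications consume each credential and where it is stored, so teams can answer "what breaks if this key is rotated?" before an incident forces the question.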
Automate the lifecycle
Manual credential management cannot keep up with workloads that live for only seconds. Provisioning and rotation must be automated to limit the window an attacker has to exploit a stolen credential. Short-lived credentials are effective only when they can be issued and replaced continuously. In practice, this requires integration with the same cloud and DevOps tools that development teams already use.
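The core mechanics of short-lived, automatically rotated credentials can be sketched in a few lines. This is an assumed in-process illustration, not a real secrets manager: in production this role belongs to a workload-identity service or vault, and the class and TTL below are invented for the example.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class ShortLivedCredential:
    token: str
    expires_at: float  # monotonic-clock deadline

class CredentialIssuer:
    """Hypothetical issuer: every issue() mints a fresh token and
    invalidates the previous one, so stolen tokens age out fast."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self.active: dict[str, ShortLivedCredential] = {}

    def issue(self, workload: str) -> ShortLivedCredential:
        cred = ShortLivedCredential(
            token=secrets.token_urlsafe(32),
            expires_at=time.monotonic() + self.ttl,
        )
        self.active[workload] = cred  # replaces (rotates) any prior token
        return cred

    def validate(self, workload: str, token: str) -> bool:
        cred = self.active.get(workload)
        return (cred is not None
                and secrets.compare_digest(cred.token, token)
                and time.monotonic() < cred.expires_at)

issuer = CredentialIssuer(ttl_seconds=60)
cred = issuer.issue("payments-agent")
print(issuer.validate("payments-agent", cred.token))
```

The design point is that rotation is a side effect of normal issuance rather than a separate manual chore: an attacker who steals a token races both the TTL and the next routine rotation.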
Protect the last mile
Isolate secrets from applications so they are never directly exposed. Even the best vault does not help if a secret can be pulled out of memory or logged in plain text once an application retrieves it. Last-mile protection shifts trust away from vulnerable endpoints and into hardened cryptographic services that can sign or verify without ever releasing the underlying key.
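The last-mile idea, expose operations but never the key, can be shown with a small sketch. This stands in for an HSM or cloud KMS, which is where the boundary would live in practice; the class name and HMAC choice are assumptions for illustration.

```python
import hashlib
import hmac
import os

class SigningService:
    """Hypothetical last-mile boundary: callers get sign/verify
    operations, never the key bytes. In production this role is
    played by an HSM or cloud KMS, not an in-process object."""

    def __init__(self):
        self.__key = os.urandom(32)  # held inside the service, never returned

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), signature)

svc = SigningService()
sig = svc.sign(b"agent-request:GET /orders")
print(svc.verify(b"agent-request:GET /orders", sig))
print(svc.verify(b"agent-request:DELETE /orders", sig))
```

Because the application only ever holds signatures, a compromised endpoint can misuse the service while it has access, but it cannot exfiltrate the key itself, which keeps remediation to revoking one service identity rather than rotating everything the key ever touched.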
In short: start with visibility, move to automation and finish with isolation.
Keep the real danger in sight
The battleground has shifted. It is no longer the network layer, and it is not yet AGI. It is the identity layer, especially the non-human identities that quietly outnumber us by tens of thousands to one.
Attackers have already adapted. They are not breaking down the walls; they are simply logging in with legitimate credentials. Until we catch up, we are gambling with our enterprises. AGI may dominate the conversation, but the immediate, clear and present danger is unsupervised non-human identities.
This article is published as part of the Foundry Expert Contributor Network.