Rick Grinnell
Contributor

Can cybersecurity pros prevent impending AI attacks?

Opinion
Oct 14, 2025 | 5 mins
Cloud Security | Cyberattacks | Network Security

Security teams are racing to combat AI-driven attacks with more sophisticated tools and enhanced control over their own AI.

Credit: 13_Phunkod

In the conversations I’ve been having with CISOs over the past few months, there has been a notable shift. Where once we discussed traditional threat vectors and compliance frameworks, the focus has now moved to a more complex challenge: defending against AI-powered attacks while integrating AI tools into their own security operations.

The numbers tell the story. The recent Thales Data Threat Report found that 73% of companies are investing over $1 million annually in AI-specific security tools, yet 70% cite the frenetic pace of AI development as their leading security concern. This tension is forcing CIOs and security leaders to rethink their entire approach to enterprise defense.

Traditional attacks meet AI amplification

Recent breaches explain why security leaders are increasingly concerned. Consider the LexisNexis Risk Solutions incident from December 2024, in which a compromised third-party platform allowed hackers to access the data of more than 364,000 individuals. Or McLaren Health Care’s second major ransomware attack in a year, affecting 743,000 individuals. These aren’t just data points; they represent a troubling acceleration in both scale and sophistication.

Key recent incidents:

  • LexisNexis Risk Solutions (Dec 2024): 364,000+ records compromised via third-party platform breach
  • McLaren Health Care (July-Aug 2024): Second major attack in 12 months, 743,000 affected
  • Aflac network breach (June 2025): Sophisticated social engineering with no ransomware, but data exfiltration as part of a ‘cybercrime campaign’ against the insurance industry
  • UNFI cyberattack (June 2025): Operational disruption at this food distributor supplying major grocery chains
  • Salesforce data breach (August 2025): Widespread theft of customer CRM data via compromised third-party Drift authentication tokens (via Threat Intel)

AI capabilities are amplifying traditional attack vectors and making vulnerabilities easier to exploit. The security executives I speak with are seeing attackers use AI in particular to personalize and socially engineer phishing campaigns at unprecedented scale and speed.

The AI-DR revolution: New tools for new threats

Organizations are adopting AI-DR (AI detection and response) solutions as traditional security tools prove inadequate against AI-powered attacks. A Gartner report projects 70% of AI applications will use multi-agent systems — what some are calling “guardian agents” — within 2-3 years.

CIOs are telling me they’re allocating 15-20% of their security budgets specifically for AI threat protection. This isn’t speculative spending; it’s driven by real, immediate concerns about AI-powered attacks that existing security infrastructure simply can’t detect or prevent effectively.

The agentic AI challenge: When defense systems make their own decisions

The most intriguing — and concerning — development is the emergence of agentic AI systems within enterprise security operations. These systems are beginning to make critical security decisions autonomously, which creates both tremendous opportunity and significant risk.

This is a point I brought up in “AI agents were everywhere at RSAC. What’s next?” — organizations need to capitalize on the benefits of automated security incident detection and resolution while addressing fundamental concerns, such as securing the agents themselves, establishing proper identity frameworks and maintaining organizational control.

The CISOs I speak with regularly are grappling with questions like:

  • How do we ensure our AI security agents aren’t compromised?
  • What happens when our defensive AI conflicts with legitimate business operations?
  • How can we maintain human oversight without compromising the speed advantages of automated responses?

Practical steps for security leaders

Based on discussions with security executives and observations from our portfolio companies, here are the immediate priorities:

  • Implement AI-DR capabilities now. Don’t wait for perfect solutions. Early AI detection and response tools are already proving effective against AI-powered attacks. The technology will improve, but basic protection is available today.
  • Establish AI agent governance. Create clear policies for how AI systems can act autonomously within your security operations. This includes kill switches, escalation protocols and regular audits of AI decision-making.
  • Zero trust for AI systems. Apply zero-trust principles not just to users and devices, but to AI agents themselves. Every AI system should be continuously verified and granted limited, specific permissions (see the sketch after this list).
  • Vendor risk assessment 2.0. Traditional vendor assessments don’t account for AI-powered attacks. Update your evaluation criteria to include how vendors protect against and detect AI-generated threats.
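To make the governance and zero-trust ideas above concrete, here is a minimal, hypothetical sketch in Python. The class, action names and thresholds are illustrative assumptions, not a reference to any particular AI-DR product. The point is simply that every autonomous action an agent takes can be checked against an explicit, narrowly scoped allow-list, with escalation to a human above a severity threshold, a human-controlled kill switch and an audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical illustration of scoped permissions and a kill switch for an
# autonomous security agent. Names and structure are assumptions for this
# sketch, not any vendor's actual API.

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_actions: set[str]            # explicit allow-list (zero trust: deny by default)
    max_severity: int = 3                # actions above this severity require human review
    kill_switch_engaged: bool = False    # human-controlled override
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, action: str, severity: int) -> bool:
        """Return True only if the action is explicitly permitted right now."""
        if self.kill_switch_engaged:
            self._log(f"DENIED (kill switch): {action}")
            return False
        if action not in self.allowed_actions:
            self._log(f"DENIED (not in allow-list): {action}")
            return False
        if severity > self.max_severity:
            self._log(f"ESCALATED to human review: {action} (severity {severity})")
            return False
        self._log(f"ALLOWED: {action}")
        return True

    def _log(self, message: str) -> None:
        self.audit_log.append(f"{datetime.utcnow().isoformat()} {self.agent_id} {message}")


# Usage: a containment agent may quarantine endpoints on its own, but
# anything outside its allow-list or severity ceiling goes to a human.
policy = AgentPolicy(
    agent_id="containment-agent-01",
    allowed_actions={"quarantine_endpoint", "revoke_session"},
)
policy.authorize("quarantine_endpoint", severity=2)   # allowed
policy.authorize("isolate_subnet", severity=4)        # denied: not in allow-list
policy.kill_switch_engaged = True                      # operator halts autonomous action
policy.authorize("revoke_session", severity=1)         # denied: kill switch
for entry in policy.audit_log:
    print(entry)
```

However an organization implements it, the same pattern applies: deny by default, keep permissions narrow and specific, preserve a human override, and log every decision so AI behavior can be audited.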

Looking ahead: The next 18 months

The enterprise reality is apparent: AI-powered cybersecurity is no longer a future concern — it’s a present-day challenge that requires an immediate operational response. Organizations that move quickly to implement AI-DR capabilities and establish proper governance around agentic AI systems will have a significant defensive advantage.

While the cybersecurity landscape is evolving faster than ever, so are the tools and strategies to defend against emerging threats. For CIOs and security leaders, the key is balancing innovation with prudent risk management — embracing AI’s defensive capabilities while staying ahead of its potential for offense.

Success in this environment requires not only new technology but also new operational frameworks that can keep pace with AI-driven threats while maintaining the control and oversight that enterprise operations demand.

This article is published as part of the Foundry Expert Contributor Network.

Rick Grinnell

Rick Grinnell is Founder and Managing Partner of Glasswing Ventures, focusing on investments in AI-enabled security and enterprise infrastructure. As an experienced venture capitalist and operator, Rick has invested in some of the most dynamic companies in security, storage, analytics and SaaS applications during his 24-year tenure. Rick also has deep operating experience, having held senior marketing and engineering roles at Adero (acquired by Inktomi), ClearOne Communications (acquired by Gentner Communications, later renamed ClearOne), and PictureTel (acquired by Polycom). Rick is a member of the Educational Council at the Massachusetts Institute of Technology (MIT), a Venture Capital Advisor at the Rock Center for Entrepreneurship at Harvard Business School (HBS), a Board Member of the MIT Sandbox, an MIT iHQ Mentor and a frequent judge at MassChallenge. His contributions to the broader community include being on the Board of the Advanced Cyber Security Center, New England’s public/private security collaboration, the Board of Advisors at the Museum of Science in Boston, and a former member of the Board of Directors at Big Brothers Big Sisters of Eastern Massachusetts, where he is still active.
