Enterprise security teams are losing ground to AI-enabled attacks — not because defenses are weak, but because the threat model has shifted. As AI agents move into production, attackers are exploiting runtime weaknesses where breakout times are measured in seconds, patch windows in hours, and traditional security has little visibility or control.
CrowdStrike's 2025 Global Threat Report documents breakout times as fast as 51 seconds. Attackers are moving from initial access to lateral movement before most security teams get their first alert. The same report found 79% of detections were malware-free, with adversaries using hands-on keyboard techniques that bypass traditional endpoint defenses entirely.
CISOs’ latest challenge is not getting reverse-engineered in 72 hours
Mike Riemer, field CISO at Ivanti, has watched AI collapse the window between patch release and weaponization.
"Threat actors are reverse engineering patches within 72 hours," Riemer told VentureBeat. "If a customer doesn't patch within 72 hours of release, they're open to exploit. The speed has been enhanced greatly by AI."
Most enterprises take weeks or months to manually patch, with firefighting and other urgent priorities often taking precedence.
Why traditional security is failing at runtime
An SQL injection typically has a recognizable signature. Security teams are improving their tradecraft, and many are blocking them with near-zero false positives. But "ignore previous instructions" carries payload potential equivalent to a buffer overflow while sharing nothing with known malware. The attack is semantic, not syntactic. Prompt injections are taking adversarial tradecraft and weaponized AI to a new level of threat through semantics that cloak injection attempts.
Gartner's research puts it bluntly: "Businesses will embrace generative AI, regardless of security." The firm found 89% of business technologists would bypass cybersecurity guidance to meet a business objective. Shadow AI isn't a risk — it's a certainty.
"Threat actors using AI as an attack vector has been accelerated, and they are so far in front of us as defenders," Riemer told VentureBeat. "We need to get on a bandwagon as defenders to start utilizing AI; not just in deepfake detection, but in identity management. How can I use AI to determine if what's coming at me is real?"
Carter Rees, VP of AI at


2 days ago
75






English (US)