Boardrooms cheered record AI rollouts while basic safeguards frayed, and attackers quietly slipped through reopened cracks. The tension between speed and security was no longer theoretical; it was surfacing in real incidents where sanctioned AI projects stumbled on fundamentals long considered solved.
Why This Matters
Enterprises adopted AI at a pace that outstripped guardrails, and the cost showed up in exposures that should have been preventable. The headline risk sat not only in new attack classes but in old habits returning: weak encryption, loose access, and brittle segmentation. As one executive put it, “Novel threats grab airtime, but it is still hygiene that determines resilience.”
Mandiant’s fieldwork underlined the gap. In sanctioned environments—far from “shadow AI”—red teams bypassed controls by nudging classifications, hijacking unprotected browser-to-AI links, and turning a victim’s own assistants into tireless accomplices after a single social-engineering win. The lesson landed hard: adoption without rigor resets the clock on security maturity.
Inside the Rush
On stage at Google Cloud Next, Mandiant Consulting VP Jurgen Kutscher issued the warning plainly. “The rush to deploy is reviving failures many thought were behind them,” he said, noting that teams, dazzled by capability gains, deferred the checkpoints that once grounded every new system. That deferral gave attackers both time and surface area.
Pressure from the business compounded the problem. Project leads accelerated proofs of value into production, assuming AI-specific tools would offset traditional risk. Instead, controls fragmented across teams, and CISOs often received visibility late—after integrations sprawled across identity, data, and model tiers.
Where Systems Break
The tactics were simple but devastating. By manipulating data labels, attackers walked sensitive information past DLP as if on a cleared route. With unencrypted or weakly protected traffic between AI systems and browsers, they intercepted prompts and outputs, enabling session hijacks and stealthy exfiltration.
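The label-manipulation tactic can be illustrated with a minimal sketch: a DLP gate that trusts a record's caller-supplied classification label is defeated by editing the label, never the data. All names here are hypothetical, not a specific product's API.

```python
# Hypothetical sketch of the flaw: a DLP egress check that keys only on a
# record's classification label can be bypassed by tampering with the label.

BLOCKED_LABELS = {"confidential", "restricted"}

def dlp_allows_egress(record: dict) -> bool:
    # Flawed check: trusts the caller-supplied label, never inspects the body.
    return record.get("label") not in BLOCKED_LABELS

secret = {"label": "confidential", "body": "customer records ..."}
assert not dlp_allows_egress(secret)   # blocked, as intended

secret["label"] = "public"             # attacker nudges the classification
assert dlp_allows_egress(secret)       # same data now walks out unchallenged
```

Hardening means binding labels to content, for example by re-scanning payloads at the egress point or cryptographically signing classifications, so a relabeled record no longer passes.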
After a successful lure, the victim’s own AI did the rest: automating policy edits, replicating access changes, and moving data faster than human responders could flag it. Meanwhile, rushed delivery pipelines skipped threat modeling and segmentation tests, producing environments that failed under basic red-team pressure.
Expert Signals
Mandiant’s topline finding cut through the noise: encryption, access control, segmentation, and validation still decided outcomes. In exercises, sanctioned workflows failed baseline checks more often than leaders expected, revealing that novelty had crowded out the basics that anchor resilience.
CISOs cited governance and architecture as lagging indicators. Budgets flowed to new capabilities, while hygiene remained underfunded. As one security chief noted, “If controls trail the rollout, the attack path writes itself.” The consensus pointed to a fixable problem—provided ownership, scope, and testing were set early.
Course Correction
A credible path forward started with governance before scale: assign ownership across data, models, platforms, and products, and require risk reviews for every AI workflow. Architecture then needed secure-by-default patterns—encryption in transit and at rest, tier isolation with identity boundaries, and segmentation proven through adversarial path testing.
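As a rough illustration of segmentation proven through adversarial path testing, a red team’s core question reduces to reachability over the allowed-flow graph. The tier names and flows below are illustrative assumptions, not Mandiant’s model.

```python
# Sketch: treat permitted network flows as a directed graph and ask, via BFS,
# whether a compromised source tier can reach a protected destination tier.
from collections import deque

ALLOWED_FLOWS = {
    "browser": {"api_gateway"},
    "api_gateway": {"app_tier"},
    "app_tier": {"model_tier"},
    "model_tier": set(),
    "data_tier": set(),   # nothing may initiate flows from the data tier
}

def reachable(src: str, dst: str) -> bool:
    """Breadth-first search over allowed flows from src toward dst."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in ALLOWED_FLOWS.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Adversarial path questions a red team would automate:
assert reachable("browser", "model_tier")     # legitimate path via gateway
assert not reachable("browser", "data_tier")  # segmentation boundary holds
```

Running checks like these in CI turns segmentation from a diagram into a continuously verified property.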
From there, organizations locked classification policies against tampering, enforced least privilege with step-up verification for sensitive actions, and gated admin changes behind multi-party approval. Continuous red-teaming targeted label tampering, prompt injection, and lateral AI misuse, while monitoring captured prompts, outputs, and cross-system calls with tuned DLP and egress controls.

Executives tied rollout speed to control maturity, and the shift from exuberant adoption to disciplined delivery signaled that the AI payoff would endure.
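Two of the controls named above, step-up verification for sensitive actions and multi-party approval for admin changes, reduce to a small authorization gate. The action names and thresholds below are illustrative assumptions, not a specific vendor’s policy language.

```python
# Sketch of an authorization gate: routine actions pass, sensitive actions
# require fresh step-up verification, admin changes add multi-party approval.

SENSITIVE = {"export_data", "edit_policy"}
ADMIN = {"rotate_keys", "change_acl"}

def authorize(action: str, *, mfa_verified: bool, approvals: int) -> bool:
    if action in ADMIN:
        # Admin changes: step-up plus at least two distinct approvers.
        return mfa_verified and approvals >= 2
    if action in SENSITIVE:
        # Sensitive actions: require fresh step-up verification.
        return mfa_verified
    return True  # routine actions pass through

assert authorize("read_dashboard", mfa_verified=False, approvals=0)
assert not authorize("export_data", mfa_verified=False, approvals=0)
assert authorize("export_data", mfa_verified=True, approvals=0)
assert not authorize("rotate_keys", mfa_verified=True, approvals=1)
assert authorize("rotate_keys", mfa_verified=True, approvals=2)
```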

