Discussion about this post

Oleg:

Hi Rob, a co-author of "The Promptware Kill Chain" here (https://arxiv.org/abs/2601.09625).

This is a great post, thank you.

A few points to consider:

1. In 2025, Palo Alto Networks' Unit 42 researchers simulated a ransomware attack integrating GenAI at each stage of the attack: "Our testing took the time to exfiltration from the median of two days down to 25 minutes - about 100 times faster" (https://www.paloaltonetworks.com/resources/research/unit-42-incident-response-report-2025).
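As a quick sanity check on that figure, the quoted medians (two days down to 25 minutes) work out to roughly a 115× speedup, consistent with the report's "about 100 times faster":

```python
# Sanity check of the Unit 42 speedup figure, using the two medians
# quoted in the report (two days before, 25 minutes after).
median_before_min = 2 * 24 * 60   # two days, in minutes
median_after_min = 25
speedup = median_before_min / median_after_min
print(f"~{speedup:.0f}x faster")  # ~115x, i.e. "about 100 times"
```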

2. Much of our defense rests on the assumption that "prevention will eventually fail", making rapid detection the primary defensive strategy. That's a losing position long term if attackers operate at machine speed: we're reduced to a race over who out-automates whom (as Chuvakin put it in a different context). There are other strategies in the literature, but none are mainstream yet.

3. That said, not everything scales with AI. Some attack stages are constrained by physical reality. Exfiltration, for instance, is bounded by the victim's available bandwidth - no amount of AI changes that. What I haven't seen yet is a systematic breakdown of which attack stages AI can accelerate and which have hard physical ceilings. That analysis would be worth having.
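To make the bandwidth ceiling concrete, here is a minimal sketch of the floor on exfiltration time. The function and the numbers in the example are hypothetical, chosen only to illustrate the bound:

```python
# Physical ceiling on exfiltration: however fast AI plans the attack,
# moving the data is bounded by the victim's sustained uplink.
# All figures below are hypothetical illustrations.

def exfil_floor_hours(data_gb: float, uplink_mbps: float) -> float:
    """Minimum hours to move data_gb over a sustained uplink_mbps link."""
    bits = data_gb * 8 * 1000**3          # GB -> bits (decimal units)
    seconds = bits / (uplink_mbps * 1e6)  # Mbps -> bits per second
    return seconds / 3600

# e.g. 500 GB over a 100 Mbps uplink cannot finish in under ~11 hours,
# regardless of how automated the rest of the operation is.
print(round(exfil_floor_hours(500, 100), 1))  # -> 11.1
```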

4. I have also yet to see documented real-world incidents where AI involvement was inferred from the speed of the threat actor's operation. I'd love to see evidence here.
