Why 47? The Math Behind “AI Attacks 47x Faster Than Humans”
(Spoiler: the real numbers are worse.)
People keep asking me to source the “47 times faster” number I’ve been citing in talks and interviews. Fair question.
I’ve thrown it around in media hits, keynotes, and SANS material for the better part of a year, and the honest answer is that it came from a napkin-math exercise I deliberately made conservative enough that nobody could credibly argue the other direction.
So here’s the full breakdown, rough math and all.
The Inputs
I spent the past year collecting speed measurements from actual research and operational testing. Not vendor marketing decks, not projections, not what some analyst thinks might happen in 2028. Measured times from systems running real exploit chains against real network configurations.
Three data points anchored the math:
MIT’s ALFA-Chains research
ALFA-Chains[1] uses AI planning to discover and chain privilege escalation and remote exploits across networks. The paper tested against networks ranging from 3 to 200 hosts using 1,880 Metasploit exploits and 1,903 exploits from the Core Certified Exploit Library. On a 20-host network vulnerable to 83 exploits, ALFA-Chains found an exploit chain in 0.01 seconds. On a 200-host network with 114 exploits, it found 13 exploit chains in 26.25 seconds. The paper’s own summary: discovering multiple exploit chains across networks of up to 200 hosts “in under 30 seconds.” (I doubled that to 60 seconds for my rough math, because I wanted cushion. More on that in a minute.)
A separate line of research backs this up from a different angle: benchmarks of LLM agents on CTF-style challenges[2] show AI agents completing complex security tasks faster than human baselines, adding qualitative support for the speed differential even in adversarial problem-solving contexts.
Horizon3’s NodeZero testing
Their autonomous pentesting platform achieved full privilege escalation in approximately 60 seconds in documented cases. Their blog puts it directly: in one case, NodeZero achieved exploitation within 60 seconds of discovering a vulnerable system.[3] (When your autonomous pentest tool is owning boxes in under a minute, and your defenders are still arguing about change management tickets, you have a timing problem.)
CrowdStrike’s 2023 Threat Hunting Report
Average time from initial compromise to lateral movement (which typically requires privilege escalation) hit a record low of 79 minutes. CrowdStrike also reported fastest observed breakout times of around 7 minutes.[4] The broader industry range was roughly 48 to 120 minutes, depending on which report and which percentile you grab.
The Math (Such As It Is)
I want to show the work here, because I think the methodology matters more than the number itself.
I took the “under 30 seconds” from ALFA-Chains and doubled it to 60 seconds as my AI benchmark. Not because the research said 60 seconds, but because I wanted a number no one could accuse me of cherry-picking downward. (I also wanted to account for the fact that ALFA-Chains is operating in a modeled environment, and real-world conditions add friction. Though honestly, Horizon3 hitting 60 seconds in operational testing suggests my doubling might have been unnecessary.)
Then I ran comparisons against human benchmarks from the CrowdStrike data:
If you use 79 minutes as the human benchmark (CrowdStrike’s reported average), 79 minutes divided by 30 seconds gives you roughly 158x faster. Even against my padded 60-second AI benchmark, it’s about 79x.
If you use 60 minutes (a round number in the range CrowdStrike reported, since they often described 1-2 hours), you get 120x against 30 seconds.
If you use 48 minutes (a breakout time frequently cited across multiple threat reports), you get 96x against 30 seconds, or 48x against my padded 60-second benchmark.
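None of this requires more than division. A few lines make the arithmetic above explicit and show how sensitive the ratio is to each input (the figures are the ones cited above; the function name is mine):

```python
# Napkin math behind "47x": human breakout time vs. AI exploit-chain time.
AI_MEASURED_S = 30  # ALFA-Chains: chains found "in under 30 seconds"
AI_PADDED_S = 60    # doubled for cushion; also Horizon3's observed time

def speedup(human_minutes: float, ai_seconds: float) -> float:
    """Ratio of human breakout time to AI exploit-chain time."""
    return (human_minutes * 60) / ai_seconds

# CrowdStrike average, a round number in the reported range, and the low end.
for human_min in (79, 60, 48):
    print(f"{human_min} min human benchmark: "
          f"{speedup(human_min, AI_MEASURED_S):.0f}x vs 30s, "
          f"{speedup(human_min, AI_PADDED_S):.0f}x vs 60s")
```

Running it reproduces every ratio in the list above: 158x, 120x, and 96x against the measured 30 seconds, and 79x, 60x, and 48x against the padded benchmark.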
But nobody likes even numbers. (I don’t know why---something about odd numbers feels more honest, less manufactured.) So I pulled an Alan Turing and dropped it by one. 47.[5]
(Yes, that’s the actual reason it’s 47 and not 48. I wish I had a more rigorous justification. I don’t.)
Why Conservative Was the Point
I could have gone with 120x or 158x. The math, the research, and Horizon3’s operational data all support numbers in that range. But presenting speed differentials to rooms full of security leaders and policymakers means dealing with a specific cognitive problem: if the number triggers their “that can’t be right” reflex, they stop listening.
I needed a number that was large enough to communicate the severity of the speed gap, but small enough that a skeptical CISO wouldn’t dismiss it before engaging with the implications. 47x lands in the range where most experienced security professionals pause, nod slowly, and say “yeah, that tracks.” (158x makes people think you’re selling something. 47x makes people think you did homework.)
The other reason for conservative: all of these AI measurements come from publicly available tools and frameworks. ALFA-Chains uses Metasploit. Horizon3’s NodeZero operates within commercial pentesting constraints. These are not weaponized APT capabilities. Nation-state actors running custom exploit chains with purpose-built AI agents are almost certainly moving faster than anything measured in published research. (Anthropic’s disclosure about CCP-sponsored actors using Claude Code for autonomous reconnaissance gives you a hint about where this is actually going. Those actors weren’t constrained to Metasploit modules.)
47x represents what publicly available tools achieve today against average human response times from 2023. APT-level capabilities against 2023 numbers would be worse. APT-level capabilities against 2025 human response times (which, by the way, haven’t gotten meaningfully faster) would be worse still.
What the Number Actually Tells You
Saying “30 seconds” or “60 seconds” without a comparison point means nothing to most people. My daughter can microwave a burrito in 60 seconds. (She’s very efficient.) The number only becomes useful when you put it against the human side of the ledger.
A decade ago, the advanced persistent threats I helped investigate took three to six months walking through the kill chain from initial compromise to operational goals. By 2023, CrowdStrike was measuring that in minutes. Now, with AI-augmented exploit discovery and chaining, the operational steps that used to define the “breakout window” are collapsing to seconds.
The 47x figure gives people a ratio they can actually use in boardroom conversations and budget justifications. It’s specific enough to cite and conservative enough to defend. (It’s also, frankly, the kind of number that will look quaint in about 12 months when the next generation of autonomous agents hits. I fully expect someone to pull this post up in 2027 and ask why I was being so optimistic.)
The Caveats I Want on the Record
This wasn’t an academic paper. It was rough math built from real data to give an illustrative metric for talks and interviews. The specific inputs: ALFA-Chains timing data from MIT, Horizon3’s operational testing, CrowdStrike’s threat hunting benchmarks. The methodology: compare AI exploit chain speeds against human breakout times, apply aggressive conservatism at every step, arrive at a defensible number.
The CrowdStrike data I used is from 2023 reports. I deliberately grabbed pre-2025 human benchmarks because I’m guessing (and this is explicitly a guess, not a measured claim) that 2025 attack data will show increasing AI assistance compressing human attacker timelines too. Using 2023 numbers keeps the comparison cleaner, even if it makes the 47x even more conservative against current reality.
I also want to note the qualitative validation: MIT’s autonomous agents have demonstrated the ability to perform complex hacks in seconds or minutes that take humans hours. Multiple independent research efforts point in the same direction. No credible data source I’ve found suggests the speed differential is smaller than what 47x implies. Every new measurement I’ve seen suggests it’s larger.
The Uncomfortable Part
47x is the number I’m comfortable defending in public because it’s built on published research and deliberately conservative assumptions. The number I think about at 2 AM is quite a bit higher. (sighhhhh)
When autonomous AI agents move from chaining known Metasploit exploits to chaining zero-days discovered by other AI systems, the speed comparison against human operators becomes less a ratio and more a category difference. Like comparing a bicycle to a jet and noting the jet is “faster.” Technically accurate, functionally useless as a comparison.
But we’re not there yet. For now, 47x gives defenders and decision-makers a concrete, defensible benchmark for how wide the speed gap already is. Cite it in your next board briefing, use it in your budget justification, and understand that the real number is probably worse.
(And if you’re reading this in 2027 and laughing at how slow 47x sounds, yeah. I saw that coming too.)
Rob T. Lee is Chief AI Officer & Chief of Research, SANS Institute
[1] Tulla, M. et al. “ALFA-Chains: AI-Supported Discovery of Privilege Escalation and Remote Exploit Chains.” MIT/University of Naples Federico II, April 2025. arXiv:2504.07287v1.
[2] LLM agent vs. CTF benchmarking: arxiv.org/html/2504.06017v1
[3] Horizon3.ai. “Vulnerable vs. Exploitable: Why Understanding the Difference Matters to Your Security Posture.”
[4] CrowdStrike. “2023 Threat Hunting Report.” Average breakout time: 79 minutes. Fastest observed: ~7 minutes.
[5] SANS “Own AI Securely” materials, 2025. “How We Arrived at ‘47x Faster’” methodology documentation.


Hi Rob, a co-author of "The Promptware Kill Chain" here (https://arxiv.org/abs/2601.09625).
This is a great post, thank you.
A few points to consider:
1. In 2025, Palo Alto Networks Unit 42 researchers simulated a ransomware attack integrating GenAI at each stage of the attack: "Our testing took the time to exfiltration from the median of two days down to 25 minutes - about 100 times faster" (https://www.paloaltonetworks.com/resources/research/unit-42-incident-response-report-2025).
2. Much of our defense rests on "prevention will eventually fail", making rapid detection the primary defensive strategy. That's a losing position long-term if attackers operate at machine speed - we're reduced to a race of who out-automates whom (as Chuvakin put it in a different context). There are some other strategies in the literature, but nothing yet mainstream.
3. That said, not everything scales with AI. Some attack stages are constrained by physical reality. Exfiltration, for instance, is bounded by the victim's available bandwidth - no amount of AI changes that. What I haven't seen yet is a systematic breakdown of which attack stages are acceleratable by AI and which have hard physical ceilings. That analysis would be worth having.
4. I'm also missing documented real-world incidents where AI involvement was inferred from the speed of the threat actor's operation. Would love to see some evidence here.