Fighting the Last War
I’ll never forget the first time I cracked open a threat assessment briefing as a fresh agent on the Presidential Protective Division. I expected cutting-edge intelligence — the kind of stuff you see in the movies, analysts tracking threats in real time across banks of screens, getting ahead of attacks before they materialize. What I got was a binder. A fat, dog-eared binder packed with historical incident reports, behavioral profiles drawn from past cases, and response protocols built on what had already gone wrong. Not what might go wrong next. What had gone wrong.
Don’t misread me — the people who assembled those briefings were sharp. Dedicated. Professionals to the marrow. But the system they operated inside had a fundamental flaw hardwired into its architecture, and it’s the same flaw that haunts every defensive organization on the planet: the threat assessment apparatus was a rearview mirror. It could tell you with surgical precision what the last attacker looked like, how he approached, what weapons he carried, what warning signs he gave off. It was essentially useless at telling you what the next attacker would look like — because the next attacker, by definition, would be doing something nobody had seen before.
This is the Immunity Lag Law, and it’s as close to an iron rule as security has: the defender is always fighting the last war.
Let me borrow an analogy from immunology, because it maps almost perfectly onto what I lived through in the Service.
Your body’s immune system works by recognizing pathogens it’s encountered before. A flu shot gives your immune system a preview — an inactivated or weakened version of the virus — and it manufactures antibodies tailored to that exact strain. Elegant system. One catch: the flu virus mutates every year. By the time your immune system has tooled up for last year’s strain, this year’s version has already evolved past it. Your immune system isn’t stupid. It’s just structurally late.
That’s precisely how threat assessment works at the Secret Service — and at the FBI, DHS, CIA, and every other agency in the alphabet soup of American security. We build our models on data. Data comes from past incidents. Past incidents tell us what attackers did. But the attacker who succeeds tomorrow will do something different. That’s the whole point. If they replicated the playbook, we’d catch them.
The lag isn’t a glitch. It’s baked in. An unavoidable consequence of how defensive systems are constructed. You can’t model what hasn’t happened yet. You can only gear up for what you’ve already seen. And the gap between “what we’ve prepared for” and “what’s actually coming” — that gap is where people die.
I watched this dynamic repeat throughout my years in the Service. After JFK was killed, security protocols around the presidential motorcade were torn apart and rebuilt from scratch. Open-top cars vanished. Routes were hardened. Counter-sniper teams were deployed. The response was massive, professional, thorough. And it was engineered to prevent one specific scenario: another Lee Harvey Oswald with a rifle in a window.
The system got very good at stopping that. But here’s the question nobody asked loudly enough: what about the threat that doesn’t look like Oswald? The one that doesn’t use a rifle? The one that doesn’t come from a building? The protocols were built for a specific pathogen. Brilliant at recognizing and neutralizing that pathogen. Blind to every pathogen that hadn’t been catalogued yet.
This isn’t unique to the Secret Service. It’s the structural DNA of every threat assessment system ever built. Three root causes drive it, and they compound each other in ways that make the lag nearly impossible to close.
The Data Dependency Trap. Threat models are built on historical case databases. The richer the database, the more confident the analysts feel. But confidence built on historical data is a trap — because the truly lethal attack is precisely the one that isn’t in your database. The more you lean on past patterns, the blinder you become to novel ones. Your database turns into a comfort blanket that feels like body armor. CNN reporting on the 2026 White House Correspondents’ Dinner breach laid this bare: the Secret Service had screened every hotel guest against databases of known threats, wanted persons, and flagged individuals. The suspect — carrying a shotgun, a handgun, and knives — appeared nowhere in any of them. No criminal record. No prior flags. A clean skin, invisible to every algorithmic tripwire the system had built. The databases worked perfectly. They just didn’t contain the right name.
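If you want the trap in miniature, here is a toy sketch of what database screening actually computes. Every name and list below is invented for illustration; nothing reflects any real system or data. The check answers exactly one question, "is this identifier in my history?", and that question is structurally silent about first-time attackers.

```python
# Toy model of watchlist screening. Every name and list is invented
# for illustration; no real system or data is represented.

KNOWN_THREATS = {"alias_042", "alias_173"}   # drawn from past incidents
WANTED_PERSONS = {"fugitive_007"}            # open warrants
FLAGGED = {"subject_301"}                    # prior behavioral flags

def screen(guest_id):
    """Return 'DENY' if the guest appears in any historical database,
    'CLEAR' otherwise. Note what this cannot do: identify a first-time
    attacker, who by definition appears in none of these sets."""
    for db in (KNOWN_THREATS, WANTED_PERSONS, FLAGGED):
        if guest_id in db:
            return "DENY"
    return "CLEAR"

print(screen("alias_042"))         # DENY  (the last war, fought well)
print(screen("first_time_actor"))  # CLEAR (the next war, invisible)
```

The failure isn't a bug in the lookup. The lookup is perfect. The failure is that the lookup is the wrong question.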
The Modeling Time Gap. Even when a new threat is identified, the chain from detection to deployed defense is agonizingly slow. New threat emerges → intelligence community flags it → analysts study it → models get updated → protocols are rewritten → field agents are retrained → deployment is complete. Every link burns time. And while you’re grinding through that chain, the attacker has already moved on to the next innovation. The antibody is always being manufactured for a virus that’s already mutated.
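You can put rough numbers on that chain. The toy simulation below (all parameters invented for illustration; nothing here models a real agency's actual pipeline) tracks how often a defense with a fixed detect-to-deploy latency is tooled for the attacker's current tactic. When the pipeline takes longer than the attacker's mutation cycle, coverage isn't just low. It's zero.

```python
# Toy simulation of the modeling time gap. All numbers are invented;
# nothing here models a real agency's actual pipeline.

def coverage(mutation_period, pipeline_latency, horizon=600):
    """Fraction of months the defense is tooled for the attacker's
    *current* tactic, given a fixed detect-to-deploy latency."""
    covered = 0
    for month in range(horizon):
        current = month // mutation_period  # tactic in use right now
        # The defense is tooled for the newest tactic whose
        # detect-to-deploy pipeline has had time to finish.
        if month >= pipeline_latency:
            defended = (month - pipeline_latency) // mutation_period
        else:
            defended = -1  # nothing deployed yet
        covered += (defended == current)
    return covered / horizon

# Attacker mutates every 6 months; the pipeline takes 9 to retool.
print(coverage(6, 9))  # 0.0 -- never once defending the live threat
# Cut the latency below the mutation cycle and coverage reappears,
# but it is capped at (period - latency) / period. Never 100%.
print(coverage(6, 3))  # 0.5
```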
Confirmation Bias Amplification. This one scared me more than the others. Threat assessment teams are staffed by smart, experienced people. But smart, experienced people are also the most susceptible to confirmation bias — the pull toward looking for threats inside frameworks they already understand. When an anomalous signal comes in that doesn’t match any known pattern, the instinct isn’t to sound the alarm. The instinct is to file it as noise. Not because they’re lazy. Because the existing models say it’s noise. The immune system doesn’t fail to see the new pathogen — it sees it, classifies it as a normal cell, and waves it through.
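Here is a deliberately crude model of that triage instinct (patterns and signals invented for illustration): score each incoming signal by its overlap with known threat patterns, and escalate only above a threshold. Watch what happens to the genuinely novel signal.

```python
# Crude triage model of confirmation bias. Patterns and signals are
# invented for illustration.

KNOWN_PATTERNS = [
    {"rifle", "elevated_position", "motorcade_route"},
    {"handgun", "rope_line", "crowd"},
]

def escalate(signal):
    """Escalate only if the signal overlaps a known pattern enough.
    The threshold *is* the bias: 'threat' here means 'resembles a
    past threat', so the genuinely novel scores as noise."""
    best = max(len(signal & p) / len(p) for p in KNOWN_PATTERNS)
    return best >= 0.5

print(escalate({"rifle", "elevated_position", "hotel"}))  # True: looks like 1963
print(escalate({"drone", "payload", "gps_waypoints"}))    # False: filed as noise
```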
Stack these three together and you get a loop that’s almost impossible to crack:
New technology hits the market. Attackers adopt it first — because attackers are agile, unencumbered by procurement cycles, and incentivized to innovate. The defensive system can’t recognize the new approach because it doesn’t match any historical pattern. Attack succeeds. Catastrophe. The defensive system scrambles, builds new protocols tailored to counter the attack that just happened. Brief window of effectiveness. Then the attackers evolve again. New lag. New blind spot. Repeat.
I watched this cycle play out with the White House drone incident. A small commercial drone crash-landed on the White House lawn, and suddenly everyone was asking: how do we defend against drones? Fair question. But we were already behind. Drone technology had been commercially available for years before anyone in the security establishment assembled a serious counter-drone protocol. We weren’t getting ahead of anything. We were reacting — reacting to a demonstrated vulnerability rather than anticipating it.
And the next threat won’t be drones. It’ll be something we haven’t imagined yet. Something that doesn’t appear in any briefing binder. Something that will make us feel, once again, like we were fighting the last war while the enemy was already fighting the next one.
So if perfect threat assessment is impossible — and I’m telling you flat out it is — what do we do? Throw up our hands? Accept that the bad guys will always be a step ahead?
No. But we need to fundamentally rethink what “security” actually means.
The traditional approach is what I call the prediction model: try to catalog every possible threat, build a bespoke defense for each one, and pray you haven’t missed any. It’s seductive because it feels scientific. Rigorous. It fills briefing binders with impressive-looking charts and threat matrices. And it’s fundamentally broken because it assumes you can predict the unpredictable.
The alternative is what the Institutional Immune System framework calls the resilience model: stop trying to predict every attack and start building systems that can take a hit and keep functioning. Not “we will prevent every breach” but “when we get breached — and we will — we will contain the damage, respond fast, and recover faster.”
This is a psychologically brutal shift for security professionals. I know because I lived it. Your whole identity is built around preventing bad things. Admitting you can’t prevent everything feels like admitting failure. But it’s not failure — it’s honesty. And honesty is the foundation of every defense that actually holds.
In practical terms, resilience means investing as heavily in response as you do in prevention. It means running drills not just for threats you’ve seen but for ones you haven’t — scenarios where the briefing binder is worthless and your people have to think on their feet. It means building redundancy into every layer of defense so that when one layer fails — and it will — the next layer picks it up. It means cultivating a culture where “I don’t know what this is, but something feels off” is treated as actionable intelligence, not paranoia.
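Redundancy is also the one place where the math works for the defender instead of against him. A rough sketch (layer names and rates invented, and it leans on an independence assumption real layers rarely satisfy; the point is the shape of the math, not the numbers): no single layer below is much better than a coin flip, yet stacked together they leave an attacker only a slim path through.

```python
# Toy model of defense-in-depth. Layer names and rates are invented,
# and the independence assumption is generous: real layers share
# blind spots. The point is the shape of the math, not the numbers.

LAYERS = {
    "perimeter_screening": 0.70,  # catches 70% of what reaches it
    "close_protection":    0.60,
    "rapid_response":      0.50,
    "recovery_protocols":  0.40,  # contains damage rather than prevents
}

def breach_probability(layers):
    """If layers fail independently, the chance an attack defeats all
    of them is the product of the individual miss rates."""
    p_miss = 1.0
    for catch_rate in layers.values():
        p_miss *= (1.0 - catch_rate)
    return p_miss

# Four mediocre layers: 0.30 * 0.40 * 0.50 * 0.60 = 0.036.
print(f"{breach_probability(LAYERS):.3f}")  # ~3.6% defeat all four
```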
I left the Service with deep respect for the men and women who do threat assessment work. They’re among the most dedicated professionals I’ve ever stood alongside. But I also left with a deep frustration at the system that boxes them in — a system that rewards prediction over preparation, confidence over humility, and historical modeling over creative imagination.
The next attack on the President — or on any high-value target — won’t look like the last one. It never does. The immune system that guards our most important institutions will always be a step behind the pathogen. That’s not a criticism. That’s physics. The question isn’t whether we can eliminate the lag. We can’t. The question is whether we’re honest enough to admit it — and brave enough to build our defenses around that truth instead of pretending it away.
The rearview mirror is a useful tool. But if it’s the only window you’re looking through, you’re going to crash.