The Email Server: A Case Study in Institutional Failure

Let me set the scene. Not the political theater—you’ve seen enough of that. Not the cable news shouting matches or the congressional hearing clips or the campaign trail sound bites. I’m talking about the scene behind the scene. The one nobody covered because there was no camera crew.

I was in federal law enforcement circles when the email server story cracked wide open. I was sitting in a room with people who’d spent their entire careers handling classified information—people who could recite security protocols in their sleep, who’d personally watched colleagues get investigated, disciplined, and prosecuted for violating those protocols. People who understood, deep in their bones, what the rules were and what happened when you broke them.

The first reaction in that room wasn’t outrage. It wasn’t partisan fury. It wasn’t the theatrical indignation you saw on TV.

It was something much quieter. Much more dangerous.

It was recognition. The slow, nauseating recognition that what the news was describing—a private server, classified material on unsecured systems, destroyed records—was behavior that would’ve ended any of their careers. Behavior that, at their level, would’ve triggered an investigation, a prosecution, and very possibly a prison cell.

And the unspoken thought filling that room—the one I could read on every face even though nobody said it—was devastatingly simple: “If I had done that, I’d be locked up right now.”

That thought. That one simple, factual, undeniable comparison. It did more damage to institutional trust than any single scandal I’ve ever witnessed. Not because the thought was new—everyone already suspected different rules existed for different people. But because the evidence was now so public, so thoroughly documented, so impossible to wave away, that suspicion hardened into certainty.


Before I go further, I need to say something that’s going to frustrate people on both sides. And I don’t care. Because this is bigger than partisan satisfaction.

This chapter is not about one person.

I know that’s hard to accept. I know that in our current political climate, every scandal collapses into a morality play with heroes and villains, and the only question is whose team you’re on. I know half the people reading this want me to condemn a specific individual, and the other half want me to defend them, and both groups are going to be disappointed.

Because the email server incident, understood properly, isn’t a story about individual misconduct. It’s a specimen. A slide under a microscope. A case study in how institutional design fails—how a system’s architecture creates conditions where bad behavior becomes not just possible but rational.

The bacteria on this slide were there before the incident. They’ll be there after it. They’ll be there long after the specific people involved have left public life. Focusing on the person is like treating a fever with an ice pack. You might feel better for an hour. But the infection—the systemic infection that set the stage for the fever—is still raging.

So let’s look at what the slide actually shows. Not the political drama. The institutional mechanics.


The first thing the case exposes is what I call the Incentive Trap—and once you see it, it changes how you think about institutional failure.

Using a private server for government communications is, from a purely individual cost-benefit standpoint, a rational move. I’m not saying that approvingly. I’m saying it analytically. Look at the incentives.

A private server gives you total control over your own communications record. It shields you from Freedom of Information Act requests that might surface embarrassing or politically inconvenient exchanges. It lets you manage your digital footprint without the clunky, slow, restrictive protocols governing official systems. It’s faster, more convenient, more private, and more controllable than the approved channel.

Now look at the other side. What’s the actual cost of non-compliance? On paper, it’s severe—criminal prosecution, career destruction, prison time. But in practice—and this is the critical gap—enforcement was almost entirely voluntary. The system assumed senior officials would self-report personal device use for government business. That they’d voluntarily submit communications for archiving. That they would, out of a sense of duty, choose the harder, less convenient, more exposed path.

The system ran on honor. And honor, without enforcement behind it, is a wish—not a policy.

When the expected cost of following the rules (inconvenience, exposure, loss of control) consistently exceeds the expected cost of breaking them (which approaches zero when enforcement is honor-based), rational actors break the rules. Every time. Not because they’re evil. Not because they lack integrity. Because they’re human, and humans respond to incentive structures, not to words on a page that nobody backs up.

That’s the Incentive Trap: a system designed around voluntary compliance, with no mechanism to make non-compliance costly enough to prevent it. It’s like building a bank vault with a sign that reads “Please don’t rob this bank” instead of a lock. The sign expresses a perfectly valid policy. It just doesn’t work.
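
If it helps to see the trap as arithmetic, here’s a minimal sketch of that calculation in Python. Every number in it is an illustrative assumption, not a measurement; the point is the shape of the comparison, not the values.

```python
# A minimal sketch of the Incentive Trap as an expected-cost comparison.
# All numbers below are illustrative assumptions, not real data.

def expected_cost(penalty: float, enforcement_probability: float,
                  daily_friction: float) -> float:
    """Expected cost of a choice: the paper penalty discounted by the
    odds it is ever applied, plus the everyday friction of the choice."""
    return penalty * enforcement_probability + daily_friction

# Following the rules: zero penalty risk, but constant inconvenience,
# FOIA exposure, and loss of control over your own record.
comply = expected_cost(penalty=0.0, enforcement_probability=0.0,
                       daily_friction=100.0)

# Breaking the rules under honor-based enforcement: a severe penalty
# on paper, but an enforcement probability that approaches zero.
defect = expected_cost(penalty=10_000.0, enforcement_probability=0.001,
                       daily_friction=5.0)

print(f"comply: {comply}, defect: {defect}")
# comply: 100.0, defect: 15.0 -- the rational actor breaks the rules.
```

Notice the only lever that flips the outcome: the enforcement probability. Raise it from near zero to near certainty and the same math favors compliance, which is exactly where this chapter is headed.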


The second thing the case reveals is the selective enforcement problem—and this connects directly to the double standard I laid out in the previous chapter.

Let me strip the timeline down to essentials, without the political noise.

A senior government official used a private server for official business, including communications containing classified information at multiple levels. An FBI investigation confirmed classified material on the unsecured server. The FBI director publicly called the behavior “extremely careless”—a phrase that, where I come from, would be a career death sentence all by itself. Then he recommended no prosecution.

Hold that outcome in your mind while I tell you about people I personally knew.

One agent I worked with was investigated for taking work materials home on an unapproved personal device. No classified information involved. No evidence the material was compromised or seen by anyone unauthorized. A procedural violation, pure and simple. He was suspended. Investigated for months. Forced into administrative limbo that shredded his professional reputation. Eventually pushed into early retirement with a permanent stain on his record.

Another colleague was disciplined for a single email that contained information later reclassified to a higher sensitivity level. He hadn’t mishandled anything at the time he sent it. The classification changed after the fact. But the investigation treated it as a violation, and the consequences were real: frozen promotion, mandatory retraining, a formal reprimand that followed him the rest of his career.

I could give you a dozen more. But the contrast doesn’t need elaboration. It speaks for itself, louder than any commentary could make it.

Same rules. Same violations—or in some cases, lesser ones. Radically different consequences. The only variable that changed was the name on the file.


But here’s what most people miss about this contrast, and it’s the insight I most want you to grasp.

The primary damage isn’t to the person who escaped consequences. They got lucky—or connected, or shielded. That’s their story.

The primary damage is to every person who watched the double standard play out and drew the obvious conclusion.

If the rules don’t apply equally, why should I sacrifice to follow them? If the system shields the powerful and punishes the ordinary, what exactly am I being loyal to? If my career can be destroyed for a fraction of what someone else walks away from with a public shrug, what’s the rational case for playing by the rules?

These aren’t rhetorical questions. These are the actual, urgent thoughts that ran through the minds of thousands of dedicated public servants who watched this unfold. And every one of those thoughts is a hairline crack in the foundation of institutional trust. Individually, each crack is tiny. Together, they compromise the whole structure.


The third lesson—the most counterintuitive, the one I want you to sit with—is about the difference between punishing the person and fixing the system.

The natural public response to a case like this is visceral and instant: punish the individual. Fire them. Prosecute them. Make an example. Send a message.

I get it. I’ve felt that hot, righteous anger in my own chest—the demand for accountability, the need to see consequences applied.

But here’s the uncomfortable truth: punishing the individual, satisfying as it would be, doesn’t fix the problem. It gives the public a sense of closure—“justice was served”—while leaving the institutional conditions that created the problem completely and totally intact.

Think about it. If you punish this person and change nothing else about the system, the next person in that chair faces the exact same incentive structure. The same convenience of private channels. The same toothless enforcement. The same gap between the rules on paper and the rules in practice. The same math that makes non-compliance the rational call.

And being a rational actor in an unchanged system, they’ll make the same calculation. Reach the same conclusion. Do the same thing. Maybe they’ll be a bit more careful about covering tracks, having learned from their predecessor’s exposure. But the behavior will repeat, because the incentives that produced it haven’t changed.

Replacing the person is treating the symptom. Redesigning the incentive structure is treating the disease. One feels satisfying. The other actually works.

This is something the Secret Service taught me about analyzing system failures. When something goes wrong on a protective detail—a breach, a missed threat, a comms breakdown—the first question is never “Who screwed up?” The first question is always: “What about our system allowed this to happen?”

Because if the system allowed it once, it’ll allow it again. Regardless of who’s in the seat. Fire the agent who made the error and replace them with the best you’ve ever trained—if the system has the same hole, the same failure will eventually come back. The person is interchangeable. The system is the constant.


So what does real reform look like? Not the performative kind that makes headlines. The kind that actually prevents the next failure.

It looks like making compliance easier than non-compliance. Designing official communication systems that are genuinely as fast, convenient, and user-friendly as private alternatives—so using the official channel isn’t a sacrifice, it’s the path of least resistance.

It looks like automated enforcement that doesn’t depend on self-reporting. Technical systems that flag unauthorized devices, monitor data flows, detect classified material on unsecured networks—not as surveillance, but as structural safeguards. The same way a bank vault has a lock instead of a sign. (I’ll sketch what that can look like in code at the end of this section.)

It looks like consequences applied equally, immediately, and visibly—without regard to the political status of the person involved. Not because equal enforcement is fair (though it is), but because it’s the only way to maintain the credibility of the rules themselves.

It looks boring. It looks bureaucratic. It looks like IT upgrades and compliance training and revised protocols and none of the dramatic courtroom moments that make good TV.

But it works. Because it changes the math. It makes the rational choice and the right choice the same choice. And that—reshaping the incentive structure so that doing the right thing is also doing the easy thing—is the only reliable way to get consistent behavior from human beings operating inside complex institutions.
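
To make the “lock instead of a sign” idea concrete, here’s a minimal sketch in Python of the automated enforcement described above: a gateway rule that checks every message against approved systems instead of trusting anyone to self-report. The domain names, the Message type, and the route function are all hypothetical, invented for illustration; a real system would be far more involved.

```python
# A minimal sketch of enforcement by architecture: a mail gateway that
# blocks official traffic routed through unapproved servers, rather
# than relying on officials to self-report. All names are hypothetical.

from dataclasses import dataclass

APPROVED_DOMAINS = {"agency.gov"}  # assumption: the official system

@dataclass
class Message:
    sender: str
    recipient: str
    body: str

def route(message: Message) -> str:
    """Deliver mail only when both endpoints live on approved systems.
    The check runs on every message, every time; no honor involved."""
    endpoints = (message.sender, message.recipient)
    if all(addr.split("@")[-1] in APPROVED_DOMAINS for addr in endpoints):
        return "delivered and archived"
    return "blocked: unapproved server, incident logged automatically"

print(route(Message("aide@agency.gov", "chief@agency.gov", "schedule")))
# delivered and archived
print(route(Message("chief@privateserver.com", "aide@agency.gov", "schedule")))
# blocked: unapproved server, incident logged automatically
```

The design point is the lock, not the code: the rule fires automatically, on every message, whether or not anyone chooses to honor it.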


I’ll leave you with a question, and I want you to take it personally. Not as a political exercise. As a mirror.

How many email servers are running in your world right now?

Not literal email servers. The metaphorical kind. The workarounds everyone in your organization knows about but nobody addresses. The informal systems that exist because the formal ones are too slow, too clunky, too inconvenient to actually use. The rules that are technically on the books but practically ignored by everyone—until someone inconvenient gets caught, at which point they’re suddenly, selectively, theatrically enforced.

Every organization has them. Every institution. Every family. And every one of them is a ticking clock, counting down to the next scandal that everyone will treat as a surprise even though the conditions that made it inevitable were visible for years.

The fix isn’t finding the person who set up the server. That’s the easy part—and the useless part.

The fix is building a system where setting up the server doesn’t make sense. Where compliance is easier than non-compliance. Where rules are enforced by architecture, not by honor. Where doing the right thing doesn’t require heroic self-sacrifice—it just requires following the path of least resistance, because the system was designed to make the right path the easy path.

That’s the fight worth having. Not the fight against individuals. The fight against structures that make bad behavior the rational choice.

Fix the structure, and the behavior fixes itself.