You hire smart people for their judgment. Then you build systems that filter out that judgment before it reaches decisions. This isn't a failure of individuals. It's a structural pattern that research shows operates in nearly every organization. The people closest to problems have the best information. They're also furthest from decision-makers. Every layer between them is a potential filter.
The result: organizations that pay for expertise but systematically discard it when that expertise conflicts with plans, preferences, or momentum.
The most dramatic demonstration of this pattern happened 39 years ago this month. It killed seven people. Seventeen years later, the same organization repeated the same mistake. Seven more died.
The technical failures were different. The organizational failure was identical.
January 27, 1986
Engineer Roger Boisjoly joined a teleconference to argue against the next morning's shuttle launch. Overnight temperatures at the Florida launchpad were forecast to drop below freezing, colder than for any previous shuttle launch. Boisjoly had data showing the O-ring seals failed in cold weather. Six months earlier, he'd put it in writing: "The result could be a catastrophe of the highest order; loss of human life."
NASA officials were appalled at the no-launch recommendation.
After a recess, Thiokol senior vice president Jerry Mason turned to engineering director Bob Lund with a sentence that would become one of the most studied moments in organizational failure: "Take off your engineering hat and put on your management hat."
The vote flipped. Launch approved. Seventy-three seconds later, seven astronauts were dead.
Boisjoly had the data. He had the expertise. He had the courage to fight for it in the room.
The system filtered him out anyway.
The Comfortable Explanation
The standard interpretation focuses on individual failures. Bad judgment. Reckless managers. People who should have known better.
This framing is comforting because it contains the problem. Fire the bad actors. Fix the protocols. Move on.
Sociologist Diane Vaughan wasn't satisfied. She spent nine years investigating, reviewing over 122,000 pages of NASA documents. Her conclusion overturned everything people thought they understood about Challenger.
The decision to launch wasn't a deviation from NASA's rules. It was made in complete accordance with them.
No one broke the system. The system itself produced the disaster.
Vaughan identified a pattern she called "normalization of deviance." Each shuttle launch with O-ring damage that didn't end in catastrophe quietly expanded what NASA considered acceptable risk. Success became evidence that warnings could be ignored.
The burden of proof gradually inverted. Originally: "Prove it's safe to launch." Over time: "Prove it will fail."
Boisjoly could show O-rings might fail in cold weather. He couldn't prove they would definitely fail on this specific launch. The launch plan only needed to argue success was possible.
Concerns needed certainty. Plans needed plausibility.
In 2003, Space Shuttle Columbia broke apart during reentry. Different technical cause. Engineers had raised concerns about foam strikes. Those concerns traveled up the chain, got softened at each layer, and arrived at decision-makers as minor issues not worth delaying the mission.
NASA brought Vaughan back to consult. Seventeen years later, the same pattern had reasserted itself.
Why This Isn't About NASA
It would be convenient to treat this as an aerospace problem. Unique pressures. Unusual complexity. Lessons for rocket scientists, not the rest of us.
Research shows the pattern is universal.
Elizabeth Morrison and Frances Milliken studied why employees withhold information that could help their organizations. They found three beliefs that drive silence: that speaking up won't change anything because leadership has already decided; that raising concerns risks being labeled a troublemaker; and that challenging the plan might damage relationships with people who control your career.
These beliefs don't require a toxic culture. They emerge naturally in any organization where information must travel upward through hierarchy to reach decisions.
Boisjoly didn't have these hesitations. He fought in the room. He put his concerns in writing. He did everything organizational theorists say employees should do.
The system still filtered him out. Which tells you individual courage isn't the bottleneck.
Amy Edmondson's research on psychological safety illuminates why. Studying hospital units, she discovered that better-performing teams reported more errors, not fewer. The difference wasn't error rate. It was willingness to surface problems.
In most organizations, the problems leadership sees are a filtered subset of the problems that actually exist. Your most experienced people have stopped mentioning things they've learned won't get heard.
How Filtering Works
Three mechanisms operate in nearly every organization. All three were visible at NASA. All three are probably visible in yours.
Hierarchy Softens the Signal
Boisjoly's data didn't go directly to NASA decision-makers. It traveled through Thiokol management first—people with contracts to protect, relationships to maintain. By the time it arrived, urgency had evaporated. The engineer's "this will fail" becomes the manager's "there are some concerns" becomes the director's "the team has questions but nothing blocking." Each layer files down the edge.
Success Normalizes Risk
The first O-ring damage triggered concern. By the tenth successful landing with damage, it was just how things worked. Vaughan watched NASA reinterpret each close call as evidence the system was robust rather than lucky. Success teaches the wrong lesson: that the people raising concerns are being overly cautious.
Burden of Proof Protects the Plan
Boisjoly could show the O-rings might fail; he couldn't prove they would fail on this specific launch. That gap was enough. Concerns must prove catastrophe is certain. The plan only needs to show success is possible.
A 2025 longitudinal study tracking workers over ten weeks found the same dynamic at the individual level. Silence in one situation predicted more silence later—not because people became more risk-averse, but because they learned which concerns the system would absorb and which it would reject. When warnings go unheard, people stop giving them. The organization then develops a climate where speaking up feels futile, and that climate shapes future behavior in a feedback loop.
This asymmetry appears everywhere. Product teams ship despite engineering warnings because "we can't prove it'll break." Strategies launch despite analyst concerns because "the model might be wrong." Decisions proceed despite expert objections because certainty is impossible and plausibility is easy.
Irving Janis documented this pattern in policy disasters from the Bay of Pigs to Vietnam. Groups under pressure develop shared rationalizations protecting the preferred course. Dissenting voices learn the implicit rules about which concerns are welcome and which make you difficult.
The Pattern Across Contexts
Before Crew Resource Management transformed aviation, junior officers routinely failed to challenge captains making dangerous decisions. In Tenerife in 1977, a first officer noticed the captain beginning takeoff without clearance. He mentioned it once, tentatively. The captain dismissed him; 583 people died.
The first officer had the information. The hierarchy filtered it out.
In healthcare, research consistently finds that nurses notice problems before physicians: medication errors, patient deterioration, procedural mistakes. They frequently don't speak up. The status differential between physician and nurse creates the same filtering dynamic NASA had between management and engineering.
In technology, the pattern takes less dramatic form. Engineers flag technical debt that gets overridden by launch timelines. Designers raise usability concerns dismissed as polish. Analysts present data contradicting strategies leadership already committed to.
No one dies. The pattern is identical.
Organizations hire experts, then build structures that discount expertise when it conflicts with momentum.
What Actually Changes This
Individual courage helps. But Boisjoly had courage. The solution has to be structural.
Separate the channel from the chain. When concerns must travel through the same hierarchy that owns the plan, filtering is inevitable. Create paths where expertise can reach decisions without passing through people invested in a particular outcome.
Aviation learned this through disasters. Anonymous safety reporting systems bypass normal hierarchy entirely. Crew Resource Management gives junior officers explicit language and authority to challenge captains. The first officer in Tenerife had information but no legitimate path to be heard. Modern cockpits are designed so that path exists.
Invert the burden of proof. Before major decisions, explicitly ask: what would we need to see to stop this? If the honest answer is "proof of certain failure," acknowledge you're accepting unknown risk. That might be appropriate. It should be conscious.
Some organizations use pre-mortems: imagine the decision already failed, work backward to identify why. This makes failure imaginable rather than requiring someone to prove it will definitely happen.
Track what warnings predict. Most organizations never close the loop between concerns raised and outcomes observed. They don't know which sources of warning are actually predictive.
Start tracking. When someone raises a concern that gets overridden, note it. Check back in six months. Build an evidence base for which warnings to weight heavily. This also gives the people raising concerns credibility when they're right.
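The tracking loop described above can be as simple as a shared ledger: record each overridden concern, attach the outcome at review time, and compute which sources of warning turn out to be predictive. Here is a minimal Python sketch of that idea; the `Concern` fields, the "warning confirmed" label, and the hit-rate metric are illustrative choices, not a prescribed system:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Concern:
    raised_by: str            # person or team that raised the warning
    decision: str             # the decision it was raised against
    summary: str
    raised_on: date
    outcome: Optional[str] = None   # filled in at the follow-up review


class ConcernLog:
    """Minimal ledger linking overridden warnings to later outcomes."""

    def __init__(self) -> None:
        self.entries: list[Concern] = []

    def record(self, concern: Concern) -> None:
        self.entries.append(concern)

    def close_loop(self, decision: str, outcome: str) -> None:
        # At the six-month review, attach what actually happened.
        for c in self.entries:
            if c.decision == decision and c.outcome is None:
                c.outcome = outcome

    def hit_rate(self, source: str) -> float:
        # Fraction of this source's closed-out warnings that proved predictive.
        closed = [c for c in self.entries
                  if c.raised_by == source and c.outcome is not None]
        if not closed:
            return 0.0
        return sum(c.outcome == "warning confirmed" for c in closed) / len(closed)


# Demo: log two overridden concerns, then close the loop on one at review time.
log = ConcernLog()
log.record(Concern("engineering", "launch-v2",
                   "seal unreliable below freezing", date(2025, 1, 10)))
log.record(Concern("engineering", "ship-q3",
                   "load test skipped", date(2025, 2, 1)))
log.close_loop("launch-v2", "warning confirmed")
```

The point of the sketch is the shape, not the tooling: a spreadsheet with the same four columns does the same job. What matters is that the outcome column eventually gets filled in, so the organization learns whose warnings to weight.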
Make near-misses visible. Organizations investigate disasters. Fewer investigate close calls. But close calls are where the pattern becomes visible before it becomes catastrophic.
Every successful launch with O-ring damage was a near-miss that NASA reinterpreted as acceptable risk. Systematic near-miss analysis might have caught the drift before Challenger.
A 2022 study of 600 nurses found that most couldn't even recognize near-misses when they occurred. Only 4% consistently reported them. When researchers investigated why, they found the learning stopped at the group level. Individual nurses would notice problems, but the information never integrated into organizational knowledge. The near-misses stayed invisible to the people who could act on them.
What did you almost get wrong? What would have happened if conditions were slightly different? Treat near-misses as data about your filtering systems, not as relief that nothing bad happened.
Audit your actual architecture. When did someone last raise a concern that changed a decision in your organization? Not a concern that got heard politely and filed away. A concern that actually altered course. How did it travel from the person who held it to the decision that changed? How many layers did it cross?
If you can't identify an example, that tells you something. If the example required unusual courage, unusual access, or unusual circumstances, that tells you something too. Systems that only work when someone is exceptionally brave aren't systems.
Now think about a concern that should have mattered but didn't. Where did it get filtered? What would have needed to be different? The gap between those two answers reveals your organization's actual architecture for hearing expertise.
The Pattern Across Four Weeks
Over the past several weeks, we've examined how organizations fail at decisions that depend on hearing truth.
Judgment doesn't self-correct. The Oakland scouts had decades of experience and complete confidence. Their predictions didn't correlate with wins. Intuition without feedback just grows more certain, not more accurate.
Systems drift without anyone noticing. Netflix built hiring practices that transformed their company, then watched those practices evolve under pressure until they looked nothing like the original. No one decided to abandon the approach. It just happened, one reasonable accommodation at a time.
And hierarchy filters expertise. You hire people specifically because they know things you don't. Then you build structures that discount their knowledge when it conflicts with plans, preferences, or momentum.
The thread connecting these: the solution isn't better individuals. It's better systems for hearing truth.
Better feedback loops so judgment can actually improve. Visible tracking so drift can't hide. Channels that let expertise reach decisions with weight.
The engineers at NASA weren't the problem. The scouts in Oakland weren't stupid. The hiring managers at Netflix weren't lazy.
The systems they operated in filtered out exactly the information those systems needed to hear.
It's January. Organizations everywhere are setting targets, building plans, creating timelines that will require everything to go right.
Somewhere in your organization, someone has information suggesting a plan is riskier than it appears. They might raise it. It might get filtered before it reaches anyone who can act.
The question isn't whether that expertise exists. The question is whether it has a path to be heard.
Challenger is what happens when the answer is no.
Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.
Feng, T., Zhang, X., Tan, L., Su, Y., & Liu, H. (2022). Near-miss organizational learning in nursing within a tertiary hospital: A mixed methods study. BMC Nursing, 21(1), 315.
Janis, I. L. (1982). Groupthink: Psychological studies of policy decisions and fiascoes. Houghton Mifflin.
Morrison, E. W., & Milliken, F. J. (2000). Organizational silence: A barrier to change and development in a pluralistic world. Academy of Management Review, 25(4), 706-725.
Parker, S. K., Holman, D., & Johns, G. (2025). A 10-week longitudinal study of voice and silence: Revealing the energy and social dynamics of speaking up and staying silent. Journal of Occupational and Organizational Psychology.
Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press.