Ben's Book of the Month: Challenger: A True Story of Heroism and Disaster on the Edge of Space


Posted by Ben Rothke

On January 28, 1986, the Space Shuttle Challenger exploded, killing all seven crew members aboard. But what does a space accident from 38 years ago have to do with information security in 2024? As it turns out, a lot. 

In 2014, General Michael Hayden, former Director of the National Security Agency, said that “we kill people based on metadata.” In Challenger: A True Story of Heroism and Disaster on the Edge of Space (Simon & Schuster), author Adam Higginbotham shows that while the Challenger disaster has nothing to do directly with information security, the metadata and lessons around it certainly do. 

Higginbotham has written a fascinating and engaging account of what led to the Space Shuttle Challenger disaster. In a nutshell, the decision to launch was made under significant pressure from NASA, and many people argued that it should not have occurred. But pressure alone was not what led to the disaster.

Engineers from O-ring manufacturer Morton Thiokol articulately shared their concerns about the effect of low temperatures on the resilience of the O-rings, a critical safety component of the shuttle. The engineers said they did not have enough data to determine whether the O-rings would be effective in the cold. Morton Thiokol's vice president of engineering also recommended against launching. However, under incessant external pressure from NASA, their voices went unheeded, with devastating consequences.

Anyone who has worked in information security for more than a few months can attest to the many pressures security teams and professionals must deal with. There is more than a little truth to the observation that CSO really stands for Chief "Scapegoat" Officer.

Dr. Eugene Spafford, known as Spaf, is a distinguished professor of computer science at Purdue University. His first principle of security administration is that “If you have responsibility for security but have no authority to set rules or punish violators, your own role in the organization is to take the blame when something big goes wrong.”

The same pressures and risks that the space shuttle program faced are ones that many in information security face on a daily basis.

There are countless lessons from the Challenger disaster that security professionals can learn from. Here are some of them, in no particular order:

Decision to Launch
  • NASA was under significant pressure to launch, which ultimately undermined safety.
  • Information security teams are often pressured to approve deployments that have yet to undergo, or have not fully completed, security validation.
  • When products or systems are deployed prematurely, the effects can be devastating.
Follow Manufacturer Instructions
  • Morton Thiokol produced the two solid rocket boosters for the shuttle and knew in detail how to operate them safely. NASA chose not to listen to their advice.
  • Security hardware and software documentation must be read, understood, and followed. If not, the consequences can be significant.
Set Realistic Goals
  • NASA's original projected launch schedule for the shuttle called for 24 launches per year.
  • The Rogers Commission later criticized this goal as unrealistic; it created pressure on NASA to launch missions.
  • Information security teams can't be given impossible tasks with unrealistic deadlines. The CISO must have adequate staff and budget if security is to be achieved.
External Pressures
  • Morton Thiokol's management faced significant pressure from their customer, NASA, to launch. Its engineers initially recommended against launching, but Morton Thiokol's leadership reversed that recommendation under pressure and approved the launch.
  • Security contractors and consultants face the same dynamic with their clients.
  • Unreasonable external pressures from the customer or internal management cannot be forced onto the information security team. If they are, expect breaches and incidents.
Don't Ignore Vulnerabilities
  • Test data showed potentially catastrophic flaws in the solid rocket booster O-rings, but neither NASA nor Morton Thiokol fully addressed them.
  • Security vulnerabilities, patching, and penetration test results must be addressed. They won't go away on their own and will wreak havoc if ignored.
Incident Investigation
  • After the disaster, President Ronald Reagan created the Rogers Commission to investigate the accident. This was a very positive step in determining its cause.
  • NASA also created the new Office of Safety, Reliability, and Quality Assurance. This was needed, as the Columbia Accident Investigation Board later concluded that NASA had not set up a truly independent office for safety oversight.
  • Security incidents will occur. It’s imperative that firms have an independent group that can fully investigate these incidents without worrying about retaliation or politics. 
Redesign
  • After the disaster, the space shuttle underwent a significant redesign.
  • Similarly, systems must be patched or redesigned after a breach or vulnerability discovery to prevent future security issues.

 

Challenger broke apart over the Atlantic Ocean 73 seconds after launch, killing all seven astronauts aboard. Adam Higginbotham has written a fascinating book that details what led to the disaster and reveals significant details found nowhere else. For anyone involved in information security, there is a lot to be learned from it.

The Challenger disaster was a turning point for NASA. There are many lessons that CIOs, CTOs, CISOs, and others can learn from Challenger. No one should wait for a disaster to take action. Sadly, the individuals and organizations detailed in Challenger: A True Story of Heroism and Disaster on the Edge of Space did. For those who want to avoid that predicament, this is a great read and a call to action. 

 

Contributors
Ben Rothke

Senior Information Security Manager, Tapad



Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

