douglashubbard

Why Data Breaches Aren’t Like Plane Crashes, but Should Be



On October 29, 2018, a Boeing 737 MAX departed from Jakarta on a routine trip to Pangkal Pinang in Indonesia. Shortly after takeoff the pilots struggled to maintain control of the plane as the nose repeatedly pitched downward. After what must have been an excruciating fight for control, they were unable to maintain altitude and the aircraft crashed into the Java Sea, tragically killing all 189 people on board. The flight lasted only 13 minutes.


A few months later, on March 10, 2019, a second Boeing 737 MAX ran into a similar issue. Ethiopian Airlines Flight 302 left Addis Ababa, Ethiopia, bound for Nairobi, Kenya, and crashed six minutes later, killing all 157 people on board.


The two crashes made international news, and all Boeing 737 MAX planes were grounded worldwide. Different versions of the Boeing 737 have been in service since 1967. It is a reliable and well-respected workhorse of a plane. How could these two crashes happen?



A photo of the crash site of Ethiopian Airlines Flight 302


The investigations

The reaction was swift from regulatory bodies, aircraft manufacturers, airlines, governments around the world, and law enforcement agencies. People from different countries, disciplines, and backgrounds cooperated with the singular goal of discovering what had happened and how to prevent it from happening again.


After exhaustive review, the blame lay at the feet of a new piece of software created by Boeing called the Maneuvering Characteristics Augmentation System, or MCAS. The two planes were a newer version of the venerated 737 with larger engines retrofitted onto the existing frame, branded the 737 MAX. Boeing found that the new engines changed the plane's aerodynamics and could cause the nose to pitch up too sharply under certain flight conditions. In an effort to correct the problem, Boeing engineers created the MCAS software to automatically push the nose back down when the plane's pitch changed too dramatically. However, the fact that the MCAS existed was not widely known.

As part of its marketing effort, Boeing pushed the idea that the new 737 MAX was so similar to existing 737s that little to no additional training was required for pilots and crew. Some received as little as two hours of training via an iPad.

In both crashes, the MCAS malfunctioned and repeatedly pushed the plane's nose down until the pilots lost control and the aircraft crashed.





The fix

Every 737 MAX in the world was grounded from March 2019 to December 2020. Each plane had to undergo numerous technical and non-technical changes before it was able to fly again, including:


  • fixes to the software with fail safes in case of malfunction

  • improved diagnostic lights in the cockpit

  • increased training for the crew

  • the ability to override the software quickly


In other words, airlines improved their technology, processes, and training for operating this aircraft. What is remarkable is that these changes were made not just to a small subset of planes within one airline, but to every 737 MAX in the world. The episode also serves as a cautionary tale for planes that will be developed in the future.


It was a deeply sorrowful tragedy that those passengers and crew lost their lives. The entire airline industry is now safer as a direct result of their loss.


How does this apply to information security?

Whenever an airliner crashes, there is a tremendous amount of collaboration among many companies and government bodies. Air travel is extremely safe, but everyone wants it to be safer. The regulations and safety nets we now have were earned through the literal blood of those who came before us.

But in the infosec world, what happens when there is a significant data breach or system compromise?


  • Do the customers know about it?

  • Do regulatory agencies know about it?

  • Do other companies know enough that they don’t make the same mistake?


The answer to all of those questions is: it depends. For particularly large incidents, others may learn in generalities what happened, but rarely do we get specifics.


What would have happened if the airline industry had treated the plane crashes the same way most companies treat a data breach? Likely a few press statements would have been released. Maybe customers would have been contacted with vague platitudes about how customer security is their number one priority. Hopefully the MCAS on all 737s owned by that particular airline would have been upgraded and training improved. But most likely, other companies flying the 737 MAX would have had to learn the hard way that they needed to change their software, processes, and training.


How many companies’ IT systems are metaphorically flying with a defective MCAS today?


What should we do about it?

The aviation industry is fundamentally different from the information security industry, and I do not think the two should be treated exactly the same way. But as a practitioner of information security, I think we should pay careful attention to news stories and current events when other companies have had publicized security incidents. We may never know the full story, but that doesn't mean we can't learn a great deal from others' mistakes.


This information doesn't just benefit security engineers and CISOs, either. I have found a tremendous amount of success from simply sharing current news stories with developers, product managers, and engineers.


At multiple companies I have implemented a somewhat informal process of recurring meetings where we discuss three or four data breaches or security incidents that appeared in the news in the last month. It is a fun, low-pressure way to spark security discussions and raise the security IQ of the entire organization. The best part is that the content writes itself.


I also find that reading about security breaches is a fantastic way for beginners looking to break into the security industry to learn the ropes and current trends.


Where can you learn about recent security events?

Data breaches and security incidents are in the news almost every day. However, most journalists are not technical, and their coverage seldom includes much detail about what happened.


I think these are great resources to check if you want more information.


  • The annual Verizon and IBM data breach reports - These are the first place everyone should go. They are probably the best resources in the industry right now for learning trends. They also include great statistics such as the cost of data breaches, the effectiveness of controls, and the chances of having a data breach (about 15% in a given year, by the way).

  • The Register (https://www.theregister.com) - A publication based in London that focuses more on news in the UK but has excellent, technical, and cheeky write-ups.

  • Krebs on Security (https://krebsonsecurity.com) - Brian Krebs is a fantastic journalist turned security blogger who gives very technical descriptions.

  • Twitter - Love it or hate it, I find that a well-curated Twitter feed following good security professionals is an extremely good source of security news. Its greatest advantage is speed: I often find details of a security incident on Twitter before I can find them anywhere else.

  • The Congressional Report on the Equifax breach (https://republicans-oversight.house.gov/wp-content/uploads/2018/12/Equifax-Report.pdf) - Rarely do we get details or specifics about how a breach happened. The Equifax breach is a stark exception to that rule. The U.S. House of Representatives published an open and scathing report on what happened at Equifax. The report is nearly 100 pages, but I strongly encourage every security practitioner to at least read the technical sections. It is surprisingly accessible and easy to understand. What happened there was a warning we should all learn from, paid for with the privacy of millions of individuals.
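The roughly 15% annual breach figure cited in the first bullet invites a quick back-of-the-envelope calculation: compounded over several years, the odds of suffering at least one breach climb fast. Here is a minimal sketch of that arithmetic (my own illustration, assuming independent years at a constant rate, which the reports themselves do not claim):

```python
def cumulative_breach_probability(annual_p: float, years: int) -> float:
    """Probability of at least one breach across `years` independent years,
    each with per-year breach probability `annual_p`:
    P = 1 - (1 - p)^n."""
    return 1 - (1 - annual_p) ** years

# Illustrative only: a constant, independent 15% annual rate is a simplification.
for n in (1, 5, 10):
    print(f"{n:2d} year(s): {cumulative_breach_probability(0.15, n):.1%}")
# →  1 year(s): 15.0%
# →  5 year(s): 55.6%
# → 10 year(s): 80.3%
```

Even under these simplifying assumptions, a modest annual rate implies a better-than-even chance of a breach within five years, which is a useful framing when arguing for security investment.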


In closing

Pay attention to those around you. Security is often stuck in a back corner of IT somewhere and can be an isolating position. But there are so many other security practitioners out there at different companies, and you can learn from their (or, more likely, their managers') mistakes.


In full transparency, this is not an idea that is unique to me. I heard a similar idea at a security presentation over a decade ago and really took it to heart. I wish I could credit that speaker today, but my memory fails me as to who delivered the presentation or at which conference. If anyone is familiar with it, feel free to drop me a line and I'll update this post.
