Over the past month, I’ve been fielding a number of questions from friends about the disaster at the Fukushima nuclear facility (for excellent coverage of the technical aspects of the plant itself, I recommend this site built by the brains at MIT: http://mitnse.com/). While the majority of the questions focus on the basics of nuclear power generation and the technical specifics of the disaster, I have inevitably been asked what I think the ultimate cause was. This has led to some serious reflection on my time operating plants for the Navy and dealing with plant casualties and errors, both simulated and real. In the Navy, a core focus of our training was the study of accidents drawn from the more than fifty-year history of nuclear power, including stories of trouble at both civilian and Navy plants. While human errors on the part of individual operators and regulators will ultimately be called out as primary causes, my experience persuades me to search for a better explanation.
First, the facts. The Mark 1 reactor design at Fukushima is over forty years old. Design advances since that time would likely have prevented a great deal of the damage caused in Japan. Increased visibility into the plant environment through sensors, improved processing power, and a more informed approach to system safety design all greatly reduce the likelihood that a Fukushima-type disaster will occur with today’s newer designs. Perhaps more than these technological aspects of the Fukushima plant, however, I find myself questioning a different aspect of the design: how do you design for the integration of error-prone human decision makers into a complex system like a nuclear facility?
This comes to mind because, as a relatively new junior officer and operator, the tone and focus of our “critiques” (post-casualty assessments) were at first quite a surprise to me: they were always targeted at the structure and implementation of the underlying operational training process and procedures. These are broader questions of how an organization is structured to transfer and retain knowledge, rather than questions about the abilities of individual actors. As a young JO, I was fully expecting a Navy-issued butt-chewing and professional reprimand for my team and myself. Not the case. The Navy has long understood that the real value in failure arises from generating a better understanding of how to integrate error-prone human decision makers with the complex system at hand.
Informed by these lessons, I tend to agree with Don Norman, who has studied the Three Mile Island (TMI) incident and argues that design failures are often the real culprits, yet take a backseat to operators during the blame game. Norman tends to focus on interactions between humans and physical systems; however, I believe it is important to extend “design” to encompass not just complex engineering systems but also the complex organizational systems that drive operations, maintenance, regulatory oversight, and emergency response.
Contributors to “Designing organizations for humans (part 1)”