Disaster Incubation Theory (DIT)

Diagram 1

My research takes methods and tools that are used with hindsight and examines whether they can be used to generate foresight.

 

The current focus of my research has been to take Barry Turner's Disaster Incubation Theory and to assess its utility for foresight. In my second book, 'In Pursuit of Foresight', I reshape Turner's original model.

 

Diagram 1 shows the original model and Diagram 2 shows it re-imagined as a tool to promote foresight. The key issues that come out of this research focus on preventing failure, preparing to react to failure, and learning from failure. While these might seem straightforward, the actual issues are not. I use this model to think in terms of prevent, prepare and learn. However, if we view this in terms of a circle, it would be better if the sequence were learn, prevent, then prepare.

 

  • Under 'prevent', the issue is how we can prevent that which we cannot imagine happening.

  • Under 'prepare', the issue is how we can prepare for all the possible scenarios that might befall us in a crisis.

  • Under 'learn', the issue is determining which lessons we can take from the plethora that exist.

Some basic notes on each stage are set out below.

 

These notes will be developed and updated in the near future.


Diagram 2

 

Stage 1 - "notionally normal starting point"

After an organisational failure, Stage 1 would be used to reset the organisational requirement. It should be used to redefine the hazards that the organisation must be prepared to face, and then to reset the precautionary norms associated with those hazards based on all the learning available at the time. Once the new norms have been set, they would be implemented and maintained as part of Stage 2.

 

If we look to explain this in more detail, we see that Turner talks of a "notionally normal starting point" which he sees as being about the identification of a "set of culturally held beliefs" about how the world works. To put this another way, it is about establishing the group's mental model of the world as a system at a point in time.

 

Turner sees this task as having two parts.

 

(a) The first part is to establish that the "initial culturally accepted beliefs about the world and its hazards are at this point sufficiently accurate to enable individuals and groups to survive successfully in the world". He goes on, "this level of coping with the world is [seen to be] achieved by adhering to a set of normative prescriptions."

 

(b) What logically follows is then the need to establish which "associated precautionary norms set out in laws, codes of practice, mores and folkways ... are consonant with accepted beliefs". These are the "culturally accepted view … designed to assure safe organisational navigation of these hazards."

 

I would expand further on Turner's ideas. Turner wrote the model to describe a single, linear use. I see it as a recurring process (circles) of learning, action and failure. As such, I see Stage 1 as a review conducted at a single point in time that establishes a baseline for what comes after. This baseline should establish what has been considered and, just as importantly, what has not. It also needs to justify each of these decisions.

 

In a perfect world, we would consider every relevant factor. In the real world this is unlikely to be possible. ETTO (efficiency-thoroughness trade-off) considerations therefore become an important factor at this stage, affecting both the hazards to be considered and the knowledge and experience gained about managing them. Debating these factors is the process at the heart of this stage.

 

I would see the result of this stage as being a set of planning assumptions. These assumptions can be broken down into two groups: the first might be considered to be risk management and the second would consider crisis preparedness.

 

  • Under risk management the assumptions would list:

    • The hazards considered and the ways they might arise.

    • The precautionary norms associated with the management of each hazard. These would include laws, regulations, guidelines, standards, routines, culture and other codes of practice seen as being needed to manage the hazards.

    • The indicators of the operational boundary conditions outside of which the precautionary norms may no longer apply.

 

  • One of the assumptions must be that the precautionary norms will fail and that normal operations will develop into a crisis (or disaster). Given this assumption, then, under the heading of crisis preparations, a further set of assumptions should be listed about:

    • The types of crises that might emerge.

    • What capability will be required to manage these types of crises.

    • How changes to the operational environment that affect the boundary conditions will be monitored and fed back into the risk management process. This is about identifying the factors that will lead to practical drift.

 

In my view, once the hazards and norms have been set, the process moves to Stage 2. Some other writers see this stage as including the work to put these norms in place. I do not agree, for I see the continuous challenge of balancing production pressure, hazards and precautionary measures as the essence of what happens in Stage 2.
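The Stage 1 output described above, a justified baseline of what has and has not been considered, can be sketched as a simple data structure. This is an illustrative sketch only: the class and field names (PlanningBaseline, Hazard and so on) are my own shorthand for this article, not part of Turner's model.

```python
from dataclasses import dataclass, field


@dataclass
class Hazard:
    """A hazard considered in the Stage 1 review."""
    name: str
    ways_it_might_arise: list[str]
    precautionary_norms: list[str]     # laws, codes of practice, routines...
    boundary_indicators: list[str]     # signs the norms may no longer apply


@dataclass
class PlanningBaseline:
    """The set of planning assumptions produced by a Stage 1 review."""
    considered: list[Hazard] = field(default_factory=list)
    # Just as important: what was NOT considered, and why (the ETTO trade-off).
    excluded: dict[str, str] = field(default_factory=dict)  # hazard -> justification
    crisis_types: list[str] = field(default_factory=list)
    capabilities_required: list[str] = field(default_factory=list)

    def add_exclusion(self, hazard_name: str, justification: str) -> None:
        """The baseline must justify each decision not to consider a hazard."""
        if not justification:
            raise ValueError(f"exclusion of '{hazard_name}' must be justified")
        self.excluded[hazard_name] = justification
```

Recording exclusions alongside their justifications is the point of the sketch: it makes the ETTO trade-offs made at this stage auditable when the baseline is next reset.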

 

Stage 2 - Incubation Period

 

Turner describes Stage 2 as the period of "the accumulation of an unnoticed set of events which are at odds with the accepted beliefs about hazards and the norms of their avoidance". More simply put, this is the period of routine operations where those involved try to balance operational efficiency with safety. As they are flawed human beings, they will make misjudgements that go unrecognised. These flaws in the system accumulate to the point where they are suddenly exposed in what will come to be seen as a failure of the organisation. It should be noted, however, that this stage may be measured in days or in decades.

Given that this is the period where problems incubate, I see there being three interacting dynamics. These are:

  • Routine delivery of products and/or services.

  • Risk management.

  • Crisis preparation.

Here I will only consider the last two.

Risk Management

I will first discuss the prevention of failure. In more technical language, these disasters or cultural collapses take place because of discrepancies (inaccuracies or inadequacies) between the accepted norms and beliefs and the way the system actually works. The consequences of these discrepancies rarely develop instantaneously. Instead, the discrepancies accumulate over time. The incubation period is a time of slow cultural collapse due to some inaccuracy or inadequacy in the accepted beliefs developed in Stage 1. It is common for discrepancies to develop and accumulate unnoticed. For this to happen, these events must fall into one of two categories: either the events are not known to anyone, or they are known but not understood in the way that they will be after the disaster.

 

After the crisis or disaster, the failure to recognise the warning signs and to put adequate mitigations in place will be seen as a failure of foresight. In practice, the reasons why warning signs are missed are more complex. When existing danger signs are missed, it is likely to be because of a combination of them:

[1] not being seen,

[2] being given low priority,

[3] being treated as ambiguous or as sources of disagreement, or

[4] being considered insignificant because of psychological dispositions or for other reasons (often coming down to local politics).

 

These factors (and others) can all lead to the accumulation of events that combine to erode mitigation measures, which in turn leads to disaster.


Turner sees the precursors of failure as being:

 

  • Rigidities in perception and beliefs that lead to erroneous assumptions being made.

  • The natural human reluctance to fear the worst, which overtakes evidence or common sense and causes people to minimise emergent danger.

  • The decoy problem, where one problem may draw attention away from another, more serious, problem.

  • Organisational exclusivity, where the organisation ignores warnings from outsiders: complaints of danger from non-experts outside the organisation are dismissed as the babble of uninformed alarmists.

  • Information difficulties: there are always challenges regarding information handling in complex situations.

  • Failure to comply with existing regulations.

 

However, there are many others; a range of these precursors is listed in the sidebar.

Crisis Preparation

Alongside considering how to prevent failure, the organisation needs to prepare for failure. This preparation starts with determining [1] the nature of the likely failure, [2] the strategy that will be used to manage the ensuing crisis and [3] the "just-in-case" capability that may be required. At this point I will only provide a few words on each.

Every organisation will see that it is liable to several modes of failure. Once again ETTO comes into play, and the organisation will therefore have to decide which of these modes it will prepare to address. This approach will inevitably leave gaps that will have to be managed on a just-in-time basis and that will later be condemned as a "failure of foresight".

The organisation will have to decide on the strategy it will adopt to manage the crisis. This will include both operational and strategic issues. At the operational level, the organisation will have to decide what will be needed to regain enough control to continue routine operations. At the strategic level, it will have to decide whether its approach to crisis management will be based on minimising its liability for what happened or on regaining its stakeholders' trust.

Finally, the organisation will need to set aside the resources it will require to manage any future crisis, and then rehearse the routines that will be required when the time comes. While it may be tempting to build a unique capability to handle such situations, experience shows that the additional capability should be an extension of what happens routinely if an inexperienced crisis team is to perform at its best in these stressful times. I would base these routines on the following principles.

During the incubation period, members of the organisation should be looking for any warnings of an impending failure. The study of this subject is often labelled the examination of Weak Signals.

 

Organisations are likely to teeter on the edge of disaster on numerous occasions during this period. These disasters may be avoided either by deliberate action or by luck. The period when the organisation is on course for a failure has been labelled the recovery window. (I will post more on this subject and its relationship with the 30-minute window in due course.)

Sources of Failure

 

Here I list a number of the sources of failure cited within the academic literature. Some I define; some I leave the reader to wonder about. Scholars have cited each as the reason behind some organisational failure. In a perfect world, every risk management system would be aware of these factors and would review its processes to ensure that they did not emerge. The ETTO principle explains why this does not happen.

 

I list the factors here in alphabetical order, as a catalyst to thinking. Each reader may wish to ask themselves which of these are present within their organisation and which will be the cause of the coming crisis.

 

  • Amoral calculations

  • Atrophy of vigilance

  • Blind spots

  • Can-do culture

  • Cognitive dissonance

  • Compound abstraction

  • Cosmological episodes

  • Dark side of leadership/ Toxic leadership

  • Denial

  • Design-based accidents

  • Desire to simplify/ reluctance to simplify

  • Distancing through differencing

  • Drift (practical/ safety/ strategic)

  • Dysfunctional interactions

  • Error-inducing organisations

  • Failure of communications

  • Failure of foresight

  • Failure of hindsight

  • Failure of imagination/ Requisite imagination

  • Failure of linearity

  • Failure to act

  • Failure to launch/ New Group syndrome

  • Fallacy of centrality

  • Fallacy of complete reporting

  • Fallacy of social redundancy

  • Fantasy documents

  • Focal awareness

  • Folly

  • Forget to be afraid/ Lulling effect

  • Forgotten lessons/ Lost saliency

  • Groupthink

  • Guilty knowledge/ Plausible deniability

  • Happenstance

  • Hubris

  • Illusion of control

  • Impossible accident/ Inconceivable accidents

  • Inattentional blindness

  • 'It couldn't happen here'

  • Lack of reflection

  • Learned helplessness

  • Liability of newness

  • 'Lost the bubble'

  • Management distancing

  • Mirror imaging

  • Misinterpreting signals

  • Muddling through

  • Normalisation of deviance

  • One reason decision heuristic

  • Organisational attention

  • Organisational bias

  • Organisational distortion of information

  • Pluralistic ignorance

  • Problem of induction

  • Production pressure

  • Requisite variety (lack of)

  • Risky shift

  • Risk redistribution

  • Seat of understanding (lack of)

  • Selective perception

  • Self-justification

  • Signal filtering

  • Silent politics of time

  • 'Slow Pearl Harbor'

  • Social amplification of risk

  • Social shirking

  • Structural secrecy/ Scarce knowledge/ Variable disjuncture/ Distributed knowledge/ Distributed intelligence

  • Surprise

  • Taboo subjects/ undiscussable

  • Talking past each other

  • Unresponsive bystander

  • Unrocked boat

  • Unruly technology

  • Verbatim compliance

  • Wrong kind of excellence

 

Stage 3 - Precipitating event:

 

The precipitating event is when routine operations become a crisis. In the words of Barry Turner, it is an event that "arouses attention and transforms the general perception of Stage 2". He gives as examples the moments when the train crashes, the building catches fire, or share prices begin to drop.

 

The precipitating incident is the moment when all the vague system vulnerabilities that arose during the incubation period come together and transform from an ill-structured problem into a well-structured one. These vulnerabilities may be due to minor errors, slips or lapses by members of the organisation, to a set of unusual environmental conditions, or to a technical problem with the process or the machinery. In the study of accidents that I conducted for my first book, it emerged that the crisis cascade (when routine operations descended into a crisis) often took less than 30 minutes. To define the precipitating event more clearly, we have to examine what we mean by the word 'moment'.

 

It is tempting to say that the precipitating event is the occurrence that means that the crisis is unavoidable, that it is inevitable. While this might be a useful simplification, it is not entirely accurate. I will explain my point by using the example of the Apollo 13 crisis.

 

The Apollo 13 spacecraft suffered an explosion while it was en route to the Moon in April 1970. The explosion occurred when the crew "stirred the oxygen tanks": an authorised and deliberate act. Due to a fault in the machinery, a spark caused the contents of an oxygen tank to explode. The spark was caused by damaged insulation; it is thought that this damage happened many months before the launch. The question then becomes: from what point did the crisis become inevitable?

 

For the sake of this argument, if we assume the design and the material were not at fault and that the act of installing the wire was the moment the fault occurred, was the crisis inevitable from that point? While this would be the point when the recovery window opened, maybe not. A quality control process might have spotted the problem. It might have been spotted when the sub-system was integrated into the spacecraft. It might have been spotted by pre-flight testing. The crisis was inevitable (could not have been prevented) after launch, but there would also have been a period prior to launch when no further opportunities arose to spot the problem; was it inevitable after that point? A detailed analysis conducted with hindsight is required to make this determination, and therefore this is not a useful operational formulation. The merits of Turner's formulation can now be seen. He says that the precipitating event is when "an event arouses attention". The explosion of the oxygen tank certainly did that!

I will explore this idea further when I discuss the relationship between the recovery window and the precipitating event.

 

A precipitating event often has links with many of the extensive chains of discrepant events in the incubation period. The precipitating event is immediately followed by the onset stage (Stage 4). It is the moment when the normal chaos of life becomes abnormal.

 
 
 

Stage 4 - Onset:

 

Turner describes Onset as "the immediate consequence of the collapse of cultural precautions becoming apparent". It represents the manifestation of the crisis and its immediate consequences. In Turner's formulation of this idea, we see crises characterised by single large events such as a plane crash, a factory fire or a stock market crash. These events can be seen as a disastrous moment in time that causes widespread damage, injury and death. The authorities' response (Stage 5, Rescue and Salvage) is their reaction to these events. The way Turner describes these stages means that he sees Stage 4 ending when Stage 5 begins.

 

Unfortunately, even some of Turner's examples do not fit this model; this is a fundamental weakness of his configuration. I would illustrate this with the hypothetical case of a plane crash that starts a wildfire that heads towards a fuel depot and a town. In this case, the plane crash might be seen just as the precipitating event that caused the wider problem, which then evolves over time. The authorities may be regaining control in one area (the crash site) while still losing control of the wildfire, while also putting mitigation measures in place around the fuel depot in the hope that they will not lose control there. It should also be noted that these failures occur at a variety of speeds, intensities and scopes.

 

Here we see an evolving situation influenced by both internal and external factors. In this example, I would consider fuel load (undergrowth in the path of the wildfire) to be an internal factor and wind an external one. To my way of thinking, internal factors are those that can be identified and influenced, while you are unlikely to be able to influence external ones. From the perspective of using DIT as a catalytic framework, however, internal factors might be those that have been considered, and external factors would be the answer to the question "what have we not yet considered?"

 

The onset stage is therefore how the immediate, direct, indirect and unanticipated consequences of the failure or collapse of cultural precautions manifest themselves. During this stage, many of the culturally accepted beliefs about the world and its hazards are apparently no longer valid, and the need to come to terms with this may be a factor that prolongs the crisis.

The way I therefore view Onset is that it encompasses all the dynamics that perpetuate the crisis. Onset is thus closely related to Stage 5, "rescue and salvage". I would describe the space between the two as a "forcefield" (and therefore susceptible to force-field analysis).

 

Stage 5 - Rescue and Salvage:

Stage 5 is what Turner labelled "rescue and salvage". He sees this as being the first stage of adjustment to the crisis. He says that it is the stage "in which rapid and ad-hoc re-definitions of the situation are made by participants to permit recognition of the key features of the failure and enable work of rescue and salvage to be carried out". To put this another way, it is the time to make sense of the evolving situation and to take action to start regaining control.

 

In addition to the resources normally available, those dealing with the crisis will also have the additional capability set aside for such eventualities. Their ability to respond to the crisis will therefore depend not only on the situation they face but also on the time and effort spent preparing during Stage 2 (the Incubation Period). If any other resource is required, it will take time to define and procure, unnecessarily delaying the point at which control is regained.

 

I see Stage 5 as being the converse of Stage 4. I therefore see Stage 5 as encompassing all the efforts to apply the resources necessary to regain the control lost at Stage 3.

 

Turner sees Stage 5 as more than just the effort to regain control. He also sees it as the start of the learning process. He says, "when the immediate effects have subsided, it becomes possible to carry out more leisurely and less superficial assessments of the incident, and to move forward...". This process of learning is likely to be more informal and ad hoc than what comes later. It will include the accumulation of tacit knowledge. How these lessons are captured and embedded is another issue that will need to be considered and addressed.

 

One feature of Stage 5 that I would like to point to at this stage is what became known as 'Hell Hour'. This phrase became common parlance during the early stages of my research with the Antwerp fire service. It is the period during which the temporary local command structure for a major incident is being established: a period of great uncertainty and confusion that is resolved once the command arrangements are in place and manned by on-call staff. The expectation was that this should be done within an hour. Experience (for example, the Manchester Arena bombing and the UK Government's response to the COVID-19 pandemic) shows that this period can in fact be very much longer than an hour.

In addition to the operational aspects of crisis management, there is the issue of (crisis) communications. This is especially important where the strategy is based on the restoration of trust. Where this is the case, the communications framework illustrated here has been found to be helpful.

 

This stage concludes when the organisation has regained sufficient control over its operational context to consider the situation 'back to normal', even if that normal is different from the one that preceded the crisis. While Stage 5 will see lessons being learnt, formal learning is the role of Stage 6.

 

Stage 6 - Full cultural readjustment:

Turner sees Stage 6 ('Full cultural readjustment') as "involving an inquiry or assessment of what happened and thereby enabling the beliefs and precautionary norms to be adjusted to fit the newly gained understanding of the world". While Turner sees the assessment as being in the singular, in practice these events will spawn a myriad of inquiries and 'lessons learnt' sessions that take place throughout the stakeholder community. 

 

Turner sees these sessions re-examining the beliefs, norms and precautions, and then making recommendations to make them compatible with "the newly gained understanding of the world". He also sees the crisis at this stage being redefined as a "well-structured problem", which in turn allows the review to state which precautions it would have been appropriate to apply. He sees in this process "the establishment of a new level of precautions and expectations". While Turner sees the many benefits that this process can bring, he also offers some warnings.

 

Turner warns that "this full cultural readjustment is limited by the amount of disagreement which prevails among groups about the effectiveness of any new precautions adopted". This points to the level of cross-understanding that needs to be developed within the stakeholder group; this would be needed to bring together and align all the recommendations generated during the learning sessions. Turner also warns that local politics may interfere with the learning process. I find this warning too limited: I see politics (with both a large and a small 'p') interfering with learning at every level of society. Seen in Normal Chaos terms, this pattern of behaviour is caused by the competing dynamics of "seeing the world as we want it to be", "seeing the world as we would like others to see it" and "seeing the world as it really is", with individuals advocating that others see the world as they want it to be seen.

 

It is this political dynamic that leads to a major theme of my work. I see a misalignment between the political nature of inquiries and the operational nature of learning. This basic tension (I would see this as being an 'attractor') explains why we constantly fail to learn from the past; this, in turn, explains why the same disasters do happen again and again. I hope that by identifying politics as an 'attractor' we can find ways of reducing its negative influence on learning in this context.

 

Once lessons have been identified, I see the loop of this model closing. In my doctoral thesis I explained that this is actually a spiral: as the circle is closed over time, you are not back where you started but in another place entirely. The process goes back to Stage 1, when the organisation decides to reset its baseline for the system. This learning is added to what came before and the process starts again. As the process restarts, those conducting it have a key decision to make: whether to do a partial reset or a full reset. I see a partial reset as being when the new knowledge is added to the previous position; while this may be expedient (see ETTO), it carries the risk of taking forward false assumptions built on the previous flawed understanding. A full reset would go back to the raw data (the known truths) and build up a new set of assumptions and norms that is hopefully less flawed!
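Seen as a recurring process rather than a single linear pass, the six stages form a cycle in which Stage 6 feeds a reset Stage 1. A minimal illustration follows; the identifier names are my own shorthand, mapped from Turner's stage labels, not an established notation.

```python
from enum import Enum


class Stage(Enum):
    """Turner's six stages, treated here as a recurring cycle."""
    NOTIONALLY_NORMAL = 1       # Stage 1: baseline beliefs and precautionary norms
    INCUBATION = 2              # Stage 2: unnoticed discrepancies accumulate
    PRECIPITATING_EVENT = 3     # Stage 3: an event arouses attention
    ONSET = 4                   # Stage 4: consequences become apparent
    RESCUE_AND_SALVAGE = 5      # Stage 5: sensemaking and regaining control
    CULTURAL_READJUSTMENT = 6   # Stage 6: inquiry, learning, new norms


def next_stage(stage: Stage) -> Stage:
    """Advance one step; Stage 6 closes the loop by feeding a reset Stage 1."""
    return Stage(stage.value % 6 + 1)
```

The wrap-around in `next_stage` is the whole point: each pass through Stage 6 produces the learning on which the next Stage 1 baseline is built, so repeated passes trace the spiral rather than returning to the same starting point.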

I have reconfigured DIT to better match learning here

 

Last Updated: 31 Dec 21