
House of Commons Health and Social Care, and Science and Technology Committees
Coronavirus: lessons learned to date
Sixth Report of the Health and Social Care Committee and Third Report of the Science and Technology Committee of Session 2021–22

A full record of these proceedings can be found here.

See my joint submission with Professor Nigel Lightfoot CBE to these Committees.

The Joint Committee's report was published on 12 Oct 21.

Part 1 of my analysis is to assess, from an operational perspective, the quality of the recommendations. In these circumstances the quality criteria are the probability that the recommendations will be implemented (output) and that they will achieve the desired end state (outcome). My assessment is here.

 

The results of Part 2a of my analysis can be found in the first column of the table here.

Part 2b of my analysis maps the categorisation in order to visualise which areas of the system under examination are recommended for change. This is set out here.

Part 3 of my analysis rates the recommendations for their operational validity; these ratings can be found in the remarks column of the table. I chart the ratings by percentage here.

The next step is to examine the Government's response, which is due to be published on 12 Dec 21.

Submission to House of Commons Select Committees (Dec 20)

CORONAVIRUS: LEARNING LESSONS

Submitted on 28 Nov 20 by:

Dr Michael Lauder MBE

Professor Nigel Lightfoot CBE

 

Summary

 

Identifying lessons to be learned is comparatively easy; making sure those lessons are learned is much more difficult and is often forgotten.

 

We submit our evidence on a systematic approach to learning lessons in the belief that it offers an alternative and more reliable way of learning from the past. We do so as two individuals who have a long-term professional interest in studying organisational failure and public health. In this submission we wish to draw the committee’s attention to flaws in the way society learns from the past. We provide examples of these flaws and suggest that anyone conducting an inquiry might like to learn from the past by taking note of them. We suggest that most inquiries deal with complex rather than merely complicated matters and that this has consequences for the way recommendations are drawn up. We demonstrate how we use Disaster Incubation Theory as a way of handling this complexity. Finally, we look at how we might use the Pandemic Strategy as a vehicle around which we can collate and disseminate past learning in the hope that we, as a society, will do better next time.

 

Introduction

 

The premise behind the call for evidence is that we, as a society, should learn lessons from our experience of the Coronavirus. More explicitly, we hope that we can learn from our experience of the COVID19 crisis so that we can ensure that our preparedness for the next pandemic will be better. The question we would ask is, as we seem to have failed to learn from our past experience of pandemics, why should it be different this time?

 

We submit our evidence in the belief that it offers an alternative and more reliable way of learning from the past. The Inquiry has stated that it wishes to examine “the UK’s prior preparedness for a pandemic”; our expectation is that the intent is to do better next time. We believe that our work has something to add to this debate.

 

I am Dr Michael Lauder MBE and I work as a private individual researching and writing on organisational failures (and why we do not learn from them). After over 20 years as a military engineer and 10 years of consultancy, in 2008 I undertook doctoral studies at Cranfield where my focus was on the question “why organisations fail”. These studies made me examine how organisations get themselves into trouble, how they try to avoid such situations and what we can do to learn from the past. On the completion of my studies my first book, “It should never happen again”, published in 2013, examined why we, as a society, fail to learn from inquiries. My book examined the nature and structure of inquiries, the types of recommendations they make and suggested reasons why they failed to stop organisations repeating the same mistakes time and again. Over the last five years, working alongside a Belgian crisis management consultancy, my research has examined ways in which the barriers to learning from our experience may be overcome. In January 2020 I wrote a critique of the Grenfell Inquiry Part 1 for the Antwerp Fire Brigade outlining why I considered there to be dangerous and false lessons contained within that report; my conclusion was that the basic paradigm used by the inquiry team (which I have labelled the Perfect World Paradigm) was not fit for purpose when it comes to learning from the past. My research continues to look for and promote an alternative.

 

As the COVID19 crisis developed, I considered it to offer a valuable case study as it was developing in real time. This enabled the roles of foresight and hindsight within the learning process to be examined. Aware of the limits of my own expertise, I approached Professor Nigel Lightfoot CBE for his expertise in public health matters. Nigel is a former Director of Emergency Response at the Health Protection Agency and now has his own private company where he continues to focus on emergency preparedness, crisis management and the CBRN terrorism threat.

 

Our interest in this subject was sparked by a stark inconsistency. In 2019 the Global Health Security Index, produced by The Economist Intelligence Unit in conjunction with the Johns Hopkins Center for Health Security, ranked the UK 2nd out of 195 countries for its pandemic preparedness. However, from the start, the UK ranked highly in the list of countries suffering from COVID19-related deaths. While there may be some debate about the way these numbers were collected, it is still clear that the UK's response to COVID19 was not as effective as had been hoped. As part of our examination of the reasons behind this discrepancy, we have looked at the preparations made by the UK for handling pandemics. In this case we will be looking at the Department of Health's Pandemic Strategy published in 2011.

 

Over the course of the summer and as part of our ongoing research, we have watched the development of the crisis through the academic lens of Disaster Incubation Theory (DIT). This theory was first developed by Professor Barry Turner in 1976. While many alternatives have been offered, we feel that DIT still offers the clearest way to segment this complex debate. For a fuller description of how this tool may be used see my 2015 book “In pursuit of Foresight”. In the simplest terms, DIT divides crises and learning into six stages. These are:

  • Stage I - "Notionally normal starting point": this is when the hazards are defined and the countermeasures are put in place to prevent the hazards manifesting themselves or to control their effects to a level that is acceptable.

  • Stage II - (Disaster) Incubation Period: this is the period of normal operations during which the initial actions considered necessary to control the hazard become undermined, which, in the end, enables the hazard to manifest itself in a way that is not acceptable to society. (This should not be confused with the disease’s incubation period {in this case COVID19}.)

  • Stage III - Precipitating event: this is when society recognises that, despite its efforts, the hazard has manifested itself in a way that is not acceptable.

  • Stage IV - Onset: this is the period during which the forces that perpetuate the hazard are seen to act.

  • Stage V - Rescue and salvage: this is the period during which the forces that seek to control the hazard are deployed. The relationship between Stages IV and V is best envisaged by using forcefield analysis (a simple illustrative sketch is given below).

  • Stage VI - Full cultural readjustment: in the model this is when a new normal is established and stakeholders learn from the past.

For the purpose of this submission, our interests are in:

  • How the pandemic hazards and countermeasures were defined in Stage I and what was done.

  • How Stage II was used to prepare the capacity necessary to handle a future pandemic, and how and why the measures defined in Stage I were undermined.

  • Stage VI, which is recognised as a period of learning and readjustment. However, this assumes that new learning comes out of the most recent experience. My study of inquiry reports shows this not to be the case: most lessons learnt are simply a repetition of some previous learning applied to a specific case. Therefore, to learn, it is now more important to understand why the lesson was not learnt previously than it is simply to identify that something had gone wrong, as the latter provides nothing new.

An examination of Stages IV and V is not a concern of this submission as they would concern the conduct of the crisis management rather than preparedness.
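To illustrate what we mean by forcefield analysis in this context, the sketch below sets notional driving forces (Stage IV, perpetuating the hazard) against notional restraining forces (Stage V, seeking to control it). The forces listed and their strengths are illustrative assumptions only, not findings about the COVID19 response.

# A minimal forcefield-analysis sketch (Python).
# Driving forces perpetuate the hazard (Stage IV); restraining forces are
# those deployed to control it (Stage V). All entries and strengths below
# are illustrative assumptions, not evidence.

driving_forces = {
    "ongoing community transmission": 5,
    "delays in detection and reporting": 3,
    "public fatigue with restrictions": 2,
}

restraining_forces = {
    "social distancing measures": 4,
    "testing and contact tracing": 2,
    "vaccination programme": 3,
}

driving_total = sum(driving_forces.values())
restraining_total = sum(restraining_forces.values())
print(f"Driving (Stage IV) total:    {driving_total}")
print(f"Restraining (Stage V) total: {restraining_total}")
if driving_total > restraining_total:
    print("Net balance: the hazard is gaining ground")
else:
    print("Net balance: the hazard is being brought under control")

The value of setting the forces out in this way is not the arithmetic itself but that it forces the analyst to name both sets of forces explicitly before judging which way the situation is moving.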

 

Why do we not learn?

 

To make the best of learning opportunities, we need to recognise why we fail to learn from the past. While there is a field of study that looks at organisational learning, its main focus is on how learning is delivered rather than how learning is extracted from experience. In contrast, inquiries may choose to learn from aircrash investigation practice, where the focus is on why an accident happened so that issues can be identified and resolved. Their focus is on the future, not the past. Our research into the conduct of public inquiries would suggest six issues that should be considered when we try to learn from the past. These are the role of blame, the mixing of hindsight and foresight, the need to see the whole, the structuring of recommendations, the non-equidistancing of recommendations and looking at the problem the wrong way.

 

Greater desire to blame than to learn

 

In general, inquiries have a twofold function. The first is to learn from the past. This is normally the espoused purpose of an inquiry but, in practice, this function is often superseded by the second. The second is to determine who might be at fault for any unwanted events that befell the organisation. This function is easier to accomplish and it is a more natural role of those tasked to conduct such inquiries. Research into practice would suggest the need to separate efforts to learn from the need to allocate responsibility.

 

Hindsight versus foresight

 

A frequent mistake made by inquiry teams is to confuse the role and application of hindsight and foresight. This can be clearly seen in my analysis of the Grenfell report. When judging the viability of a course of action it is necessary to determine what was known at the time and to exclude what could only have been known with hindsight. To confuse the two often leads to unrealistic recommendations that do not enhance our learning from the past.

 

No sense of the whole

 

Inquiries need to recognise that they are often dealing with complex (non-linear) rather than complicated (linear) systems. In complex systems changes that may seem minor can have significant effects. It is therefore necessary to try to understand how the whole system interacts before suggesting how it may be changed. There is clear evidence that a recommendation from one inquiry has been the source of a subsequent unwanted event. Whether you think of it as a hypothesis, academic or lay theory, systems mapping or just as a Balanced Scorecard “success map”, it is necessary to have an overview of the complete system that you are trying to improve. This is to enable those making recommendations to have a clearer idea of how the whole comes together and to foresee how even the simplest recommendation may have substantial unintended consequences.

 

Structuring of Recommendations

 

The way recommendations are structured often obscures their intent and the remedy they propose. From the evidence gathered for “it should never happen again”, the most common fault is for the recommendation to present some aspirational statement that does little more than state what everyone recognises as being the intent of the system. These do little more than identify a problem that is all too apparent. Many other recommendations just suggest a problem be examined. Comparatively few recommendations identify a solution to a problem and therefore qualify as learning. If inquiries are to claim to offer learning, they must offer learning that is specific. While much work is still needed on improving the structure of recommendations, a simple test of their utility is to apply the same test as one would for performance indicators.
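By way of illustration, the sketch below applies a SMART-style test, of the kind commonly used for performance indicators, to two invented recommendations. Both the example recommendations and the judgements recorded against each criterion are our own illustrative assumptions, not drawn from any particular inquiry.

# An illustrative test of recommendation utility (Python), borrowing the
# SMART criteria commonly applied to performance indicators. The judgements
# would in practice be structured human assessments, not keyword checks.

CRITERIA = ["specific", "measurable", "assignable", "realistic", "time-bound"]

def assess(recommendation, judgements):
    """Summarise how a recommendation fares against each criterion."""
    met = [c for c in CRITERIA if judgements.get(c, False)]
    if len(met) == len(CRITERIA):
        verdict = "offers specific, actionable learning"
    else:
        verdict = "aspirational or problem-identification only"
    return f"{recommendation!r}: meets {len(met)}/{len(CRITERIA)} criteria -> {verdict}"

# Two invented examples of the common types of recommendation.
print(assess("Communication between agencies must improve.",
             {"realistic": True}))
print(assess("Publish a revised stockpile replenishment schedule, with a named owner, within 12 months.",
             {"specific": True, "measurable": True, "assignable": True,
              "realistic": True, "time-bound": True}))

The first example identifies a problem that is all too apparent; the second states who is to do what, by when, and how success would be recognised.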

 

Non-equidistancing of recommendations

 

When inquiries list their recommendations they are often presented as if they were standalone actions. This is clearly not the case, as set out above. They are in fact a new input to an already complex system. If the desire to learn is driven by the hope that “it should never happen again” then there must be the question of how much actioning the recommendation reduces the probability of the event happening again. Not all recommendations appear to be of the same value if learnt. Each recommendation will affect the probability of the unwanted event reoccurring differently; they may be seen to be at different distances from the solution. There is therefore a fundamental hierarchy of priority in all sets of recommendations that is never recognised. This needs to be addressed. However, there is another flaw, even more fundamental, in the recommendations offered. Recommendations are often produced as the “tweaks” necessary to perfect the existing system. This approach is flawed, as discussed next.

 

Wrong way of looking at the problem (Wrong Paradigm)

 

Recommendations are often produced as the “tweaks” necessary to perfect the existing system. The fundamental assumption here is that the system or process can be perfected. We would question this assumption. Our work suggests that two distinct world views exist. The first, based around “what would be ideal”, has been called the Perfect World paradigm. The second, based around “the world as it really is”, has been labelled Normal Chaos. Let us look at each in turn.

 

The Perfect World paradigm was first labelled as such in 2013 in the book “it should never happen again”. The basic proposition of this paradigm is that if we recruit the perfect people, produce perfect plans, train them perfectly, supply them with exactly the right resources (including perfect unambiguous information) and execute the plan flawlessly (eliminating all slips and lapses) then the desired outcome will be delivered. It is seen as being up to the organisation to create these perfect conditions. Within this paradigm is the belief that individuals should be able to learn, retain and use the knowledge they require perfectly. All of this perfection is supported by having perfect foresight, and individuals should be blamed and punished where they fail to achieve these standards. Embedded within this construct is the desire to remove uncertainty and to control the world around us through the use of logic and the reduction of all problems, no matter how complicated, to a linear format where cause and effect are seen to be directly linked. The label Perfect World paradigm is used to reflect the phrase often heard when discussing failure; that is “but in a perfect world …”. However, as already stated, no system can be perfect; all will fail in some way.

 

The term Normal Chaos is an homage to Charles Perrow’s Normal Accident theory which points towards complexity (sometimes referred to as wicked-messy problems) as being a key source of failure. Complex systems are, by their very nature, non-linear. That is, within such systems, everything affects everything else to a greater or lesser extent. This complexity therefore leads to emergence. Emergence is when the complex interactions lead to unexpected results while the system is seen to be operating normally. Another feature of complex systems is disproportionality; this is when small inputs can dramatically change the outcome or when large inputs make little difference. With such systems it is therefore not possible to accurately predict change judged just on the size of the input measure. Finally, complex systems are open; such systems cannot be seen in isolation of the environment within which they operate.

 

COVID19 provides a good example of normal chaos. The way the disease spread around the world was not linear. The pattern was not a clear progression and could only be established in hindsight. Different unpredictable features of the crisis have appeared over time. It has grown disproportionately: what appeared to be a small local outbreak of the disease has had worldwide ramifications. And finally, we can clearly see that the UK economy is open to influence from around the world and cannot be managed in isolation.  
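As a simple numerical illustration of this disproportionality, the sketch below uses a basic SIR (susceptible-infected-recovered) model. The population size, seeding and transmission rates are notional values chosen for illustration, not estimates fitted to COVID19 data.

# A minimal SIR sketch (Python) illustrating disproportionality: a modest
# change in the transmission rate produces a far larger change in the share
# of the population eventually infected. All values are notional.

def final_attack_rate(beta, gamma=0.1, population=100_000, seed=100):
    """Run a simple daily-step SIR epidemic until the outbreak burns out."""
    s, i, r = population - seed, float(seed), 0.0
    while i >= 1.0:
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return (r + i) / population

for beta in (0.09, 0.12, 0.15):  # small steps in the transmission rate
    share = final_attack_rate(beta)
    print(f"beta={beta:.2f} (R0={beta / 0.1:.1f}): eventual share infected {share:.1%}")

In this sketch a one-third increase in the transmission rate (from 0.09 to 0.12) takes the outbreak from fizzling out at around one per cent of the population to infecting roughly a third of it: a small change in the input with a grossly disproportionate change in the outcome.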

 

Within the Normal Chaos paradigm it is accepted that things will go wrong, things will be unknown and we will be forced to learn through trial and error.  The measure of systems is therefore not their ability to operate error free but their ability to achieve their goals despite their imperfections and any missteps taken. This idea is at the heart of thinking on robust and resilient systems.

 

There are many ways to cut a cake and there are many ways to see and make sense of the world around us. In both cases it is, however, better to stick with a single approach if you are not to end up with a mess. Few would argue against the perfect being ideal; however, the question is whether it is possible to achieve. In trying to manage a crisis such as COVID19, should we be driven by the impossible ideal or by a more realistic understanding of the problem? The question for each person to answer is whether they should focus on the ideal or try to see and understand the world as it actually is.

 

The public debate on the COVID19 crisis is being hampered by the mixing of these two paradigms. Every day we see this happening at the daily briefing. It is common to see the journalists basing their questions within the perfect world paradigm and the respondents answering from the perspective of normal chaos. While this ritual dance may suit each side’s purpose, it does little to clarify the real issues. At the least the watching public should recognise this dance for what it is.

 

The question for this joint committee is twofold: does it wish to learn from the past in the way it conducts inquiries, so as not to make the same mistakes as others have before, and how does it see the world?

 

 Let us now look at the implications for learning about preparedness.  

 

Preparedness

 

In our examination to date of the state of pandemic preparedness, we have focused on the UK’s pandemic preparedness strategy (dated 2011) and the lessons learnt from previous pandemics. We see strategy documents as being key vehicles for capturing and distributing learning from the past. More precisely, we have focused on the recommendations of the Hine report (2010) and those produced by the Swine Flu Critical Care Clinical Group as examples of learning from the past. We have looked at whether the 2011 strategy refers to each recommendation, we have looked at the nature of those recommendations, and finally, we are looking at the effectiveness of these recommendations. In my previous work I have questioned how inquiries come to their conclusions and formulate their recommendations (see my analysis of the Grenfell report for an example.) Our goal for this research is to try to determine why recommendations may not achieve their intended goals and what might be done to improve the likelihood of success.

 

In terms of DIT, we see the Hine Report and the Swine Flu Group report as an opportunity to reset the pandemic clock (Stage I) with the production of the 2011 Pandemic Strategy. Stage II would encompass the period between 2011 and the present. In this time the model recognises two forces at work. The first comprises the efforts taken to maintain the strategy as produced and then to enhance it where necessary. The second set of forces are those that undermine the strategy.  This second set of forces may include changes to the environment that make the strategy as written irrelevant or they may be the organisation’s failure to take the action necessary to fulfil the strategy. Our research has shown that greater learning takes place if we focus on these types of ideas and ask what happened and why.

 

In the Perfect World sought by most inquiries, the organisation would have been expected to take the opportunity to capture perfect knowledge from the past, produce a perfect plan and then implement it perfectly. The fact that these House of Commons Committees are holding these hearings is evidence that this did not happen in this case. The question for the committee is therefore whether this is a fruitful line of inquiry or whether we should be seeking ways to make our imperfect system more robust and resilient.

 

To this end, we have examined the 2011 strategy in order to identify weaknesses. In 1996 Lee Clarke & Charles Perrow wrote about ‘Prosaic organizational failure’. In this paper they identified what they called “fantasy documents”; they stated that these are organisational plans that are drawn from a quite unrealistic view or model of the organisation and are rarely tested against reality. The fantasy they identify is that everything will work right first time and that every contingency is known and has been prepared for. Our question was whether the 2011 pandemic strategy was a fantasy document.

 

The first step must be to establish what is expected of a strategy. While this subject is widely debated, we use the definition posited by Hoverstadt and Loh. They say it is "the way forces manoeuvre for advantage against an enemy (competition)." In the case of COVID19, the enemy is clearly the virus. They go on to say "strategy is about using the resources (including time) at your disposal to change your position relative to your environment (changing which structural couplings you have, or the nature of them, or both), so that you can thrive there on your own terms."  Therefore, in terms of COVID19, we would expect any pandemic strategy to describe how the organisation intended to use its resources to cope with a future pandemic.

 

From our examination of the 2011 pandemic strategy we identified the following weaknesses.

 

In its first assumption the document confirmed that the pandemic hazard was identified as a new subtype of the Influenza A virus. It does however also say that the plans could be adapted and deployed for scenarios such as an outbreak of another infectious disease. It is not clear however how the “could be adapted” was to be implemented.

 

In order to ensure that the strategy is robust, the document would be expected to set the boundary within which the Department should plan and then outline how it will cope with circumstances that fall outside of those boundaries. The document would then be expected to outline the future decisions to be taken, the data needed to make those judgements and who will be expected to provide that data. These omissions mean that the document is not as robust as it could be.

 

The document does provide both objectives and phases. Either would have provided a coherent structure for the narrative; neither was used. Using the objectives, the structure could have been based on how each objective was to be achieved. The second approach could have based the narrative on a temporal framework. In the end, the document did neither and therefore became a scattering of good ideas rather than a strategy.

 

The document needs to be structured by its objectives. These are clear, yet they could be clarified further by setting out measures of success.

 

The document does list assumptions; however, not all of the assumptions made appear in that list. In addition, not all objectives have related planning assumptions. The purpose of the assumptions is to state the boundaries around the possibilities within which the strategy is required to cope. Therefore, if boundaries are not stated, it must be assumed that the strategy will be designed to cope with any possibility that arises. This is likely to be a fantasy.

 

All assumptions have implications for a plan or strategy. Few implications were drawn and thereby the document failed to gather the foresight readily available.

 

Chapter 3 seems to be an unwitting attempt to set out the principles meant to guide the delivery of the strategy. These principles need to be clearly thought-out and articulated as such.

 

Table 1 sets out the proportionate response to pandemic influenzas. It has a major structural failure in that the final column focuses on ‘Public Messages’ yet the overarching objective is ‘Instil and maintain trust and confidence’. This makes the document incoherent and therefore liable to dissolve into being a fantasy.

 

The document fails to discuss whether preparation needs to be funded out of existing resources and when and where extra resources will be sought. A strategy would be expected to state whether it was envisaged that the resources required would come from:

 

  • Routine Stocks (in everyday use)

  • Just-in-case Stocks (strategic reserves)

  • Just-in-time Stocks (purchased during the crisis)

 

Performance Management (governance) and Quality control (QA/QC) are important parts of the strategy to guard against failure to deliver or subsequent drift. The document does not state what will be done to assure the Secretary of State that the strategy is not just a fantasy.

 

Finally, the way the Government handled the COVID19 crisis is not in line with the strategy. For example, the strategy assumes that there could be up to 315,000 additional deaths within 15 weeks. This assumption has turned out to be politically unacceptable. If the assumption had been based on only, say, 20,000 additional deaths as being acceptable, then the content of the strategy would have had to be very different. This suggests that this strategy document represents something more symbolic than practical.

 

All the issues raised above are to be found within our previous (societal) experience. This must therefore raise the question as to why the Department had failed to incorporate them into its strategy construction process. From this and other evidence collected, we would suggest that this points to a failure much larger than one limited to this Department and this document. If this committee wishes to learn from the past it must reconsider how it extracts and disseminates lessons learnt.

 

Conclusions

 

In this submission we would wish to draw the committee’s attention to flaws in the way society learns from the past. We provide examples of these flaws and suggest that anyone conducting an inquiry might like to learn from the past by taking note of them. We suggest that most inquiries deal with complex rather than merely complicated matters. We demonstrate how we use Disaster Incubation Theory as a way of handling this complexity. Finally, we look at how we might use the Pandemic Strategy as a vehicle around which we can collate and disseminate past learning in the hope that we, as a society, will do better next time.

 

To close, we have to note that a major factor in why lessons are not learnt is politics. We need only go back to 2005, when the Inquiries Act was going through the Houses of Parliament, and note the comments of Lord Heseltine. He suggested that if you were to have an inquiry, you should first reach your conclusion and then choose your chairman before setting up the inquiry. This was to ensure that the right people are blamed. While we accept that politics is a necessary consideration, we must also be aware of and guard against its more malign effects if we truly wish to learn from this resource-intensive process.

Submission

Analysis Part 1 & 2a

In this part of my analysis I use my recommendation quality assessment tool for the first time. The purpose of this assessment is to examine the utility of my assessment process so that I can refine it.

My initial take from this assessment is that there does appear to be a gulf between the political and practical utility of these recommendations. While I can only assume that the politicians who produced these recommendations thought that they served their purpose, this examination does reinforce my concern about whether such recommendations can ensure such events 'will never happen again'.

Jt Cttee Part 1 & 2a
Jt Cttee Part 2b

Analysis Part 2b

 

In the diagram below I map the categorisation onto my perfect world model.

Analysis Part 3

 

My analysis of these recommendations points to them being politically rather than operationally orientated.

 

As these recommendations are aspirational, the large majority have been rated as weak or poor for their operational effectiveness. That is, the probability that these recommendations will drive the change necessary to ensure the country responds better to the next pandemic is seen as being low.

Jt Cttee Pt 3

Last updated:  12 Dec 21
