In this section I will explore why we seem to fail to learn from the past. While public Inquiries and "lessons learnt" sessions are common, and while the public appeals for it (whatever 'it' is) to "never happen again", it does. The easy answer is to put these repeated failures down to the stupidity or negligence of those involved. However, we have to ask whether there is something deeper going on.
Before this can be established, we need to recognise the multiple purposes of an inquiry, each creating its own dynamics. Some of these are:
The need to determine what happened and why.
The need to give victims a voice (to let them be heard).
The need to hold the system to account.
The need to learn from this event.
The conceptualisation that the Inquiry team had of their task and their analytical method.
The accuracy with which witnesses could recount their actions and intentions on the night.
On top of the internal dynamics are the external dynamics. While, in a perfect world, these should not influence the course of the inquiry, in reality they are likely to have considerable effect. These may include:
Lobbying by victim support groups.
Lobbying by those likely to be held to account.
Politicians for their own purposes.
The press for its own purposes.
These dynamics create an attractor that steers the Inquiry in an overall direction. What has to be questioned is the priority (energy) given to the desire to find the root cause behind the way those involved acted. One of the major dynamics of contemporary inquiries is the desire of victims to “see justice”.
Of these purposes, my research focuses on whether Public Inquiries are a suitable vehicle to extract operational lessons and to provide useful recommendations about how the system might be improved. To this end, I am examining whether we might be able to judge the likelihood of recommendations achieving this goal.
My research to date suggests that two factors work against us learning from the past: the very nature of the public inquiry, and politics (both with a 'P' and a 'p'). These provide the themes for my research.
The theme running through all my work has been to seek ways to prevent organisational failure. Two key issues have emerged. The first concerns the complexity of everyday life; I examine this issue under the banner of "Normal Chaos". The second issue is our seeming inability to learn, at the organisational level, from previous experience. This failure leads to the often heard cry of "This should never happen again" or variants thereof. I will focus on this subject under the banner of "organisational learning".
As with all such fields of study, organisational learning contains many schools of thought. To me, the main focus of this field is on how the lessons from the past become embedded within an organisation. Here the assumption is that we know what needs to be taught, and it is the process of embedding the lessons that needs to be optimised. This is not my area of interest. My area of interest is how we extract lessons from past operations and turn them into an effective failure prevention tool - in particular one that enhances foresight and prevents the erroneous use of hindsight. My prototypical methodology for such work comes from air accident investigation practice, which gives me hope that such organisational learning is possible.
As an example of such a tool, I would offer the one that I developed for use by an organisation's risk and audit committees. This can frame a conversation around whether an organisation's risk management processes had identified the threat posed by the likes of COVID19 and, if not, why not. To address this question, I produced a discussion document designed to stimulate that debate. As with most of my work in this area, the model is based on Barry Turner's Disaster Incubation Theory. [Why I chose this model is covered in some detail in my book "In Pursuit of Foresight".]
For the purpose of this work, the organisations that I focus on are public bodies. This is because it is these that are of most interest to the general public. It is also important to note that I am not suggesting that no learning takes place in such organisations; that would be wrong. One of my key assumptions is that, as individuals and organisations, we aspire to learn from our past experience. I believe that, at an individual level, we learn and develop on a daily basis. However, at an organisational level, we are less successful. I justify this statement within the context of the rest of my work, which examines why organisations fail. In the opening pages of my book "It should never happen again" (published in 2013), I listed a series of statements from previous inquiries expressing regret that some organisation had failed to learn some pertinent lesson from the past.
This regret continues. Recently, in his Phase 1 Report (published October 2019) on the Grenfell Tower Fire, Sir Martin Moore-Bick stated his regret that the London Fire Brigade had failed to learn the lessons from the 2013 coroner's report following the 2009 Lakanal House fire in Southwark, South London. At the start of the Manchester Arena Inquiry in September 2020, Andrew Roussos, whose daughter was the attack's youngest victim, told the public inquiry that lessons “should have been learned and in place” after 9/11 and the subsequent London bombings (7 July 2005). The effectiveness of the UK Government's response to the COVID19 pandemic has shown that, as a nation, the UK failed to learn all the key lessons from previous pandemics, despite being ranked second for preparedness in the Global Health Security Index published in October 2019.
So why do organisations "fail to learn"? Alternatively, do these failures raise a larger question about the process of learning? Might we conclude from these repeated failures that the process for learning from such events is itself flawed? Are our expectations of what can be learnt also fundamentally flawed? It is this issue, along with the harm it can do to society, that I want to discuss in the blog that follows. At this point I would emphasise that it is the process that I am examining. Where I use examples, it is only to illustrate my point. I do not wish for these to be seen as an attack on any individual, nor do I wish to advocate for any particular alternative conclusions. All I am attempting to do is to suggest that the process as currently practised may be leading us to draw false lessons.
In my 2013 book "It should never happen again", I explored the type and quality of the recommendations produced by public inquiries. This work identified the generally poor quality of many of the recommendations made. Their quality was judged to be poor because the action required was either ambiguous or couched only in aspirational terms, and they provided little that bridged the gap between action and results. This gap is nicely illustrated in the widely used cartoon "Then a miracle occurs". The cartoon captures the gap in organisational activity design that should link action to the result desired. The premise it suggests is that organisations are too focused on acting, rather than on ensuring the action generates the results desired. We see inquiry reports that contain dozens of recommendations and dozens of actions, with no proof that they will achieve the result desired.
There is a second problem, and that is the approach inquiries take. It is clear from reading inquiry reports that the inquiry team tries to determine what went wrong (in fact, what did not work perfectly) in a given circumstance and then makes recommendations that would have, or should have, prevented the various errors identified from reoccurring. The inquiry's focus is on the past. The clear assumption being made is that, if we learn how we might have prevented some occurrence in the past, it will help us prevent a similar event in the future. Unfortunately, this bedrock assumption of the inquiry process is false. Each event proves to be the coming together of a myriad of factors that create a unique event. Inquiries fail to differentiate the unique from the general, and therefore fail to draw out of these unique events the general lessons that might help us prevent, or react better to, similar events.
As part of their approach, inquiries tend to see the issue that they are addressing in isolation rather than within a context. This makes them prone to making Type 2 errors. In terms of organisational failure, a Type 1 error is, in simple terms, the incident that occurred due to a failure of safeguards. A Type 2 error is the failure of the organisation (often its bankruptcy) due to the weight of bureaucracy caused by having too many safeguards in place. I have not yet seen a report that discusses, or even considers, the potential unintended consequences of the actions it proposes.
The next issue I wish to raise is the poor application of hindsight by inquiry teams. In the case of the Grenfell Tower Fire inquiry, Sir Martin Moore-Bick clearly identifies the danger of using hindsight, and then his report goes on to do that very thing! [For more detail follow the link] Before they can make valid judgements, inquiry teams need to take great care to determine what was known at the time a decision was made, what could or should have been known, and what was not clear until after the event. To conflate these three subsets will lead to false conclusions and therefore to erroneous lessons being learnt.
The final, and perhaps most ironic, issue is that those conducting inquiries fail to learn from the conduct of previous inquiries and from other work that has examined related subjects. A clear example is provided by the Terms of Reference for the Saunders Inquiry into the Manchester Arena Bombing. These Terms of Reference state that the Inquiry needs to consider "The adequacy of the (planning process detailed) above, including their compliance with relevant planning, preparation, policies, systems and practices." This type of requirement is common, and it leads inquiries to list in detail each "failure to comply" and to exhort the organisation to do better next time. The inquiry process is based on the legal process which, in turn, is based on a rule-based system; it fails to learn from the extensive body of research that explains the limitations of rule-based systems. These inquiries therefore recommend more rules, again without considering the unintended consequences of doing so. There is clear evidence that these "new rules" are often factors in the next major organisational failure.
Public inquiries undoubtedly serve an important social purpose. Amongst other things, they provide a forum for determining what happened; they allow victims to be heard; they provide a forum for investigating the application of, and compliance with, the law and regulations; and they provide a forum in which those in authority can be held to account. However, such forums see the world through the lens of rules and so tend to recommend more rules. This is despite extensive research showing that the verbatim following of rules is not a suitable way to assess the management of complex and dynamic situations. There is the question as to whether the average inquiry team has what Vaughan calls the necessary “seat of understanding” to comprehend what they are seeing and hearing. What Vaughan is questioning is whether the individuals have the training, knowledge, experience and current data required to make the appropriate judgements. More colourfully, Scott Snook refers to such instances as “pigs looking at watches”, by which he means highly intelligent cultures not understanding what they are seeing. For an alternative, and maybe more appropriate, model for evaluating decisions within complex and dynamic situations, we might need to look towards the system for air accident investigations.
We already have enough evidence to strongly suggest that the public inquiry format does not lend itself to providing useful lessons on how we should handle complex and dynamic situations. Their recommendations are often, at best, useless or, at worst, dangerous; they constitute a gross waste of public resources, not only in their conduct but also when organisations try to implement them.
To support this assertion, at Grenfell Analysis I provide an analysis of Sir Martin Moore-Bick's Phase 1 Report (published October 2019) into the Grenfell Tower Fire. I provide this example as I was asked by a third party to conduct the review. [The client expressed no objections to me sharing the analysis more widely.] A copy of my analysis is available from this page, where I also lay out what, to me, are the key issues. This page therefore provides the starting point for the discussion on my blog.
I have now reconfigured my framework for the analysis of recommendations to provide a more repeatable and robust methodology. It is now a three-part process and includes criteria for rating recommendations. I have also set up the air-accident inquiry process as a benchmark.
For my analysis of the COVID pandemic response I will be using Disaster Incubation Theory as my analytical framework. In this context I am looking to use the Francis Report into the Mid Staffordshire NHS Trust as my 'Notionally Normal Starting Point'. To this end I have re-analysed the Francis reports published in 2010 and 2013. This analysis can be found here.
I have also added my analysis of the "independent review of the UK response to the 2009 influenza pandemic" produced by Deirdre Hine and published in July 2010. This report is important because it sets the scene for the UK Government's 2011 Pandemic Strategy.
I am now in the process of analysing the recommendations produced into the response to the COVID19 pandemic. To date these include:
The Sixth Report of the Health and Social Care Committee and Third Report of the Science and Technology Committee of Session 2021–22 – Part 2 Assessment: "Coronavirus: lessons learned to date" produced on 12 Oct 21.
The People's COVID Inquiry, chaired by Michael Mansfield QC. Their report was produced on 1 Dec 21.
On 27 Nov 21 I restructured my methodology for analysing recommendations, reversing the order of the steps to be taken. I now recommend that you first deconstruct each recommendation (Part 1) and then categorise them (Part 2). I am in the process of updating the illustrations on this site to conform to this new methodology. However, as I may have missed some, please check whether the page you are looking at has been updated (that is, dated after 27 Nov 21).