
Analysing Recommendations

Nearly every public inquiry looks back at similar events in the past and laments that previously made recommendations were not implemented. Time and again we hear the plea that some event must "never happen again", but we see little effort to understand why such events recur.

 

Instead, it is lazily assumed that such failures are due to negligence or stupidity on the part of those who should have acted. Little thought is given to whether the quality of the recommendations had any bearing on their not being implemented.

 

With the hindsight available to them, it will be clear to an inquiry team which recommendations should have been implemented and how. This clarity is rarely available to those faced with having to implement them: in many cases the quality of the recommendations made is highly suspect. While some recommendations are valuable, many are not. At their least harmful they are ambiguous; neither what is required nor the priority it should be given is obvious. At their most damaging, I have seen recommendations that actually caused the next crisis or disaster.

We therefore need some way to assess the quality of any set of recommendations made. I have studied many inquiry reports and have tried to assess the value of the recommendations offered. However, there are currently no tools available to help with this task. The aim of my research is to help develop some of these tools.

At this point I have to note that some forms of inquiry do successfully promote learning. In general terms, the more technical the inquiry and the deeper the inquiry team's understanding, the more successful it is. Here I would cite the world of air-crash investigation as a successful model.

 

The approach that I take is based on a basic premise of performance management. This is the assumption that the clearer and more explicit an instruction is, the more likely it is to be implemented successfully. As someone steeped in the study of failure, what I am looking for are the barriers to successful implementation that reduce the probability that a recommendation will be enacted as prescribed and will achieve the desired intent. The way that I structure my thinking on this risk of failure is described here.

I see three areas where work needs to be done when examining the formulation of public inquiry recommendations:

  • The first is to deconstruct the recommendations to clarify what is required.

  • The second is to categorise each recommendation to determine which parts of the overall system will be changed.

  • The third is to rate each recommendation.
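To make the workflow concrete, the sketch below chains the three stages as functions. This is a minimal illustration in Python; the function names and the dictionary shape are my own assumptions, not part of any published tool.

# Minimal sketch of the three-stage analysis pipeline.
# All names and data shapes are illustrative assumptions.

def deconstruct(text: str) -> dict:
    """Stage 1: break a recommendation into its component parts
    (who is to act, what they are to do, what outcome is needed)."""
    return {"text": text, "actor": None, "action": None, "outcome": None}

def categorise(parts: dict) -> str:
    """Stage 2: assign the recommendation to the part of the
    overall system it seeks to change."""
    return "uncategorised"  # stands in for the analyst's judgement

def rate(parts: dict, category: str) -> str:
    """Stage 3: rate how likely the recommendation is to be enacted
    as stated and to achieve its output and outcome."""
    return "unrated"  # stands in for the analyst's judgement

def analyse(recommendation: str) -> dict:
    parts = deconstruct(recommendation)
    category = categorise(parts)
    return {"parts": parts, "category": category,
            "rating": rate(parts, category)}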

The first issue is the Quality of the Recommendations. To me, quality comes down to how clearly the recommendations communicate the author's intent. There is a clear need to transfer ideas from the inquiry team to those who have to interpret the recommendations so that they can implement them. Communication is never easy: this is shown by the frequent need to deconstruct a recommendation in order to determine who is to act, what they are to do and what outcome is needed, and by the fact that the failure to communicate effectively is raised as an issue in nearly every inquiry report that I have read. It is therefore very important that the inquiry team makes its recommendations as clear as possible, with no room for reinterpretation: even the slightest ambiguity in what is being recommended leaves scope for the implementation team to misinterpret it. Early in my training I was given a little ditty to remind me that communication is difficult even between willing partners. The ditty goes:

“What you thought you heard me say

Is not what I thought I said.

What I said was not what I intended to say

nor is it what I meant.”

I believe that recommendations must be clear and unambiguous if their intent is not to be watered down or lost when a practitioner is forced to interpret them.

 

Therefore in order to assess the technical quality of the recommendations, I see a need to deconstruct them: this forms the first part of my analysis. Here I will use standard thinking taken from the discipline of performance management. I describe a proposed analytical framework for deconstructing recommendations here.

The second part of my analysis is to examine how the inquiry team expected the world to work. In our everyday lives we make assumptions about what will happen and what effect a certain act will have. This has been referred to as a paradigm, a lay theory or a worldview. While each term has its own nuances, for the purpose of this discussion we will assume that they all refer to the same idea. Why this matters when considering the construction of recommendations is considered below at Assumed Worldview. Based on a standardised worldview derived from earlier work, I then categorise each recommendation in order to determine where it will act on the system.

In the final part of the analysis, I look to rate each recommendation. My rating (validity) criteria are based on how likely it appears that the recommendation will (1) be enacted as stated, (2) achieve the immediate end (output) desired and (3) achieve the overall effect (outcome) needed.

A point to note: when I first started work on devising this methodology, I was looking to categorise recommendations in order to produce a list of the components that made up the mental model of an operational system held by the teams running the public inquiries. This work was done as part of my research for my first book, "It should never happen again". When I subsequently started to consider how to judge the quality of the recommendations, I assumed this would form the second part of my methodology and labelled it as such. As with many other assumptions, this one proved to be false!

As of 27 Nov 21, I reversed and relabelled the two parts. Part 1 is now the deconstruction of the recommendations to assess their validity and Part 2 is now their categorisation. While I have made an effort to update my illustrations so they match this change, some may have slipped through the net. Please be aware that some discrepancies may exist in pages that were last updated before this date.


Assessing Recommendation Quality - Part 1

In order to assess whether a recommendation is likely to achieve its ends, we have to determine its quality. Here we have to ask the standard performance management question: "what does good look like?"

 

This is based on the logic that if a recommendation is to achieve its desired end, it must be implemented successfully. The question we are therefore asking is: what are the characteristics of a recommendation that make it more likely to be implemented successfully?

For examples of this type of analysis see the Hine Report, the House of Commons Committee report, the Francis Report 2010 and, most recently, Francis 2013.

Thinking about Recommending Change

At times it would appear that those making recommendations on operational systems forget that they have been tasked to offer their views on how the system should be changed to make it operate more effectively. To judge from the tone of many reports, rather than expecting these systems to work effectively, they in fact expect them to work perfectly. In this light, if inquiry teams are to stop unwanted events from 'ever happening again', they must perfect the systems under examination. It can therefore be taken that inquiry teams believe this remit will be achieved through the implementation of their recommendations.

Within this process, inquiry teams need to recognise that making recommendations is a change management issue rather than a legal one, and their recommendations need to be worded accordingly. This line of thinking affects how I approach validating inquiry recommendations.

First and foremost, I believe that every recommendation must stand alone. I have read many recommendations that only make sense after a deep dive into the report itself. Even then, the wording that inquiry reports use is often nuanced and therefore, in operational terms, ambiguous.

 

I would suggest that unless the action and intent (outcome) of the action are clear, then it is highly likely that the recommendation will not achieve the desired end. Often the intent of a recommendation is implied rather than stated explicitly. When assessing a recommendation I do not credit it with a meaning that is only implied as this wording is open to the vagaries of personal interpretation. What is 'obvious' to one person may not be obvious to another. I therefore see implicit meaning as being a barrier to the successful implementation of a recommendation.

It is clear from the work that I have already done that recommendations written in the active voice are much easier to understand. One of the techniques that I have adopted, therefore, is to rewrite those written in the passive voice into the active voice, as this exposes any ambiguities in their construction.

 

Another early finding is the need to consider the number of actions (or steps) that must be taken to implement the recommendation and, once implemented, what it would take to ensure the desired outcome is achieved. Each step should be considered a barrier to success that will need to be dismantled. Through consideration of this issue it becomes clear which recommendations will be hard to implement and less likely to result in the desired outcome.
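To illustrate why the step count matters, here is a small worked example; the 0.8 per-step probability is purely my own assumption for the sake of the arithmetic, not a measured figure.

# Illustrative only: if each implementation step is an independent
# barrier with the same chance of being cleared, the overall
# probability of success decays quickly as steps are added.
def success_probability(steps: int, p_per_step: float = 0.8) -> float:
    return p_per_step ** steps

for n in (1, 3, 5):
    print(n, round(success_probability(n), 2))
# prints: 1 0.8 / 3 0.51 / 5 0.33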

In the end, I would like to pose one question to those making recommendations: would they expect that any system could be perfected by implementing imperfect recommendations?

My proposed format is in the table below:

Considerations:

[1] Allocating responsibility in an ambiguous way (such as to “Government”) should be avoided: if there is any ambiguity about who is responsible for the action, there is a danger that the appropriate action will not be taken. The responsibility should therefore be placed upon a person, even if this is the head of the organisation (such as the Secretary of State).

[2] The choice of shall, should or may shows the confidence in the recommendation, or the arrogance of the person making it:

 

  • This should be judged by how the evidence is used to justify it.

  • If the recommendation is mandatory, it should include instructions on how it will be ensured that the appropriate action is taken.

  • The inquiry team should be held accountable for any unintended consequences that might emerge from the implementation of their recommendation.

[3] Here we need to consider how clearly the action required is articulated. Within my Seven Dimensions of Risk this would equate to R2 (Transformation).

Here we need to consider the validity of the recommendation based on the specificity of the action required and how it relates to the result desired.

  • At its best, a recommendation will be very clear as to what is required rather than being vague.

  • At its least useful, it will merely recommend a further review, endorse some current action, make a comment or admonish some person or action.

[4] Here we need to be concerned with how clearly the sub-system or process that needs to be changed has been identified.

 

[5] Here we need to consider whether the action proposed will ensure the outcome specified will be delivered.

 

Consideration also needs to be given to what other barriers stand in the way of success.

Within my Seven Dimensions of Risk this would equate to R3 (Results).

[6] Measure each in terms of:

S - Specificity

M - Measurability

A - Attainability

R - Realism

T - Timeliness

S - Stretching (stretch might not be required, as completion will be a stretch anyway!)

N - Necessity

A - Achievability

P - Precision

[7] Consideration needs to be given to whether the outcome desired is clearly stated, realistic and achievable, or is just an aspiration (a utopian goal):

  • Clear statement of the link from the action to the final desired outcome

  • Shows awareness of possible unintended consequences

Within my Seven Dimensions of Risk this would equate to R4 (Effects).

[8] Here we need to consider how the changes to the outcome will be seen in tangible terms.

 

[9] Any other comments not covered above.
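To suggest how the table's columns might be captured in practice, the sketch below records one deconstructed recommendation per object. The field names are my own paraphrase of considerations [1] to [9] above, not a published schema.

from dataclasses import dataclass, field

@dataclass
class DeconstructedRecommendation:
    # Field names paraphrase considerations [1]-[9]; illustrative only.
    responsible: str   # [1] a named person or post, not just "Government"
    imperative: str    # [2] shall / should / may
    action: str        # [3] the transformation required (R2)
    target: str        # [4] the sub-system or process to be changed
    output: str        # [5] the immediate result desired (R3)
    measures: list[str] = field(default_factory=list)  # [6] SMART(+SNAP) tests
    outcome: str = ""        # [7] the overall effect sought (R4)
    manifestation: str = ""  # [8] how the changed outcome will be seen
    comments: str = ""       # [9] anything not covered above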

Last Update: 05 Dec 21


Assumed Worldview - Part 2

So, how do Inquiry teams view the world? Again, I must stress that my focus is only on the aspects of those inquiries that examined and commented on operational matters. I recognise that inquiries serve other purposes and I refrain from commenting on them.

After years of examining inquiry reports, I saw a clear pattern emerging. I describe this in more detail in my book "It should never happen again". All the inquiry reports that I have examined that seek to recommend how an operational system may be improved focus on what went wrong. They then recommend what they think needs to be done to ensure that it 'never happens again'. I have labelled this the Perfect World Paradigm.

 

Built into this approach are several assumptions:

  • The first is that the rest of the system under examination was working as designed.

  • The second is that any system can be perfected.

  • The third is that the recommendations made will perfect the system.

I would postulate that all these assumptions are suspect.

Categorising recommendations

As I deconstructed the 300-plus recommendations in the Francis 2013 report, some issues with categorisation became clearer. When I originally did this exercise, I looked to categorise each according to the part of the system it hoped to change. However, I found that I had set myself a surprisingly difficult task, as this was not always clear. Looking at the table above, it is now clearer to me why I found it so hard. On reflection, I now know that what I was subconsciously looking to identify was the parts needing to be changed: to put it another way, what was to be transformed. However, what is now clear from the table detailing my Part 1 analysis of the Francis 2013 recommendations is that what needed transformation was not always stated. Sometimes a recommendation did focus on the transformation process; in others, however, it only articulated the output or the outcome required from its implementation.

As a result of this work I also updated my recommendation categorisation system in the table below. I have not changed any of the categories but I hope that I have clarified what I mean by each one.

Within the categorisation process there is therefore quite a lot of interpretation required. This may lead to inconsistencies in the way individual analysts categorise individual recommendations. While, from an academic point of view, this might be seen as a weakness in the process, from a practical point of view it is not, when we remember the overall purpose of the exercise. No change programme of this scale will be carried out by an individual. It will always involve teams. These teams will (hopefully) have a collection of views from which they need to develop a cross-understanding of the problem. I see this analysis as providing the baseline for that discussion.

One last comment I will make is on the likelihood of such problems 'never happening again'. While Francis may have been attempting to perfect an imperfect system, he is extremely unlikely to meet this expectation. Even if all his recommendations were implemented as designed (in itself a highly unlikely outcome), he has still left at least 48 gaps. These can be seen in the issues that needed further reviews. This reinforces my proposition that if the ideal is to be practical, it cannot be perfection; Normal Chaos will persist, and hence my continuing quest to identify how we can manage these conditions more effectively. Inquiries that attempt to perfect our systems are therefore doomed to fail.


Basic Model

During the research for my book, a clear pattern emerged of the component parts of operational systems. These are listed in the table above. From these I built a model of how the categories might be seen to interact. I refer to the links between them as 'interdependencies'.

From the work that I have done, I would say that these interdependencies are rarely made clear. Logic suggests that if such a system were to work perfectly, the links between each component would have to be clear and then optimised. Yet no inquiry team made clear in its report what the links are and how they affect each other. It is clear from these reports that they see these links as linear and stable (a characteristic of a Perfect World) rather than complex and dynamic (a key feature of Normal Chaos).

 

Cube Model

The diagram, as presented here, is based on how I see these components interacting if the system were linear. This model can be seen as a baseline model of the inquiries' worldview and is based around my 3-dimensional thinking expressed in the Cube model.

While this model may appear complicated, it is a simplification of the real world. In practice the system will consist of multiples of this model, all interlinked. To reinforce this point I offer Leveson's STAMP model as an illustration. To start to comprehend the true complexity of the system, you have to imagine one of my models at every level of Leveson's; in reality there will be multiples of my model at each level. Opposite I provide an example using the Hine Report into the 2009 flu pandemic seen as a cube. Hine structures her recommendations to cover six areas: I have given each a layer.

While remaining alert to the implications of the cube, for the purpose of analysing inquiry recommendations, I have rolled all of these into one layer to simplify the initial process.

Recommendation Bias

Numbers or Percentage?

In my book "It should never happen again" I analysed 1130 recommendations. In the diagram opposite you can see the number of recommendations that fell into each category.

However, this does not easily answer the question at the back of my mind, which is: "where have they focused their efforts?"

I am therefore questioning whether the diagram should show the number of recommendations (in red) or the percentage (in blue). Where a balanced average would be 3.8%, the top five can be seen to be those calling for:

  • technical changes to procedures: 12.8%

  • future reviews: 11.2%

  • new equipment and services: 10.1%

  • new structures: 9.8%

  • a clearer division of responsibilities: 8.2%.

Whether the process of analysis should use numbers or percentages will be a question that needs to be resolved later in this research.
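Either way, the percentages follow directly from the raw counts. In the sketch below, only the 1130 total and the five percentages come from my analysis; the counts shown are back-calculated for illustration.

# Convert raw category counts into the percentages used for comparison.
# Counts are back-calculated from the published percentages (illustrative).
total = 1130
counts = {
    "technical changes to procedures": 145,       # ~12.8%
    "future reviews": 127,                        # ~11.2%
    "new equipment and services": 114,            # ~10.1%
    "new structures": 111,                        # ~9.8%
    "clearer division of responsibilities": 93,   # ~8.2%
}
for category, n in counts.items():
    print(f"{category}: {n / total:.1%}")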

Analytical Process Part 2

The second part of my analytical process, therefore, is to assess the recommendations in order to determine the inquiry team's focus against my Perfect World model. This work consists of two steps.

The first step (Part 2a) is to identify the category into which each recommendation falls. This exercise is useful in a number of ways:

 

  • It helps to clarify the primary and secondary purposes of the recommendation. For example, the primary purpose may be to recommend that a review is conducted, with a secondary purpose of, say, defining a procedure.

  • Where a recommendation is ambiguous (where the primary purpose is not clear), it will stimulate a debate about what is actually being required.

  • Where the recommendation is multi-faceted, this is also brought to the fore. Here I will take a House of Commons Committee recommendation as my example. The recommendation states:

"Protocols should be established to allow the Armed Forces quickly and at scale to participate, and the NHS should consider ways in which it can be more accommodating of volunteer support in normal times building on the experience and enthusiasm demonstrated during the pandemic."

This recommendation requires two separate actions. The first is that "Protocols should be established to allow the Armed Forces quickly and at scale to participate"; this comes under the Planning 3 category of action. The second is that "the NHS should consider ways in which it can be more accommodating of volunteer support in normal times"; this requires a review and so needs to be categorised as such.
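One way to record such a split is sketched below: the single published recommendation becomes two analysable actions, each with its own category. The structure and field names are mine, for illustration only.

# A multi-faceted recommendation is broken into single-action entries
# so that each can be categorised (and later rated) on its own.
recommendation = {
    "source": "House of Commons Committee",
    "actions": [
        {"text": "Protocols should be established to allow the Armed "
                 "Forces quickly and at scale to participate",
         "category": "Planning"},
        {"text": "the NHS should consider ways in which it can be more "
                 "accommodating of volunteer support in normal times",
         "category": "Further review"},
    ],
}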

Once I have determined, using the table set out above, which category each recommendation falls into, I then, as my second step (Part 2b), map these out on the diagram above. My examples come from Francis 2010, Francis 2013, Kerslake, the Hine Report, the House of Commons Joint Committee and Moore-Bick. This then offers me:

  • A visual representation of where the inquiry has focused.

  • In the areas that are not annotated, I am forced to ask whether this part of the process was working perfectly or whether it was just overlooked by the inquiry team.

  • Where there are many changes to the system, I have to ask what unintended unwanted consequences might emerge as the changes interact.

 


Rating Criteria

My rating (validity) criteria are based on how likely it appears that the recommendation will (1) be enacted as stated, (2) achieve the immediate end (output) desired and (3) achieve the overall effect (outcome) needed.

I have now rated the recommendations produced by the House of Commons Committee and the People's COVID Inquiry. I have used these exercises to refine the rating criteria.

I have now updated the rating criteria: I set out my thinking behind these criteria here.

 

This rating system is designed to indicate the operational validity of the recommendation: that is, its projected ability (a leading indicator) to change the system in a way that makes it more likely to operate as needed to deliver the desired end state effectively and efficiently. Operational validity is therefore rated more highly than aspiration (recommendations that only articulate the desire for a new end state) or politics (those that seek to gain political advantage from the situation).

 

The rating system also considers whether the remit set by the recommendation can be objectively satisfied in the short, medium or long term.

  • Excellent (probability of successful implementation) … full, detailed and unambiguous. It shows a full understanding of what is wanted (the objective), how it is to be achieved (the action), what it will cost, how success will be measured, who is to act and how this action will contribute to the outcome required. The rationale for the action is clear, as are the key decisions needed; the potential anti-goals are understood and the constraints have been considered.

 

  • Good (probability of successful implementation) … the change activity is linked to a measurable outcome making "what success looks like" very clear. The rationale, anti-goals and constraints have been considered.

 

  • Fair (probability of successful implementation) … the change activity is linked to a clear outcome and some barriers have been addressed thus increasing the probability of success.

 

When the recommendation is a Further Review, it would be considered Fair if it makes clear what issues of feasibility are being examined.

 

  • Weak (probability of successful implementation) … the objective of the change is clear, but how it is to be achieved is not, nor is it clear what barriers to success exist.

 

An example of this is a request for funding for a specific output which is not, however, linked to a specific desired outcome.

  • Poor (probability of successful implementation) … while the change recommended is clearly stated, what success looks like is highly ambiguous. Little else has been considered.

Examples of this may be a vague call for funding for a generalised outcome, or for a review of some overarching facet of the system. As such it is too open-ended for any action to be taken to satisfy the remit.

  • Failed … where neither the action to be taken nor the output/outcome required is clear, or where it is clear that the action proposed will not achieve the ends desired.

 

  • Non … where it does not propose an alternative action. These come in the form of a comment, an endorsement of a current action or an admonishment.

  • Bad … potentially harmful: implementing the recommendation may lead to more harm than good. Such recommendations overlook potential unintended consequences that outweigh the benefits foreseen.
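For analysis, the scale above can be encoded as an ordered value, as in the sketch below. The labels are the scale's own; the numeric ordering (with Bad below Failed, on the grounds that implementing it may cause net harm) is my own assumption.

from enum import IntEnum

class Rating(IntEnum):
    # Higher values = more likely to be implemented successfully.
    BAD = 0        # potentially harmful if implemented
    NON = 1        # proposes no alternative action
    FAILED = 2     # neither action nor output/outcome is clear
    POOR = 3       # change stated, but success criteria highly ambiguous
    WEAK = 4       # objective clear; path and barriers are not
    FAIR = 5       # clear outcome, some barriers addressed
    GOOD = 6       # measurable outcome; rationale, anti-goals, constraints
    EXCELLENT = 7  # full, detailed and unambiguous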

Visualisation of Results

Once the recommendations have been scored, I would look to benchmark them against other inquiries. As inquiries produce different numbers of recommendations, the only way that I can see to compare them is to look at the percentage of recommendations that fall into each category. This number is then displayed in a graph; I provide two examples below. So which of the two produced the better set of recommendations? That is for you to decide. I would say again: my work is designed to catalyse debate on such matters.
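A sketch of how such a benchmark chart might be produced follows. The two inquiries and all the percentage values here are placeholders, not results from my analysis.

# Benchmark two inquiries by the percentage of recommendations in each
# rating band. All values are placeholders for illustration.
import matplotlib.pyplot as plt

bands = ["Excellent", "Good", "Fair", "Weak", "Poor", "Failed", "Non", "Bad"]
inquiry_a = [2, 10, 25, 30, 20, 5, 6, 2]   # hypothetical percentages
inquiry_b = [0, 5, 15, 35, 30, 8, 5, 2]    # hypothetical percentages

x = range(len(bands))
plt.bar([i - 0.2 for i in x], inquiry_a, width=0.4, label="Inquiry A")
plt.bar([i + 0.2 for i in x], inquiry_b, width=0.4, label="Inquiry B")
plt.xticks(list(x), bands, rotation=45)
plt.ylabel("% of recommendations")
plt.legend()
plt.tight_layout()
plt.show()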

Miracles

A recurring theme within my operational failure research was the idea that these organisations relied on miracles to turn their activity into a successful outcome. While each part of the organisation focused on its own processes, there was surprisingly little effort put into ensuring that all this activity produced the result desired. In failing organisations there was often a gap that they relied on a miracle to bridge.

What am I trying to achieve through rating recommendations?

 

The reason that I am looking to rate recommendations is to indicate whether they are likely to have the effect desired. In terms of performance management, these ratings would constitute 'leading indicators'. In this context the ratings offer an indication of whether the recommendation (as written) is likely to produce the desired result (or 'end state') in what, I remind you, is a dynamic system. While not seeking to question the honourable intent of any inquiry team, I have, once again, to point out the tensions that arise when a politically motivated inquiry is expected to produce practically orientated improvements. However, before we can address that issue, I need to discuss the constructs of 'end state' and 'likely' in more detail.

 

In this context, the term 'end state' does not refer to a single state at a fixed moment in time. In a dynamic system such as the ones I am talking about, the term should be seen as referring to something more complex. Therefore, as we deconstruct this idea further, we need to use a single dimension as our point of reference. In this case I will make this 'time', and so I will look at the 'state of the system' at different moments in time. For ease of illustrating my point, I will label these as short, medium and long term. Let us now look at how this manifests itself in practical terms.

 

Within this analysis of recommendations, I take the short term to be the immediate effect of the change proposed (that is, the effect of the act of transformation). The medium term would be the immediate consequential effect that the particular subsystem produces as its input to (influence on) the rest of the system. The long term would consider whether the changed output from that particular sub-system makes the desired change to the system outcome more likely. In practice, however, more than these three steps are likely to be required. Sometimes the steps required are clear and sometimes they are not. I refer to the difference in the number of steps required as the non-equidistance of recommendations.

 

Likelihood is about the probability that a recommendation will produce the outcome desired. A basic tenet of risk management concerns the identification of the barriers to a successful outcome and then overcoming them. Therefore, when looking at recommendations, 'likelihood' is about considering what will prevent the desired end state from emerging. While some of these barriers are obvious, others are not. Where the steps necessary are obvious, the barriers can be identified and managed. Where the steps have not been identified, the barriers will remain hidden and unmanaged. An assumption that I have therefore adopted is that where the steps required to implement a recommendation are only implied, there is less likelihood of the desired outcome being obtained.

 

As I have stated when developing my framework for deconstructing recommendations, clarity of the action required is assumed to be very important. Therefore, in line with a basic assumption taken from performance management, my framework tries to clarify [1] who is responsible for delivering the change, [2] the imperative for the change, [3] what action is needed, [4] what entity is the target of the action, [5] what new output is desired, [6] how this change will manifest, [7] how the overall system will change (outcome) and [8] how this will manifest. This approach to achieving what is intended is action orientated. The psychologist Gary Klein offers us a supplementary way of analysing this communication of intent.

 

In work produced in 1984, Gary Klein offered what might be seen as a stepped process designed to improve the communication of intent. This is appropriate here, as a recommendation can also be seen as a communication of intent. The Klein process has seven steps. The first four are relatively straightforward; the final three will take inquiry teams outside their current comfort zone, as they relate to the practicality of implementing the intent.

 

  • The first step is to be clear about the purpose of the task (that is, to set the higher-level goals). In the terms that I am using, I see this as equating to the outcome.

 

  • The second step is to be clear about the objective of the task (he talks of producing "an image of the desired outcome"). Here he does not say 'higher-level goal' so I see this as being, in the terminology that I am using, the output.

  • Next, he talks about 'the sequence of steps in the plan'. To me this is about understanding and establishing the 'distance' (see non-equidistance) between the state of the system we have now and the one we desire, and so avoiding having to rely on a miracle.

  • The fourth step Klein suggests is the need to articulate the rationale for the plan. In the military (whom Klein was advising at the time he produced this work) they talk of 'expressing the commander's intent'. This is so that when the plan falls apart, for it is recognised that it will, his subordinates are clear about the outcome he wants to achieve and how he sees it being achieved. With this in mind, his subordinates can adapt their efforts (bricolage) to help him achieve his intent. It should be noted that in the past it was, with rare exceptions, a 'he'. (Within the Normal Chaos framework this comes under the idea of 'self-organisation'.) In performance management this intent-led approach is common and is labelled 'management by objectives'. This practice could easily be applied to inquiry recommendations, as the team have already done the work to make it possible: they have explained what went wrong and what needs to be done, but this detail is rarely clearly linked to the recommendations. To achieve this step, all the inquiry team would need to do is restructure how they write their reports so that each recommendation is supported by the rationale behind it. Currently this rationale is lost within the overall narrative.

  • The fifth step Klein suggests is to articulate the key decisions that will have to be made while implementing the intent. In terms of the recommendations made, this would require those making them to start to think through what is really involved. Here we see the difference between politically orientated recommendations, which are about inspiring grand visions, and practical recommendations, which focus on operational effects.

 

  • Klein's sixth step is to consider anti-goals; he sees these as unwanted outcomes. In terms of recommendations, these unwanted outcomes come in two flavours: the first is 'barriers to success' and the second 'unintended consequences'. These seem rarely to be considered by those making recommendations, yet both have the potential to make implementing a recommendation at least as harmful to the system as the original problem. This is closely linked to Klein's final step.

 

  • Klein's final step is to consider constraints and other issues that might make the item (in our case, the recommendation) unworkable. This problem can often arise out of the methodology used by inquiries. Their review process seeks to identify what they do not like (what they consider to be mistakes) and to offer a remedy. Unfortunately, they often fail to understand why the 'error' occurred, despite a wide range of academic literature recommending that they do so. In turn, this leads to flawed recommendations that fail to appreciate the context (including the constraints) within which the organisation and its individuals are forced to operate. In terms of the Normal Chaos framework, we are talking about understanding the forces that shape the system (the attractors) and the energy driving its dynamics. Only if these align with the changes recommended is there any likelihood of the change having the effect desired.
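Taken together, Klein's seven steps could be applied as a simple checklist when assessing a recommendation. The sketch below paraphrases the steps above; the tallying is my own illustrative device, not a validated scoring method.

# Klein's seven steps as a yes/no checklist for a recommendation.
KLEIN_STEPS = [
    "purpose (higher-level goal / outcome) is clear",
    "objective (image of the desired output) is clear",
    "sequence of steps from here to there is laid out",
    "rationale (the 'commander's intent') is stated",
    "key implementation decisions are identified",
    "anti-goals (barriers, unintended consequences) are considered",
    "constraints that could make it unworkable are considered",
]

def klein_score(answers: list[bool]) -> str:
    # Count how many of the seven steps a recommendation addresses.
    assert len(answers) == len(KLEIN_STEPS)
    return f"{sum(answers)}/{len(KLEIN_STEPS)} steps addressed"

# Example: purpose, objective and rationale stated; the practical
# steps (3, 5, 6 and 7) skipped.
print(klein_score([True, True, False, True, False, False, False]))  # 3/7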

Summary of the Process

 

Klein's stepped process gives us some criteria that can be used to rate recommendations. While the judgement involved may be subjective at present, it would be desirable for the criteria to become objective in the long run. These subjective criteria do, however, provide us with a good start.

 

Part 1 of this analytical process enables us to deconstruct each recommendation to make clear its purpose (outcome) and objective (output). It also gives us some idea of how clearly the steps necessary have been considered. Unfortunately, the way inquiry reports are currently structured means that the rationale for the changes proposed is often lost within their narrative. This is likely to remain the case in the immediate future and will therefore continue to present hidden barriers to success that need to be acknowledged within this rating system.

 

Part 2 of this analytical process enables us to see the focus of the inquiry team. It helps us ask questions about their priorities and biases.

 

Part 3 of the process is where the recommendations are rated for their operational validity. From Part 1 we can see whether the recommendations are operational or political. Operationally orientated recommendations tend to focus on the change required (as set out in the framework), whereas political ones tend to be more aspirational. In general, politically orientated recommendations have many more implied steps, where the barriers to success are more difficult to identify, making their successful implementation less likely. As such, political recommendations tend to be weak in terms of precipitating the necessary operational change. They rely on a different power dynamic: one that exerts pressure on a certain person to be seen to act. As a consequence, the outcome tends to be one where the recommendation is, in the short term, seen to have been implemented but has little long-term effect. (An example from the day I wrote this text was the inquiry set up to examine the death of Arthur Labinjo-Hughes, which had, on the surface, striking similarities to the Victoria Climbié case that resulted in the 2004 Laming Report. This must raise questions over what had been learnt from that previous case, whether the recommendations had been implemented and, if so, whether they too had become subject to practical drift.) In addition to operational considerations, recommendations need to be reviewed for their approach to understanding the key decisions that need to be made, possible anti-goals and constraints. These considerations are embedded in the rating scheme set out above.


Last Update: 13 Jan 22
