
Explaining My Terms

Black Swan


The idea was developed by Nassim Nicholas Taleb to explain:

 

  • The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.

  • The non-computability of the probability of the consequential rare events using scientific methods (owing to the very nature of small probabilities).

  • The psychological biases that blind people, both individually and collectively, to uncertainty and to a rare event's massive role in historical affairs.

 

It is also about how incomplete empirical data can lead to the wrong conclusion being drawn.

Bricolage

 

Weick (1993) talks of it as meaning to remain creative under pressure ... precisely because bricoleurs routinely act in chaotic conditions and pull order out of them ("War is chaos and therefore so should be the training!").


Weick, K. E. (1993), "The collapse of sensemaking in organizations: The Mann Gulch disaster", Administrative Science Quarterly, Vol.38, No.4, pp.628-652.

 

Van de Ven, A.H. and Johnson, P.E. (2006:806) say that both practitioners and scientists engage in what Lévi-Strauss (1966) termed bricolage, improvising with a mixed bag of tools and tacit knowledge to adapt to the task at hand.
 


Constructive Pessimism

 

Whose boss has not beseeched them to "be more positive"? This tendency to "always look on the bright side" is referred to as optimism bias. The problem with optimism bias is that managers become less mindful of the risks within their enterprise and therefore of what might go wrong. The consequence of this limited oversight is that they can be caught out by the unexpected, and they often are!

 

To prevent being caught out this way, managers need not only to think about what might go wrong but also to prepare how they might respond. This preparation may take the form of a specific contingency plan, the provision of reserve capability, or simply mental preparation and being clear about what needs to be done.

 

In addition, constructive pessimism is about managers being ever watchful for signs of unforeseen problems emerging from within their organisation or from some external factor. This approach is central to the mindfulness espoused by proponents of "high reliability".

 

While the expectation of failure might be seen by some as pessimistic, it does bring advantages and can therefore be seen as constructive. This approach, encapsulated in the "assume breach" mantra, is constructive because it helps to promote the elusive phenomenon of foresight.

Cross-Understanding


Cross-understanding is a group construct whereby each member of the group has an understanding of every other member's mental model. The extent to which group members have an accurate understanding of one another's mental models evolves through intermember communications … (of) members' factual knowledge, cause-effect beliefs, sensitivity to the relevance of particular issues, or preferences.

Huber, G.P. and Lewis, K. (2010), “Cross-Understanding: Implications For Group Cognition And Performance”. Academy of Management Review, Vol.35, No.1, pp.6-26.

Efficiency Thoroughness Trade-off (ETTO)

 

Eric Hollnagel states that "The ETTO principle refers to the fact that people (and organisations) as part of their activities frequently - or always - have to make a trade-off between the resources (time and effort) they spend on preparing an activity and the resources (time and effort) they spend on doing it."

 

He goes on to say "The trade-off may favour thoroughness over efficiency if safety and quality are dominant concerns, and efficiency over thoroughness if throughput and output are the dominant concern." (This throughput concern has been referred to as Production Pressure.) Here you can see the mechanism that causes organisations to swing between Type 1 and Type 2 errors.


Error-Inducing Organisation

 

In the 1999 edition of his book "Normal Accidents", Charles Perrow explores the idea of an 'error-inducing organisation'. He sees these organisations as having a number of clear characteristics. These include:

  • An authoritarian organisational structure, where power has been centralised, giving one person unquestioned, absolute authority over a system.

  • Where there are factional group and power interests.

 

  • Where the latency period of issues may be longer than any decision-maker's career: that is, where short-term personal interests trump the interests of the organisation.

 

  • Where there are ambiguous mental models of the mechanisms that rule the organisation; these enable inaccurate mental models to be created, leading to inappropriate decision-making.

  • Where there are dysfunctional systems and processes.

  • Where “forced errors” occur – [for example where workers are forced to “do wrong (break rules) or be sacked”].

  • Where “prescribed” behaviours are hard to enforce.

  • Where the system does not breed cooperation; elsewhere this is referred to as "Pseudo Teams".

  • Where there are overriding economic and other pressures to perform; elsewhere this is referred to as "Production Pressure".

  • Where failure appears to be continuous, but recovery is still possible.

  • Where the organisation relies on complex (or even complicated) systems and equipment which are barely maintained.

  • Where blame is transferred outwards from the centre.

 

What is depressing is that many of these characteristics can be seen within most organisations. For many reasons explored elsewhere (see Illusions), the problem is often caused by the organisation's failure to acknowledge that it has these problems. One example of this can be seen in the 2021 O'Loan Report into the UK's Metropolitan Police Force. She stated: 'Concealing or denying failings for the sake of an organisation's public image is dishonesty on the part of the organisation for reputational benefit, and constitutes a form of institutional corruption.'


Error Typology

 

  • Type 1 error … these are the events that are seen as organisational crises or disasters.

  • Type 2 error ... this is when so much effort is put into mitigating risks that the bureaucracy severely impedes the organisation's operations. At its most severe, this type of error can cause the organisation to bankrupt itself.

  • Error of the 3rd Kind  (Mitroff) ... this is when the right action (a prescribed routine) is taken at the wrong time, in the wrong circumstances or on the wrong objective.

Emergent Event

 

… Events that appear as the result of interactions within a system. The ones most often labelled emergent are the rarer and unexpected ones.

Emergence occurs when an entity is observed to have properties its parts do not have on their own: properties or behaviours which emerge only when the parts interact in a wider whole … the whole being more than the sum of its parts.

Failure of Foresight


My preoccupation is Failure of Foresight or, more accurately, its avoidance.

Turner (1976b) described “Failure of Foresight” (a phrase that he acknowledged was borrowed from another – Wilensky) as:

 

The collapse of precautions that had hitherto been regarded culturally as adequate … some large-scale disasters that are potentially foreseeable and potentially avoidable, and that, at the same time, are sufficiently unexpected and sufficiently disruptive to provoke a cultural reassessment of the artefacts and precautions available to prevent such occurrences. (1976b:380)

 

In accounting for failures of foresight, undesirable events known about in advance but which were unavoidable with the resources available can be disregarded. In addition, little time need be spent on catastrophes that were completely unpredictable. Neither of these categories present problems of explanation. In the former case, because of lack of resources, no action was possible. In the latter, no action could have been taken because of a total lack of information or intelligence. (1976b:380)

‘accumulation of an unnoticed set of events which are at odds with the accepted beliefs about hazards and the norms for their avoidance’ (1976b:381)

 

 Therefore, to me, Foresight is the avoidance of this failure.

Turner, B.A. (1976a), “The Development of Disaster – A sequence model for the analysis of the origins of disaster”, Sociological Review, Vol.24, pp.754-774.

Turner, B.A. (1976b), “The Organizational and Interorganizational Development of Disasters”, Administrative Science Quarterly, Vol.21, pp.378-397.

Turner, B.A. (1978), Man-made Disaster, Wykeham, London.

Turner, B.A. (1994), “Causes of disaster: sloppy management”, British Journal of Management, Vol.5, pp.215-219.

Turner, B.A. and Pidgeon, N. (1997), Man-made Disasters, Butterworth-Heinemann, London.

Forcefield Analysis - Kurt Lewin’s model

 

The social psychologist Kurt Lewin developed a 'force field analysis' model (1951) which describes any current level of performance or being as a state of equilibrium between the driving forces that encourage upward movement and the restraining forces that discourage it. Essentially this means that a current equilibrium exists because the forces acting for change are balanced by the forces acting against change.

 

  • The driving forces are (usually) positive, reasonable, logical, conscious and economic.

 

  • The restraining forces are (usually) negative, emotional, illogical, unconscious and social/psychological.

 

Both sets of forces are very real and need to be taken into account when dealing with change, or managing change, or reacting to change.

In force field analysis, change is characterised as a state of imbalance between driving forces (e.g. new personnel, changing markets, new technology) and restraining forces (e.g. individuals' fear of failure, organisational inertia). In the case of crisis, the change can be seen as being involuntary. To achieve change towards a goal or vision, three steps are required:

  • First, an organisation has to unfreeze the driving and restraining forces that hold it in a state of quasi-equilibrium.

  • Second, an imbalance is introduced to the forces to enable the change to take place. This can be achieved by increasing the drivers, reducing the restraints, or both.

 

  • Third, once the change is complete the forces are brought back into quasi-equilibrium and re-frozen.

 

Thomas (1985) explained that although force field analysis had been used in various contexts, it was rarely applied to strategy. He also suggested that force field analysis could provide new insights into the evaluation and implementation of corporate strategies.
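As a minimal sketch of the force-balance idea (the force names and their 1-5 weightings below are my own illustrative assumptions, not Lewin's), the current quasi-equilibrium can be pictured as a simple score sheet in Python:

# Minimal sketch of a Lewin-style force field score sheet.
# All force names and 1-5 weightings are illustrative assumptions.
driving_forces = {"new technology available": 4, "changing market demands": 3, "new personnel": 2}
restraining_forces = {"fear of failure": 4, "organisational inertia": 3, "cost of retraining": 2}

def net_force(drivers, restraints):
    # Positive: drivers outweigh restraints and movement is favoured.
    return sum(drivers.values()) - sum(restraints.values())

balance = net_force(driving_forces, restraining_forces)
if balance > 0:
    print(f"Drivers lead by {balance}: change is favoured.")
elif balance < 0:
    print(f"Restraints lead by {-balance}: the quasi-equilibrium holds.")
else:
    print("Forces are balanced: the system sits in quasi-equilibrium.")

Unfreezing then amounts to deliberately shifting one or more of these weights (raising a driver, lowering a restraint, or both) before re-freezing the new balance.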


Groupthink

 

In his book Victims of Groupthink, Janis (1972, 1982) argued that extreme pressures for unanimity can build in a cohesive group that confronts serious threats (high stress) and lacks norms of deliberative decision making. These pressures cause decision makers to censor any misgivings they may have, ignore outside information, and overestimate the group’s chances of success.

“members of any small cohesive group tend to maintain esprit de corps by unconsciously developing a number of shared illusions and related norms that interfere with critical thinking and reality testing.” … Groups that get along too well don’t question assumptions or confront uncomfortable facts.

He sees some of the features of Groupthink as being:

  • Explore only a few alternatives

  • Illusion of invulnerability

  • Illusion of unanimity

  • Suppression of personal doubts

  • Self-appointed mind-guards.

This links to my use of illusions within my Normal Chaos Framework.


Fantasy documents 

... documents that (1) are rarely tested against reality and (2) draw from a quite unrealistic view or model of the organisation.  The fantasy is that everything will work right first time, that every contingency is known and prepared for (as in the Perfect World Paradigm).

This term is taken from an article by Lee Clarke & Charles Perrow (1996:1041), "Prosaic organizational failure". The article focuses on the often symbolic nature of organisational planning.


"Hail Mary"

 

This usage of the term comes from American football. A Hail Mary pass is a very long forward pass, typically made in desperation, with an exceptionally small chance of achieving a completion. Due to the difficulty of completing such a pass, the name makes reference to the Catholic "Hail Mary" prayer for divine help.

I use this term to cover any move which has a very low probability of achieving the desired outcome.


Induction fallacy

 

… Started with Hume: familiarity of evolving knowledge blinds you to the truth … often as a result of incomplete empirical data … for example, when you have seen only white swans you think that all swans are white.

Liability of Newness


 

There is often little appreciation of the time and effort that will be needed to implement even the apparently simplest recommendations. "Liability of newness" [Stinchcombe, 1965] is a term that embraces all the practical barriers that stand between something's introduction and its successful implementation, and the unwanted unintended consequences these barriers create.

Mental Models

 

We all make assumptions about how the world works and we use these assumptions to guide us through our daily lives. This mental model includes assumptions about what exists, how these phenomena interact (interdependencies) and what the results of these interactions will be. We use these assumptions as part of the Efficiency Thoroughness Trade-off necessary within every decision we make: they are part of the short cuts (heuristics) we use.

 

Mental models can be seen as our theories of how the world works. These theories will have been formulated using a range of different methodologies. Some of these methodologies are very deliberate (such as academic theorising) and others are based on the subjective accumulation of experience. This latter group is generally referred to as 'lay theory': it includes myths, folklore and other forms of socially accumulated knowledge.

 

These theories, no matter how they are derived, will contain a mixture of accurate assumptions and false ones. This means that some of the results of these interactions will turn out as expected and some will not. This is the basis on which learning through trial and error works.

 

If we are to learn from experience, it is necessary to have a clear understanding of the assumptions that make up our mental model so that they can be validated as we go.

Muddling Through


 … 1959 Charles E. Lindblom …

P.81: ”The method of successive limited comparisons” …

 

  1. Selection of value goals and empirical analysis of the needed action are not distinct from one another but are closely intertwined.

  2. Since means and ends are not distinct, means-end analysis is often inappropriate or limited.

  3. The test of a "good" policy is typically that various analysts find themselves directly agreeing on a policy (without their agreeing that it is the most appropriate means to an agreed objective).

  4. Analysis is drastically limited:

    1. Important possible outcomes are neglected.

    2. Important alternative potential policies are neglected.

    3. Important affected values are neglected.

  5. A succession of comparisons greatly reduces or eliminates reliance on theory.

Non-equidistance

 

'Distance' refers to the number of steps needed to implement a recommendation, and then the number of steps needed to ensure that the change designated in the recommendation achieves the desired outcome. As each step also offers up potential barriers to success, the more steps required, the less likely it is that the desired outcome will result from the change specified.

 

Some recommendations can be seen to have only a few well-defined steps while others have many ill-defined steps. The first is more likely to achieve the desired end state than the second. I refer to the two recommendations as having 'non-equidistance'.
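A rough back-of-the-envelope way to see why the step count matters (the per-step success probability is my assumption for illustration, not part of the definition):

# Sketch: if each implementation step is assumed to succeed independently
# with probability p, a recommendation needing n steps succeeds with p ** n.
p = 0.9  # assumed chance of clearing any single step

for n in (3, 10):
    print(f"{n} steps: overall chance of the desired outcome = {p ** n:.2f}")
# 3 well-defined steps  -> about 0.73
# 10 ill-defined steps  -> about 0.35 (and ill-defined steps are likely to have a lower p as well)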


Normal Chaos

 

It is the recognition that the world in which we operate is an open system where all our plans are liable to disruption by both internal and external factors. It recognises that, while we may try to isolate our systems and processes from external factors, they are always liable to the effects and disruption of such influences. The Nobel Prize-winning physicist Ilya Prigogine called these spaces "islands of order in a sea of disorder".

Normal Chaos recognises that as our systems and processes become more complex, they are likely to produce unexpected results; they produce patterns of activity that the inexperienced are likely to see as chaotic (disordered), while more experienced practitioners are more likely to see the patterns in play. One of the features of such systems is that they produce a regular outcome multiple times through the mechanism of dynamic stability: these repeated results can be mistaken for the system being stable. This means that when the system unexpectedly produces a different result, it takes the people affected by surprise. These surprises are often seen as Black Swan events when in fact they are just emergent events to which people have been blinded by the induction fallacy. And so, we start a new way of thinking.


Normalisation of Deviance

The acceptance of events that are not supposed to happen.

Diane Vaughan's book: "The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA", The University of Chicago Press (1996).
 

(Vaughan, 1996:400): the stability of the scientific paradigms and their resistance to change ... to understand how workers construct risk, we opened up the black box of scientific practice, showing the ambiguity, the conflicting standards, the negotiations, and the disagreements about the nature of evidence and its meaning.


(Vaughan, 1996:401): Kuhn identifies "normal science" as a 'resolution of anomalies', the 'representation of chaos in an orderly fashion' ... "persistence of paradigms" ...


First, a paradigm endures because it is more successful than its competitors ...
Second, the paradigm determines what is a problem and what rules and methods are used to examine it (framing) ... these rules limit both the nature of the acceptable solutions and the steps by which they are obtained (epistemology and methodology) ...
Third, specialisation narrows the worldview, creating rigidity and resistance to paradigm change.


[p.402] ... the success of a paradigm revolution can depend not only on converting tacit knowledge that is subjective and intuitive into a form that is acceptable under the prevailing rules of the game, but also - like most successful revolutions - on power.


One-Reason Decision-making

 

This heuristic relies on just a single cue to make a decision (Gigerenzer and Goldstein, 1996; Gigerenzer and Goldstein, 1999).

 

“One-reason decision making” is a label for a class of fast and frugal heuristics that base decisions on only one reason. These heuristics do not attempt to optimally fit parameters to a given environment; rather, they have simple structural features and “bet” that the environment will fit them.

 

Decision-making theories typically assume that all relevant factors (reasons) are considered during the process that leads to the final decision. Yet we often have to make decisions that involve a lot of unknowns. In these situations, we often rely on only one cue, as this allows for fast, frugal and accurate decisions: this fits the ETTO principle of decision-making.
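A minimal sketch in the spirit of Gigerenzer and Goldstein's "Take The Best" heuristic (the options, cue names, cue ordering and cue values below are invented purely for illustration):

# Sketch of one-reason decision-making: cues are checked in an assumed order of
# validity and the FIRST cue that discriminates decides; nothing else is weighed.
options = {
    "Option A": {"is_capital": True,  "has_airport": True},
    "Option B": {"is_capital": False, "has_airport": True},
}
cue_order = ["is_capital", "has_airport"]  # assumed best-first ordering

def take_the_best(a, b):
    for cue in cue_order:
        va, vb = options[a][cue], options[b][cue]
        if va != vb:
            return (a if va else b), cue  # the single discriminating cue decides
    return None, None  # no cue discriminates: guess

choice, reason = take_the_best("Option A", "Option B")
print(f"Chose {choice} on the single cue '{reason}'")

The "bet" is that the environment is structured so that the first discriminating cue is usually good enough; no weighing or combining of the remaining cues takes place.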


Optimism Bias

The tendency to be over-optimistic, overestimating the likelihood of favorable and pleasing outcomes being achieved.

This is related to Wishful Thinking, that is, "people's preferences for future outcomes affect their assessment of the events" (Hogarth, 1987), and to Pro-innovation bias (the tendency to have an excessive optimism towards an invention or innovation's usefulness throughout society, while often failing to identify its limitations and weaknesses).


Overton Window

The American policy analyst Joseph P. Overton suggested that the political acceptability of an idea depends on whether it falls within a given spectrum set by public opinion at the time. This window can be manipulated to stifle debate.

I use the term to apply also to the spectrum of opinions acceptable within an organisation's culture.

Another term used within the literature on organisational failure is "taboo subjects".

Perfect World Paradigm


The basic proposition of the Perfect World paradigm is that if we recruit the perfect people, produce perfect plans, train them perfectly, supply them with exactly the right resources (including perfect, unambiguous information) and execute the resulting plan flawlessly (eliminating all slips and lapses), then the desired outcome will be delivered. Within this paradigm is the belief that individuals should be able to learn, retain and use the knowledge they require perfectly. All of this perfection is then supported by having perfect foresight, leading to individuals being blamed and punished where they fail to achieve these standards. Embedded within this construct is the desire to remove uncertainty and to control the world around us.

The label Perfect World paradigm is used to reflect the phrase often heard when discussing failure, that is, "but in a perfect world …". At this point we need to ask whether this perfect paradigm could ever hold true. My answer would be both "yes" and "no"! I see the dividing line between yes and no coming down to the granularity (in Normal Chaos terminology, "scale") of the criteria used to judge perfection and the paradigm's practical utility.

Performance Indicators

 

  • Leading

 

At its simplest, a leading indicator suggests whether you are on track to achieve some output or outcome.

It provides you with the opportunity to adjust your activity, since leading indicators offer forward-looking insights and predictions.

 

  • Lagging

Lagging indicators report what has happened.


Practical Drift

“the slow steady uncoupling of local practice from written procedure” (Snook, 2000:24); locally efficient procedures acquired through practice gain legitimacy through unremarkable repetition (Snook, 2000:184).

Also see: Normalisation of Deviance
 


Practical Utility

Corley and Gioia (2011) discussed what constitutes a theoretical contribution and the utility of academic work. They distilled the existing literature on theoretical contribution into two dimensions: originality and utility.

They break utility into practical and scientific, the latter being of use to academics.

Practical utility is seen as arising when theory can be directly applied to the problems practicing managers and other organizational practitioners face… through “the observation of real-life phenomena, not from ‘scholars struggling to find holes in the literature’”… such a practical problem focus is a good way to develop theory per se. Thus, theory directed at practical importance would focus on prescriptions for structuring and organizing around a phenomenon and less on how science can further delineate or understand the phenomenon.


Production Pressure

 

Production pressure occurs when there is an imbalance between production and safety. This can occur when leadership overly values production, such that the emphasis is placed upon meeting the work demands, schedule or budget, rather than working safely.

In 1993 Eric Hollnagel talked of "a drift towards failure as defences eroded in the face of production pressure".

Barry Turner (1994:216) talks of the "pressure to sustain production".

Charles Perrow (1999:379) says "they are questions of technique and management, and they are questions of humanising work and finding ways to make the drive for efficiency compatible with safety and culture. But they rarely questioned how important efficiency goals should be in risky systems, and who has the power to impose the goals, and their contribution to production pressure".

In her 1996 book, Diane Vaughan attributed the loss of the Space Shuttle Challenger to production pressure.


Red Queen Effect

 

This term comes from 'Through the Looking-Glass' and refers to "running as fast as you can just to stay in the same place". It has come to refer to the energy needed to maintain a system in a condition of dynamic stability.


Requisite Variety

 

The 'law of requisite variety' is an important principle in cybernetics. Simply stated, 'only variety can control variety' (Ashby, 1956:207): "variety absorbs variety", and this defines the minimum number of states necessary for a controller to control a system of a given number of states.
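As a toy illustration only (the disturbances and responses below are invented examples of mine, not Ashby's), the counting version of the law says a controller needs at least as many distinct responses as there are distinct disturbances it must counter:

# Toy sketch of requisite variety: count the distinct disturbances a system
# faces and the distinct responses the controller can make. Both lists are
# illustrative assumptions.
disturbances = {"too hot", "too cold", "power surge", "sensor failure"}
responses = {"cool down", "heat up", "trip breaker"}

if len(responses) >= len(disturbances):
    print("Controller variety is sufficient: regulation is at least possible.")
else:
    shortfall = len(disturbances) - len(responses)
    print(f"Controller lacks variety: at least {shortfall} disturbance(s) cannot be regulated.")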


Risk Discourse

Ortwin Renn sees risk discourse as coming in three varieties:

  • The first is a discourse based on fear.

  • The second is a discourse based on acceptance where the conversation is based on the opportunities that arise from taking the risk.

  • The third and final discourse identified by Renn is a design discourse. He sees this as being a form of deliberation for defining and specifying the most appropriate route for assessing and managing a given risk. (This is what I use the term to mean.)

 

[Here we need to differentiate between discourse and advocacy. In this context I would use the term discourse to mean the exchange of thoughts and ideas [i.e. having a conversation] designed to establish the 'truth'. Advocacy, on the other hand, is about winning an argument (convincing others to adopt your point of view): the downside of advocacy is that it does not lead to cross-understanding.]

Renn, O. (2008:65-66), Risk Governance, Earthscan.


Seat of Understanding

 

Diane Vaughan provides the phrase which I have defined (based on work by Klein) to mean "having the training, knowledge, experience and current data required to make the appropriate judgements."

Scott Snook provides a much more colourful phrase to describe those who lack this level of understanding. He used the phrase "pigs looking at watches". This alludes to intelligent creatures not knowing what they are looking at.

Utility of Knowledge


 

... as described in Corley, K.G. and Gioia, D.A. (2011), “Building Theory About Theory Building: What Constitutes a Theoretical Contribution?”, Academy of Management Review, Vol.36, No.1, pp.12-32.

 

  • Practical utility arises when theory can be directly applied to the problems practising managers and other organizational practitioners face [goes back to Kurt Lewin] … through “the observation of real-life phenomena, not from ‘scholars struggling to find holes in the literature’”…  such a practical problem focus is a good way to develop theory per se. Thus, theory directed at practical importance would focus on prescriptions for structuring and organizing around a phenomenon and less on how science can further delineate or understand the phenomenon.

 

  • Scientific utility is perceived as an advance that improves conceptual rigor or the specificity of an idea and/or enhances its potential to be operationalized and tested… Theory can advance science by providing cohesion, efficiency, and structure to our research questions and design… In a very practical sense, good theory helps identify what factors should be studied and how and why they are related.

"Seven Whys"

(Sometimes framed as the '5 Whys')

 

This is a technique used to determine the cause-and-effect relationships that lead to a failure. Its purpose is to determine the root cause of a problem by repeatedly asking the question "Why?". Each answer forms the basis of the next question. The "seven" in the name derives from an anecdotal observation of the number of iterations needed to resolve the problem and is consistent with the "magical number 7".

 

Wikipedia provides an example based on the '5 whys': a vehicle that will not start.

  1. Why? – The battery is dead. (First why)

  2. Why? – The alternator is not functioning. (Second why)

  3. Why? – The alternator belt has broken. (Third why)

  4. Why? – The alternator belt was well beyond its useful service life and not replaced. (Fourth why)

  5. Why? – The vehicle was not maintained according to the recommended service schedule. (Fifth why, a root cause)

It claims to have found the root cause after five whys: I disagree. My questioning would go on to ask:

6. Why was the vehicle not maintained properly? (Now we are moving from technical questions to ones concerning management.)

7. I see several more 'whys' needing to be asked if we are to get to the systemic root of the problem.

 

This approach can be used to counter one-reason thinking that ascribes the failure to the first fault identified.
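As a small sketch of the mechanics, the questioning can be modelled as walking a chain of effect-to-cause links; the first five links simply encode the vehicle example above, while the sixth and seventh answers are hypothetical additions of my own to show the move into management territory:

# Sketch: the "whys" as a chain of effect -> cause links. Links 6 and 7 are
# hypothetical, illustrating the shift from technical to systemic causes.
cause_of = {
    "vehicle will not start": "battery is dead",
    "battery is dead": "alternator is not functioning",
    "alternator is not functioning": "alternator belt has broken",
    "alternator belt has broken": "belt was beyond its service life and not replaced",
    "belt was beyond its service life and not replaced": "vehicle was not maintained to schedule",
    "vehicle was not maintained to schedule": "no owner or budget for preventive maintenance",  # hypothetical
    "no owner or budget for preventive maintenance": "maintenance is treated as a cost, not a risk control",  # hypothetical
}

problem = "vehicle will not start"
count = 0
while problem in cause_of and count < 7:
    count += 1
    problem = cause_of[problem]
    print(f"Why #{count}: {problem}")
print(f"Deepest recorded cause after {count} whys: {problem}")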


Verbatim Compliance

 

You cannot possibly write a rule to cover every eventuality.

… “verbatim compliance” (Hirschhorn in Roberts, 1993:148) … Hirschhorn concludes that management needs to develop two classes of procedure.

The first is ‘broad in scope and applied to a wide range of circumstance’ and strictly applied.

The second is detailed and specific; employees are free to vary it ‘as long as they fulfil its intention’.

 

Hirschhorn, L. (1993), “Hierarchy versus Bureaucracy: the case of a nuclear reactor”, in Roberts, K.H. (Ed.), New Challenges to Understanding Organizations, Macmillan, New York, pp.137-149.

 

This links to Schulman’s (1993:357-8) paper on the viability of “verbatim compliance”. He argues that while much of the previous organisational theory would mean that ‘it is reasonable to expect a high degree of rigidity and formal rules’, he advocates caution in the use of such an approach.

Schulman, P.R. (1993), “The negotiated order of organizational reliability”, Administration & Society, Vol.25, No.3, pp.353-372.


Wilful Blindness

 

Wilful blindness (sometimes called ignorance of law, willful ignorance, contrived ignorance, intentional ignorance or Nelsonian knowledge) is a term used in law to describe a situation in which a person seeks to avoid civil or criminal liability for a wrongful act by intentionally keeping themselves unaware of facts that would render them liable or implicated.


Wilful Ignorance

 

A decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.
