Stop Plan and Implement – Start Experiment and Learn
Updated following very helpful comments from Brian Rivera and Sonja Blignaut.
We all had plans for 2020 but not many of them will have survived past March. So what can we learn from this? How should we think differently about planning in the future?
I have been taught that in order to achieve something you have to have a clear goal and a plan to get from A to B. This has worked for the majority of goals I have set over the course of my life. It was the approach I used when I applied to join the Corps and the one I used to plan my career.
But I am starting to challenge this assumption based on recent events and what I have learnt over the past couple of years.
I still believe that having a plan is better than no plan. If you don’t have a plan, you haven’t thought about what you need to do to get from A to B so your chances of success are limited. It’s not the plan that is valuable though, it is the act of ‘planning’, of thinking about the future and what you need to do in order to achieve the goal.
Planning works well when the outcomes of what you do are reasonably predictable. If you want to build a bridge, it’s a good idea to have a plan based on some sound engineering principles.
The problem I see these days is that we apply this framework to everything, in particular complex environments where longer-term plans rarely survive.
Complexity vs Complicated
The world is too complex for us to understand. We fool ourselves by thinking we can comprehend it but the sheer volume of complexity makes it impossible.
Complexity is often mistaken for something that is complicated. This isn’t the case. They are different and clear lines need to be drawn between the two to avoid confusion.
The Cynefin Framework, created by Dave Snowden, highlights the key difference between these two terms, as well as two other domains (clear and chaotic) that we will all be familiar with.
Clear – If something is clear, the cause and the effect are easy to determine and highly predictable. This is the domain of the bureaucrat where best practice is an appropriate response. If you have a car crash and need to inform your insurance company, you will call them and be directed to speak to a person who has been trained to deal with that sort of issue. The system will ’sense’ what you’re trying to do by asking you some questions and getting you to respond using the telephone keypad. This allows the system to ‘categorise’ you and make sure you get directed to an insurance agent who has been trained to deal with these cases. The relationship between cause and effect is simple so we just have to see which category of response it fits into. Tasks in this domain gradually become automated as technology is developed.
Complicated – If something is complicated, the link between the cause and the effect is often difficult to establish. There could be a range of answers to the problem and all of them could be appropriate responses. The complicated domain requires experts with experience in the field who are able to establish what is going on and provide appropriate responses. They are often university trained, and their training is built on years of scientific discovery and lessons learnt by our forefathers. Again, like the clear domain, the outcome is reasonably predictable, so ‘plan and implement’ works well here.
For example, an engineer is tasked to build a bridge. He will assess the environment, look at the distance that needs to be covered by the bridge, the geology of the ground, the materials he has, the budget etc. He will create a plan based on sound engineering principles and the bridge will be built.
Complexity – In the domain of complexity, the relationship between cause and effect exists but it is extremely hard to establish and often unclear. Clarity only really comes with hindsight. This is the domain of wicked problems and vicious learning environments where making mistakes is the norm. This domain characterises the world in which we live. Certain elements of it might be simple or complicated depending on our experience but understanding the entire picture is complex.
Imagine you enter a room filled with furniture and obstacles and are asked to cross it and go through a door on the other side. You take a look at what’s in front of you and start to mentally map out a route. Then someone turns off all the lights, so it is pitch black and all the furniture and obstacles move to different places. The mental snapshot that you took just before the lights went out is no longer relevant. The only way you can get across to the other door is by gently probing to find out what is in front of you. If your hand touches something solid, you will have to feel your way around the obstacle to work out how to get over it or under it. You have to repeat this process hoping that you’re gradually getting closer to the exit door, but the reality is that you don’t know. Imagine that you’re halfway through the room and then someone turns on the lights and illuminates that path you’ve taken. This is the clarity that comes with hindsight. It’s useful but to judge an individual’s performance in the complex environment with the clarity of hindsight is unfair.
The coronavirus presented governments with a complex problem. The speed at which the virus was spreading wasn’t clear because it took time for us to establish a reliable testing regime. Track and trace didn’t work very well. We thought we’d overcome the worst of it in the UK when a second wave hit in December. The UK Government has made lots of mistakes, but that is to be expected in a complex environment because in reality you have a very poor understanding of what is going on. People judge performance based on hindsight, which is unfair.
The way to deal with a complex environment is first to accept that there are enormous gaps in your knowledge and that you are going to make mistakes. That’s fine provided you learn quickly. The most appropriate response to this environment is to experiment. Probe, try things out, invest small amounts of time and resource into ‘giving things a go’ and create small-scale feedback loops so that you can learn quickly. This is the only way the path forward becomes clear: you learn what works and what doesn’t. Your feedback loops give you information and tell you where to go next.
This is the world we live in. This is why plans fail. This is why we need to start thinking about experimentation and learning.
Chaos – For completeness, the chaotic environment is one where there is no clear relationship between cause and effect. You have to act entirely on instinct; only later can you work out whether it was the right course of action.
Imagine you are caught up in a terrorist attack: what would you do? You’d probably run away from the gunfire and the shouting and seek safety. You’re acting first because timing is critical. If you don’t do something, it could be disastrous. Soldiers are taught how to operate in the chaotic domain. When the enemy starts to shoot at you, the first thing you do is take cover and deal with the immediate threat. Whilst the soldiers are returning fire at the enemy, the Commander takes a step back to work out what is going on. They move into the complicated domain to come up with a plan, which they execute together as a team. They move out of the chaotic domain and into the complicated because working together and attacking the enemy is something they know how to do, a problem they know how to solve.
The Cliff – The cliff highlights the ease with which clear domains can tip into chaos. Many of you will have seen the awful footage of the Bradford City Stadium fire. This should have been a clear situation, the domain of best practice, but complacency had crept in: fire exits had been locked shut, and litter had been left to accumulate under the stands. The fall from clear to chaotic is sudden, which is why it is illustrated as ‘the cliff’.
Confusion or Aporetic – This is the final part of the model. Confusion is where most people (including you and me) find ourselves most of the time.
It is where you are if you’re using the wrong approach in the wrong domain. It is where your experience becomes an anchor rather than an asset.
For example, an Intelligence Officer deployed to Iraq will gain an understanding of the situation on the ground: the religious and tribal dynamics, the feeling towards coalition forces and so on. If they were to apply those lessons directly to Afghanistan, they’d make some mistakes. This is easy to understand, as Afghanistan is different from Iraq.
But it’s amazing how often decision-makers get this wrong. Following the invasion of Iraq, there was an assumption that the country would be able to transition to some sort of democracy. The Iraqi people had been living under a dictatorship for years. There was no common understanding of the concept of democracy. We were surprised by the chaos that followed the removal of Saddam’s regime but we really shouldn’t have been. This is an example of how our intuitive understanding of democracy along with all of the supporting institutions, freedom of speech and press etc led us to make catastrophic decisions on the governance of another country.
When I left the Corps, I was taught how to implement an Operational Excellence model that was based on the principles of Lean Manufacturing. This model relied on creating standards, the one best way of doing something. It was really effective in the clear and complicated domains, which was the world of nuclear manufacturing. I believed in this approach because it had been successful and had driven huge levels of performance improvement. The mistake I made was that I tried to apply it to the domain of complexity, and I was continuously surprised by the fact that it didn’t work! This is what it looks like when you’re operating from a position of confusion. You’re using the lessons you’ve learnt in one area and trying to apply them to another, and it simply does not work.
The key is to recognise the limits of your experience and be clear on the line between what you understand and what you don’t. View your experience as an asset in some domains and an anchor in others. The faster you can identify which domain you are in, the quicker you can work out the most appropriate response and take the correct course of action.
Most of us have been taught how to plan and implement – we need to unlearn this approach and start thinking about ‘creating experiments and learning’.