In a recent Lean Tech & AI Journal article, I suggested that future-state thinking can quietly become fiction when applied in complex adaptive systems.
The problem is not lean thinking itself. It is the assumption that the system behaves in a stable and predictable way when it does not.
In ordered environments, future-state design works beautifully. But in complex environments, something subtle happens: the system reorganizes as we interact with it. The moment we draw the map, the terrain is already shifting.
Lean is often interpreted through its tools, but its real power lies in its principles. Principles such as going to the gemba, developing people’s capability, and applying scientific thinking do not disappear outside ordered environments. They change form.
The question is not whether lean applies in complexity, but how those principles must adapt to the nature of the system we are operating within. This raises an important practical question: How do we know what kind of system we are operating in before we choose improvement methods?
This is where the Cynefin framework, developed by Dave Snowden, becomes extremely useful. Cynefin is not a decision tree or categorization scheme. It is a sensemaking framework designed to help leaders understand the nature of the environment they are operating within so that they can take appropriate action.
The Cynefin framework contains five domains that sit within three broader ontological states: order, complexity, and chaos.
The five domains represent different types of system dynamics and, therefore, require different leadership behaviors and improvement approaches. The five domains are: clear, complicated, complex, chaotic, and a central aporetic (confused) domain.
“Cynefin” is a Welsh word that refers to the many factors in our environment and experiences that influence how we perceive and understand situations. This is an important insight. Cynefin does not attempt to simplify reality into rigid categories. Instead, it provides a way to make sense of situations where uncertainty, ambiguity, and multiple interacting variables are present.
Most leaders do not begin with clarity. They begin in what Cynefin calls the aporetic or confused domain. This is the state we enter when confronted with ambiguous environments where the nature of the problem itself is not yet clear. We do not yet know what kind of system we are dealing with, or which approaches are appropriate.
The first task of leadership is, therefore, sensemaking. Once we begin to understand the environment, we can move into one of the other domains where different approaches become appropriate.
Two Cynefin domains sit within the ordered state — clear and complicated — and both environments are characterized by stable cause-and-effect relationships.
The clear domain contains known knowns. Cause-and-effect relationships are obvious, repeatable, and predictable.
Decision making follows the pattern: Sense → Categorize → Respond
Current best practices exist and can be reliably applied. Examples include routine operational work such as:
This is where many familiar lean tools perform extremely well:
In the clear domain, lean principles manifest as discipline and adherence.
Going to the gemba confirms standards are being followed and abnormalities are visible.
Developing people focuses on building capability in executing and sustaining known best practices.
Scientific thinking is applied through incremental improvement within stable boundaries.
Future-state design works effectively here because the system behaves predictably.
The complicated domain contains known unknowns. Cause-and-effect relationships exist but are not immediately obvious. They require analysis and expertise.
Decision making follows the pattern: Sense → Analyze → Respond
Examples include:
Here we rely on expert analysis, modeling, and structured problem solving. Lean methods still apply, but improvement depends on technical expertise rather than simple observation.
Here, lean principles shift toward structured inquiry.
Going to the gemba is not just observation but deep inquiry and engagement with the system.
Developing people means building expertise, analytical capability, and shared understanding across disciplines.
Scientific thinking becomes more formal, often supported by models, simulations, and advanced problem-solving methods.
The complex domain contains unknown unknowns. Here, cause-and-effect relationships cannot be predicted in advance. They only become clear in retrospect. In complexity science this phenomenon is known as retrospective coherence, the ability to explain events after they have occurred, even though they could not have been predicted beforehand.
Decision making therefore follows a different pattern: Probe → Sense → Respond
Complex systems contain:
Examples include:
This is where many organizational transformation efforts actually operate. And this is precisely where traditional future-state design becomes fragile, because the system adapts to our interventions, and long-range planning becomes speculative.
Instead of designing a fixed destination, leaders must operate directionally through safe-to-fail experiments, observing the patterns that emerge.
This is where the Vector Theory of Change, described in my last article, becomes valuable. Rather than defining a fixed end state, leaders define a direction of travel and navigate through iterative target conditions that allow learning to occur as the system evolves.
In the complex domain, lean principles do not disappear, but they operate differently.
Going to the gemba becomes less about confirming known processes and more about understanding interactions, relationships, and emerging patterns.
Developing people shifts from teaching solutions to building adaptive capability, the ability to run experiments, interpret signals, and adjust behavior in real time.
Scientific thinking remains central, but instead of executing predefined experiments, teams run safe-to-fail probes designed to generate learning rather than validate hypotheses.
In this sense, PDCA does not vanish in complexity; it becomes faster, more distributed, and less deterministic.
The chaotic domain contains unknowables. There is no perceivable relationship between cause and effect.
Decision making follows the pattern: Act → Sense → Respond
Examples include:
In this domain the goal is not optimization or improvement. The goal is stabilization. Leaders must act quickly to restore some degree of order before the system can move into a more manageable domain.
In chaotic environments, lean principles compress into rapid action.
Going to the gemba is immediate and direct: act, observe, and decide what to do next.
Developing people is less about learning and more about enabling decisive action. This is where directive leadership becomes necessary. There is no time for extended deliberation.
Scientific thinking is reduced to rapid cycles of action and feedback, often under extreme time pressure. PDCA becomes hours or minutes, not days or weeks.
One challenge for lean practitioners is that many of the tools we use were developed in ordered environments, particularly manufacturing. As a result, many lean failures are not failures of lean thinking, but failures of context recognition.
Lean itself does not fail in complex environments. What fails is the practice of reducing lean to tools designed for ordered systems while its underlying principles — observation, learning, and capability development — are not adapted to the nature of complexity.
The Toyota Production System evolved largely in environments where cause-and-effect relationships were stable enough to observe and improve through structured experimentation. Assembly lines, machining processes, and logistics systems typically operate in the clear or complicated domains. In those environments:
However, when organizations attempt to apply lean thinking to areas such as strategy, product development, culture change, or AI adoption, the nature of the system often shifts. These environments frequently operate in the complex domain.
When practitioners mistakenly assume they are in an ordered environment, they attempt to apply tools designed for stability to systems that are inherently adaptive. This often leads to:
When the system inevitably adapts, the plan quickly becomes outdated. The map stops reflecting the territory — echoing Alfred Korzybski’s famous observation that “the map is not the territory.”
Cynefin helps leaders avoid this mistake by asking a simple but powerful question: What kind of system are we actually operating in?
An important concept when selecting improvement approaches is bounded applicability, which recognizes that every method, framework, or tool has limits, and those limits are highly dependent on context. A method may work extremely well in one type of system and perform poorly in another. This does not mean the method is wrong. It means the context has changed.
Bounded applicability becomes visible when the cost-benefit ratio of applying a method begins to deteriorate. Practitioners often respond by applying the method with greater intensity and rigor (more analysis, more tools, more effort) when in fact the signal may be telling us to do something different. We may simply be approaching the boundary where that method is no longer appropriate.
At this point the question is no longer: “How do we apply the tool better?”
The more useful question becomes: “Are we using the right type of tool for this environment?”
This is precisely where the Cynefin framework becomes valuable. It helps leaders recognize when the nature of the system has shifted and when a different class of approaches may be required. Cynefin reminds us that different domains favor different approaches:
| Domain | Decision Pattern | Typical Methods |
| --- | --- | --- |
| Clear | Sense → Categorize → Respond | Standardization and best practices |
| Complicated | Sense → Analyze → Respond | Expert analysis and engineering |
| Complex | Probe → Sense → Respond | Experiments and emergent practices |
| Chaotic | Act → Sense → Respond | Stabilization |
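For readers who think in code, the table above can be sketched as a simple lookup. This is purely an illustration of the mapping — the domain names, decision patterns, and method descriptions come from the table, while the data structure and function names are hypothetical:

```python
# Hypothetical sketch of the Cynefin domain -> approach mapping.
# Only the contents of the table are from the article; the structure is illustrative.
CYNEFIN_DOMAINS = {
    "clear":       ("Sense -> Categorize -> Respond", "standardization and best practices"),
    "complicated": ("Sense -> Analyze -> Respond",    "expert analysis and engineering"),
    "complex":     ("Probe -> Sense -> Respond",      "experiments and emergent practices"),
    "chaotic":     ("Act -> Sense -> Respond",        "stabilization"),
}

def decision_pattern(domain: str) -> str:
    """Return the decision pattern associated with a Cynefin domain."""
    pattern, _methods = CYNEFIN_DOMAINS[domain]
    return pattern

print(decision_pattern("complex"))  # Probe -> Sense -> Respond
```

The point of the sketch is the same as the table’s: the appropriate response is selected by first recognizing the domain, not by defaulting to a favorite method.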
What changes across these domains is not the presence of lean thinking, but how it is expressed. Bounded applicability reminds us that methods aligned with one domain may not perform well in another. Recognizing the boundaries of our methods is not a weakness in leadership. It is often the first signal that the nature of the system has changed.
Another important insight from Cynefin is that systems do not remain in a single domain permanently. Organizations frequently move between domains as conditions evolve. For example, a new initiative may begin in the complex domain, where uncertainty is high and experimentation is required. As patterns begin to emerge, the work may move into the complicated domain, where expertise and analysis refine solutions. Once stabilized, aspects of the system may enter the clear domain, where practices can be standardized and optimized.
Movement across domains, however, does not always follow such a neat progression. Organizations can be pushed suddenly into the chaotic domain, often as the result of a critical incident or major disruption. In these situations, there is no time for analysis. The appropriate response pattern becomes: Act → Sense → Respond
Leaders must first take decisive action to stabilize the system, then observe the effects of those actions before determining the next steps. In many ways this resembles an accelerated learning cycle: a compressed form of PDCA in which action, feedback, and adjustment occur rapidly under conditions of high uncertainty.
As stability begins to return, the system typically moves back into the complex domain, where teams can begin running safe-to-fail experiments to better understand emerging patterns.
Improvement methods must therefore evolve as the system evolves. Recognizing these domain shifts is a critical leadership capability.
It is also important to recognize that not every problem will eventually migrate into an ordered domain. Many organizational challenges, such as culture, strategy, or innovation, remain inherently complex. This means we cannot simply apply tools designed for ordered systems to complex environments.
Understanding where you are at any moment in time is essential for selecting the right approaches and making better decisions.
Product development illustrates how these domain shifts occur. When a company begins exploring a new product concept, it typically operates in the complex domain. Customer needs may be unclear, technologies may be evolving, and interactions between components are uncertain.
In this stage, the most effective approach is to run safe-to-fail experiments:
This follows the pattern of Probe → Sense → Respond.
As learning accumulates and the product architecture becomes clearer, work begins to shift into the complicated domain. Engineering expertise becomes central as teams analyze trade-offs, refine designs, and optimize performance.
Once the product reaches production and processes stabilize, much of the work moves into the clear domain. The focus then shifts toward:
In other words, the system transitions from emergence to optimization. Recognizing these shifts allows leaders to apply the right methods at the right time: standardizing too early suppresses discovery, while experimenting indefinitely prevents stability.
A product team exploring a new AI-enabled feature may not know what customers actually value at the outset. So rather than defining a detailed future-state design, the team might release small experimental features to a subset of users, observe behavior, and adapt based on what emerges.
Going to the gemba in this context means engaging directly with user interaction data and real-world usage.
Developing people means enabling teams to interpret ambiguous signals and make decisions without waiting for certainty.
This is lean thinking operating in a complex domain, not through standardization, but through disciplined learning.
Operating effectively in the complex domain requires another important leadership capability: weak signal detection. Because patterns in complex systems only become coherent retrospectively, leaders must pay close attention to early, albeit weak, signals that may indicate emerging changes.
Weak signals — which are easy to dismiss because they often appear insignificant at first — can appear as:
Another important factor to consider in complex environments is the presence of what complexity practitioners sometimes refer to as “dark constraints.” Constraints shape the behavior of systems, but not all constraints are visible or easily understood. Dark constraints are influences that affect system behavior without being immediately apparent to those operating within the system, including:
Because dark constraints are often invisible, they can shape system behavior in ways that are difficult to explain in the moment. Their effects may only become visible retrospectively, once patterns begin to emerge.
Organizations that develop the ability to detect weak signals and become curious about the constraints shaping system behavior are better positioned to navigate complex environments. In complex systems, the challenge is rarely the absence of signals. It is our willingness to notice them.
Before selecting improvement methods or launching transformation efforts, leaders must first understand the nature of the system they are trying to change. The Cynefin framework provides a way to make sense of that system.
Lean provides principles for how to engage with it. The critical insight is that those principles do not remain static. They evolve depending on whether we are operating in order, complexity, or chaos:
The failure is not in lean thinking, but in applying it without regard to context.
When combined, Cynefin and lean do something neither can do alone: they allow leaders not just to improve systems, but to navigate them. And in environments where the terrain is constantly shifting, navigation matters more than prediction.