Lean practitioners have long been taught that improvement culminates in a designed “future state.” We map the current condition, identify waste and constraints, and then create a future-state value-stream map (VSM) that shows how the system should operate.
In stable and ordered environments, this works beautifully, but in complex adaptive systems, something subtle happens. The future state begins to drift the moment we draw it, and if we are not careful, the future-state map becomes less of a navigational aid and more of a comforting fiction.
This caution is particularly relevant now as organizations rush to establish fixed roadmaps for the application of AI in environments where the underlying technology, regulatory landscape, competitive dynamics, and use cases are evolving at extraordinary speed. Designing a detailed “AI future state” under such conditions can create the illusion of certainty in a domain defined by rapid adaptation.
This is not an argument against lean. It is an argument for remembering what lean already knows.
Complex Adaptive Systems
A complex adaptive system is a network of interdependent agents — people, teams, technologies, policies — interacting in ways that continuously reshape the system itself. Behavior emerges from those interactions rather than from central design. Cause and effect are not stable or linearly predictable; they only make sense in retrospect.
Small changes can produce disproportionate impact, constraints shift over time, and the system adapts as it is observed and influenced. In such environments, control gives way to navigation, and progress depends less on detailed prediction and more on disciplined experimentation and directional coherence.
Dave Snowden, whose Cynefin framework has influenced many lean leaders wrestling with complexity, describes the complex domain as “one where all actors and objects are entangled in many ways that cannot be fully understood. Everything is connected, but the impact of change cannot be predicted.”
He goes on to say, “One simple heuristic is to ask: ‘Does the evidence support conflicting hypotheses about what we should do, and we can’t resolve what is the right thing to do within the time frame for action?’ If the answer is yes or maybe, then it is complex.”
A Lesson from Toyota: Vision, Not Blueprint
Jeffrey Liker recently recalled seeing a future-state VSM created during the planning of Toyota’s truck plant in San Antonio. It was hand-drawn on A4 paper. No numbers. No detailed execution plan. It was conceptual, a directional guide for material and information flow.
That detail matters. The map was not a deterministic plan. It was a sense of direction. It answered a simple but powerful question: Which mountain are we trying to climb?
The danger arises when we mistake the mountain for the path.
In complex environments — software systems, supply chains under geopolitical tension, healthcare delivery, innovation ecosystems — cause and effect are not stable. The system reorganizes as we interact with it. As Snowden describes, coherence is retrospective. We understand why something worked only after it has happened.
In these contexts, a highly specified future state assumes something that does not exist: Stability.
The Elevation Problem: 30,000 Feet vs. Ground Level
An important nuance is elevation. At 30,000 feet, imagining a simple future state is not only acceptable but may be necessary. The error comes when we zoom in too quickly and assume we can design the future at the same resolution as the present. This distinction is critical.
At altitude, future state is directional. At ground level, it becomes presumptive.
Lean practitioners often oscillate between these two without realizing it. We move from high-level aspiration (“one-piece flow,” “pull,” “customer-first value creation”) to detailed box-and-arrow implementation plans that assume linear causality.
In ordered systems, this translation works. In complex ones, it breaks down quietly.
When the Map Becomes Political
When we freeze a future state in a complex domain, the map can stop being a decision aid and become a political artifact.
It becomes:
- A justification for budget
- A symbol of control
- A signal that leadership “has a plan”
But the system itself is already drifting.
At that moment, we are no longer navigating reality; we are defending an artifact. Lean never intended that.
The purpose of mapping is understanding. Before improvement, a more fundamental question must be answered: Do you know how you do what you do?
Current-state mapping surfaces:
- Constraints and bottlenecks
- Feedback loops and dependencies
- Shadow work and informal workarounds
- “Dark constraints” that do not show up in formal processes
- Where decisions are actually made
That knowledge is essential in any domain, ordered or complex. The question is not whether to map. The question is what we assume after we map.
Lean Already Knows This
Lean thinking has always contained the antidote to over-specified future states. We just don’t always apply it consistently.
- The North Star is intentionally unreachable.
- PDCA is a learning cycle, not execution theater.
- Lean focuses on the current condition and short-term target conditions, not distant fixed futures.
Mike Rother’s Improvement Kata, for example, does not begin with a five-year blueprint. It begins with:
- Understand the direction or challenge.
- Grasp the current condition.
- Establish the next target condition.
- Conduct experiments to get there.
Notice what is absent: a detailed long-term design.
Set a direction of travel, not a destination. Take a step. Observe what shifts. Learn. Re-orient. Repeat.
The Vector Theory of Change
The Vector Theory of Change reframes transformation from designing a fixed destination to setting a direction of travel. Instead of attempting to specify a detailed future state in advance, we establish a coherent vector (an orientation grounded in purpose and constraints) and move iteratively through short, safe-to-learn steps.
Each step generates feedback, reveals shifting patterns, and informs the next adjustment. In complex adaptive systems where outcomes cannot be fully predicted, change is achieved not through blueprint execution but through disciplined navigation: choose a direction, act, sense what emerges, and re-orient continuously.
In many ways, the Vector Theory of Change is simply PDCA expressed at the system level. A vector is defined by direction and magnitude, not a fixed endpoint.
In complex systems:
- You cannot reliably specify the destination.
- You can specify a direction of travel.
- You can amplify or dampen signals.
- You can run safe-to-fail experiments.
- You can monitor shifts in constraints.
The difference between a blueprint and a vector is subtle but profound.
A blueprint says: This is where we will land.
A vector says: This is the direction that makes sense given what we know now.
The blueprint assumes stability. The vector assumes adaptation.
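To make the metaphor literal, here is a loose sketch (an illustration only, nothing from the lean canon): a blueprint fixes a destination x* and plans the full path to it in advance, while a vector-based approach updates position one step at a time:

x_{t+1} = x_t + m_t · d_t

Here d_t is the direction chosen from what is known at step t, and m_t is a deliberately small, safe-to-learn step size. Both are re-chosen after every cycle of observation, which is PDCA written as an update rule.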
Complexity Is Not Chaos
One concern often raised is that abandoning detailed future states risks drift or lack of accountability. That concern is valid, but it misunderstands complexity.
Complex systems are not chaotic. They are constrained, but dynamically so. Patterns emerge. Signals strengthen and weaken. Local interactions produce systemic shifts.
The task of leadership in such environments is not prediction. It is sensemaking.
Mapping, when paired with sensemaking, becomes far more powerful.
Instead of asking, “How do we design the perfect future state?” we ask:
- “What constraints are shifting?”
- “What weak signals are emerging?”
- “What adjacent possible has opened up?”
- “What small nudge might produce disproportionate impact?”
This reframes improvement from execution to navigation.
Adjacent Possibles and Short-Term Target Conditions
Lean practitioners may recognize this reframing immediately.
Toyota Kata’s “next target condition” is essentially an exploration of the adjacent possible: what is achievable from the system’s current state without assuming distant predictability.
You do not leap to a five-year configuration. You move to the next stable foothold.
In lean’s own teaching materials, the visual progression from:
- Direction or challenge
- Current condition
- Next target condition
- Experiments
is a vector-based model.
The “mountain” metaphor matters here. We choose which mountain to climb (direction). But the path unfolds step by step, and obstacles emerge that cannot be fully anticipated in advance.
If we insist on drawing the entire path at the start, we risk optimizing for terrain that will shift beneath us.
When Future State Works — and When It Doesn’t
This is not an argument to abandon future-state thinking entirely.
In ordered domains — stable production systems, predictable demand patterns, tightly controlled technical environments — a designed future state can be extremely effective.
The error is domain blindness. If we are in an ordered domain, design makes sense. If we are in a complex domain, over-design becomes fragile.
The lean community is deeply experienced in structured improvement. The challenge now is to add domain awareness.
Not all systems behave the same way.
The Risk of Optimizing a System That No Longer Exists
Perhaps the greatest risk of rigid future-state thinking in complex environments is that by the time we implement the designed state, the system has already changed. We end up optimizing a system that no longer exists.
This is not theoretical. We see it in:
- Digital transformations that freeze architecture before understanding usage patterns.
- Healthcare redesigns that assume static patient flows.
- Supply chain “future states” that collapse under geopolitical volatility.
- Knowledge work systems where work reorganizes faster than governance updates.
- Enterprise AI programs that lock in architecture, governance models, or capability assumptions before the technology landscape stabilizes.
The lean toolkit remains relevant. But it must be applied with epistemic humility.
Returning to Lean’s Roots
If we strip away the artifacts and return to core lean principles, we find:
- Respect for reality
- Direct observation (genchi genbutsu)
- Iterative experimentation
- Learning over certainty
- Direction over perfection
The North Star was never meant to be reached. It was meant to orient.
The future-state map, when used well, was never meant to freeze the future. It was meant to provide direction. The moment we forget that distinction, we move from a learning system to a control system. And control systems struggle in complexity.
Toyota’s strength was never in predicting the future; it was in building capability to adapt faster than competitors.
A Provocation for Lean Leaders
For lean thinkers, the invitation is not to abandon VSM or future-state thinking. It is to ask a sharper question: Are we designing a destination, or are we setting a direction?
If your future state assumes:
- Predictable cause and effect,
- Stable constraints,
- Linear implementation, and
- Controllable outcomes,
you may be optimizing a system that is already reorganizing.
If, instead, your improvement work:
- Anchors in the current condition,
- Identifies shifting constraints,
- Chooses a direction of travel,
- Moves through short-term target conditions, and
- Treats plans as hypotheses,
you are not abandoning lean. You are practicing it at a deeper level.
The discipline required in complexity is not less than in ordered systems; it is different.
Final Thoughts
The debate is not future state vs. no future state. It is blueprint vs. vector.
Lean, at its heart, has always been about disciplined learning in the face of uncertainty. Complex adaptive systems simply force us to take that seriously.
The real question is not whether to draw a future state. It is whether we have the humility to treat it as a hypothesis rather than a promise.
When future state becomes fiction, return to direction.
When design becomes rigid, return to PDCA.
When the map becomes political, return to observation.
Lean does not break in complexity. It matures.