In previous articles in this four-part series, we explored why the impact of AI varies so widely across organizations. The technology itself is increasingly accessible and powerful, yet results remain uneven. Some teams see meaningful gains, while others struggle or stall. The difference is not the tools. As discussed earlier, technology is the easiest part of the equation, and behavior, the skills and habits needed to use AI effectively, must be deliberately built over time. That brings us to the final and often most overlooked factor in the equation: management.
Impact = Technology × Behavior × Management
Not management as gatekeeper or approver of work, but management as system designer. The way leaders design routines, standards, feedback loops, and escalation paths largely determines whether AI becomes a force for learning and improvement, or a source of confusion and wasted effort.
The Fast vs. Slow Paradox
AI dramatically accelerates thinking. With modern language models, an individual can generate ideas, draft explanations, explore alternatives, and test hypotheses in seconds. I have experienced this firsthand building lean coaching tools over the past year. It used to take me weeks to build something as simple as a basic website. Today, that same work can be done in hours, or at most a couple of days.
More recently, I took LEI’s Lean Lexicon and turned it into a working web application in roughly two days. It includes definitions, examples, images, one-point lessons, and an interactive prompt to help people explore concepts in context. That kind of speed would have been unthinkable not long ago. We will likely show it at the LEI Lean Summit in Houston, and I think people will like it.
This fast-thinking capability lowers the cost of experimentation and makes it easier to learn by trying.
At the same time, this speed introduces new risks. AI systems can be confidently wrong. They hallucinate. They produce plausible but flawed reasoning. I have seen a model generate a perfectly structured root-cause analysis that sounds authoritative but misses a critical physical constraint. If you do not catch it, you are moving fast in the wrong direction.
This tension resembles what psychologist Daniel Kahneman described as two modes of thinking: fast, intuitive System 1 thinking and slow, deliberate System 2 thinking.i Both are necessary. Problems arise when one dominates without the other.
AI behaves the same way. Used well, it accelerates exploration and learning. Used poorly, it creates noise. The management challenge is not choosing between fast and slow, but designing systems that intentionally combine both. The same speed that lets you build in days can just as easily scale a bad assumption.
Harnesses for AI and Humans
In practice, no one expects raw AI output to be sufficient on its own. Serious AI deployments rely on harnesses, structures that guide and constrain fast generation so it becomes useful. These include prompting, context engineering, access to reliable data, and feedback loops. Without them, AI produces inconsistent results. With them, the same technology becomes far more dependable.
I learned this through direct experience. When I first started building lean AI tools, raw model output was maybe 60% useful. Once I added structured prompts, Toyota-based problem-solving frameworks, and feedback mechanisms, the same model consistently jumped to 80-90% and even surpassed that level. The model did not change. The “harness” did. That is both a management insight and a technical one.
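To make the harness idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: `call_model` is a hypothetical stand-in for whatever LLM API you use, and the required sections are an assumed problem-solving format, not any vendor's or Toyota's actual template. The point is the structure, not the specifics: a structured prompt going in, a check on what comes back, and a feedback loop when the output falls short.

```python
# Sketch of a "harness" around a language model. The three pieces a raw
# model call lacks: a structured prompt, supplied context, and an output check.

REQUIRED_SECTIONS = ["Problem", "Root Cause", "Countermeasure"]

def call_model(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM call (any provider or local model).
    # Here it returns a canned A3-style answer so the sketch runs end to end.
    return "Problem: ...\nRoot Cause: ...\nCountermeasure: ..."

def build_prompt(problem: str, context: str) -> str:
    # Structured prompt: a role, the relevant context, and the expected format.
    return (
        "You are a lean coach. Use the context below.\n"
        f"Context: {context}\n"
        f"Problem statement: {problem}\n"
        "Respond with sections: " + ", ".join(REQUIRED_SECTIONS)
    )

def check_output(text: str) -> list:
    # Feedback mechanism: list any required section the model skipped.
    return [s for s in REQUIRED_SECTIONS if s not in text]

def harnessed_call(problem: str, context: str, retries: int = 2) -> str:
    prompt = build_prompt(problem, context)
    for _ in range(retries + 1):
        answer = call_model(prompt)
        missing = check_output(answer)
        if not missing:
            return answer
        # Feed the gap back instead of accepting a partial answer.
        prompt += "\nYour last answer was missing: " + ", ".join(missing)
    raise ValueError("Model never produced a complete answer")
```

Nothing about the model changes between a raw call and a harnessed one; the jump in usefulness comes entirely from the structure wrapped around it.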
The same principle applies to human work. People benefit from fast thinking too: trying ideas quickly, sketching solutions, and exploring alternatives. But unmanaged fast thinking leads to inconsistency, rework, and false confidence. Lean organizations learned long ago that improvement comes from systems that support learning, including problem framing, standard work, PDCA, coaching, and escalation.
AI gives everyone on your team the ability to think fast. The question is whether your management systems provide the slow, deliberate counterweight that turns speed into learning and sustained results.
Three Common Management Mistakes
When organizations struggle with AI, their responses tend to fall into three predictable management mistakes. Each is well intended. Each produces disappointing results for different reasons.
Lock everything down: Require approvals. Route experimentation through IT. Prohibit external tools. This response is driven by legitimate concerns about security, misinformation, and liability. But when it takes weeks to get approval to try something, people stop experimenting, and learning by doing quickly dies. This often shows up as a blanket ban: no information, sensitive or not, can leave the four walls of the building. There is truth to the concern. Nuclear secrets, product designs, cost data, personally identifiable information, and health data absolutely require strict controls. But much of the work where AI could add value, such as training, reviewing, problem solving, kaizen, and coaching, does not involve proprietary intellectual property at that level. Sorry, but most company data is not that special. Treating all information as equally sensitive stifles speed and learning and promotes shadow usage as individuals turn to personal tools outside the system.
Mandate AI use without discrimination: In some organizations, particularly in software and technical fields, some leaders have begun requiring AI usage. The message, sometimes explicit and sometimes implied, is that if you are not using AI, you will not be working there. The intent is speed and competitiveness. The problem is that AI is genuinely ready for some tasks, such as boilerplate code, exploratory analysis, and draft generation, but not yet reliable for others, especially safety-critical, highly technical, or tightly constrained work. When usage is mandated without clear guidance, problem classification, or quality checks, people either trust AI where it should not be trusted or quietly work around the mandate to get the job done correctly.
Provide tools and remove all structure: At the other extreme, some organizations simply buy Microsoft Copilot, give everyone access, and tell people to experiment. This sounds empowering, but without shared routines and standards, learning remains individual and fragile. One person figures out something useful. Another gets inconsistent results and gives up. A third never knows what is possible. There is no mechanism to capture, standardize, or spread what works. This is lean wallpaper applied to AI. Everyone has access, but AI is not integrated into the work.
In all three cases, the failure is the same. Management treated AI as a tool decision rather than a system design problem.
What Good Management Looks Like
The inverse of those three mistakes is not complicated. It means clarifying which problems are ready for AI and which are not before handing people a tool. It means establishing lightweight routines for experimentation and reflection (not one-time training sessions) and running repeated PDCA cycles on AI use itself. It means treating standards as enablers rather than constraints, for example requiring that a problem-solving report runs through a coaching check before submission. It means building feedback loops for both the humans and the AI tools, because prompts need refinement just as skills do. And it means designing escalation paths where bad AI output leads to learning, not blame.
None of that is revolutionary. It is basic management discipline applied to a new capability. The challenge is that most organizations skip it, treating AI as a procurement decision rather than a system design problem.
Closing the Loop
In the first article of this series, I laid out the premise that getting results from AI requires three factors — technology, behavior, and management. I argued that technology is the easy part of the equation. Behavior, the skills and habits that make technology effective, must be deliberately built over time. Management is what ties the two together, designing the systems in which people and tools actually produce results. The equation remains:
Impact = Technology × Behavior × Management
If any factor approaches zero, the product collapses. That was true for andon boards in the 1960s, for ERP systems in the 1990s, and it will be true for AI in the 2020s. The pattern repeats because the underlying logic has not changed. Technology creates potential. Behavior converts potential into action. Management sustains action into results.
What has changed is the clock speed.
At Toyota, the progression from installing an andon system to developing the full chain of human response — team member to team leader to group leader to maintenance to engineering — unfolded over years. The technology evolved incrementally. Leaders had time to observe, coach, and adjust.
AI compresses that timeline. The technology is improving month to month rather than decade to decade. People are expected to adopt new tools while the tools themselves are still changing. The gap between what AI can do and what organizations know how to do with it is widening faster than most management systems can close it.
It is worth noting that Toyota and Denso are not standing still. In 2025, five Toyota Group companies launched the Toyota Software Academy to develop AI and software skills across organizations, with roughly 100 training courses. Toyota simultaneously launched its Global AI Accelerator (GAIA) to expand AI research, development, and implementation across 11 categories ranging from manufacturing and vehicle engineering to knowledge retention and office productivity. Toyota explicitly rooted GAIA in its longstanding practice of jidoka: automation with a human touch.ii Denso, for its part, partnered with the University of Tokyo on a program to enhance lean manufacturing with AI, specifically targeting the transfer of tacit knowledge from experienced engineers to newer workers.iii
Having spent years working for Toyota, I can infer from these patterns and from what the companies have reported publicly. The structure of these initiatives suggests a management approach that mirrors the fast-and-slow dynamic described in this article: rapid, controlled experimentation at the local level, where engineers and team members try AI tools on real problems, and slower, more deliberate decisions at the management level about how to standardize, scale, and integrate what works. That combination, fast learning within a disciplined management structure, is not new for Toyota. It is how they have always absorbed new technology. AI just raises the clock speed.
That is the new challenge. Not a new formula, but a faster one. The same three variables, multiplying together, but with less room for the slow institutional learning that past technologies allowed.
I do not think the answer is simply to speed up management to match AI. Hasty management systems are fragile ones. The answer is what Toyota figured out decades ago with production: build stable principles that can absorb rapid change. Standardized work does not resist variation; it absorbs it and creates the next new standard. PDCA does not slow you down; it keeps speed from becoming reckless. Coaching does not compete with tools; it teaches people how to use them.
The organizations that will benefit most from AI are not the ones moving fastest. They are the ones whose management systems can learn at the speed this technology demands, without losing the discipline that makes learning stick.
We are early in this experiment. Most organizations are still at the stage of buying the technology and hoping for results. The ones that pull ahead will be those that invest equally in the behaviors and management systems that turn capability into performance. That has always been the difference between organizations that sustain improvement and those that do not. AI does not change that. It just raises the stakes.
Technology is powerful. Behavior takes time. Management makes it last.
Humans + AI > Problems. But only if all three parts of the impact equation are in play.
I will be hosting a half-day workshop at the LEI Lean Summit in Houston, March 12-13. The topic is Problem Solving and AI: how you can harness AI to accelerate your problem solving without letting it do the thinking for you.
I’ll cover the “Five Levels of AI Framework” and give some examples you can try out using different tools at each of the five levels. We’ll start with some basic prompt engineering advice and show some more advanced features you can build on your own.
For the workshop you will need to bring a laptop with some type of large language model (LLM) access and a real problem to work on. Hope to see you there!