At a time when many fear the imminent takeover of their jobs by intelligent robots, it was encouraging to read the Lean Sensei Women's thoughtful post, Why We Believe People-Free Plants and Services Prevent Learning, which provides a variety of perspectives that essentially argue that "the assumption that people-free plants or services would fare better is totally overlooking what humans bring into the operation: thinking, and particularly lean thinking!"
Their stories remind me of how my own techno-optimistic view was uprooted during my first visit to a Japanese lean factory eight years ago. The facility was Toyota's industrial equipment plant in Takahama, our first stop on a Kaizen Institute benchmarking tour.
As we entered the building, I was anticipating an "advanced" environment replete with sleek robots, bright lights, high-tech displays, and workers sitting behind computer screens. What I saw was almost the exact opposite: bare, dimly lit production areas, whiteboards and hand-written signs, and busy workers engaged in multiple skilled tasks. Could this really be the legendary Toyota?
A few minutes later, as we walked down an assembly line, employees suddenly came running from several directions towards a work area. I was a little confused until somebody pointed out that a worker had pulled the andon. The rapid teamwork that took place to resolve the problem made quite an impression on me. I thought, "So that's what this is all about!"
During the ensuing eight years, I learned many lessons about just how important humans are to high-performing organizations. But machines are a lot smarter today thanks to recent developments in artificial intelligence (AI), and technology is advancing at an exponential rate. Are we approaching a tipping point where robots will take over the shop floor thinking I saw in Takahama?
To explore this question, we need to look at what AI can and can't do. Most AI activity, and the accompanying exuberance, centers on a branch of machine learning called deep learning, which accounts for the jaw-dropping progress in speech recognition, image recognition, and language translation. It is also touted as the killer app for robots with human-like intelligence.
Deep learning loosely simulates aspects of the human brain: arrays of artificial "neurons" and "synapses" form networks that correct themselves during training, through a technique called backpropagation. Instead of being programmed for specific tasks, the networks "learn" from examples.
We can train AI to recognize a dog, for example, by feeding it images with or without dogs, labeled accordingly. The program learns by trial and error rather than by following hand-written rules, which makes the parallel with the human brain seem plausible, and leads many to believe that we are on the threshold of synthesizing human thought.
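To make that concrete, here is a minimal sketch of the idea: a single artificial "neuron" that learns to separate two labeled clusters purely from examples. The data and names are my own toy invention, a stand-in for real images, and the model is the one-neuron special case of the deep networks discussed here; it is not how any production system is actually built.

```python
# Toy illustration: one "neuron" learning from labeled examples alone.
# No dog-specific rules are programmed in; only the labels teach it.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for images: 2-feature points from two clusters,
# labeled 0 ("not a dog") and 1 ("dog").
X = np.vstack([rng.normal(loc=-1.0, size=(100, 2)),
               rng.normal(loc=+1.0, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = rng.normal(size=2)  # "synapse" weights, initially random guesses
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Trial and error: guess, measure the error, nudge the weights, repeat.
for _ in range(1000):
    p = sigmoid(X @ w + b)               # current guesses, between 0 and 1
    error = p - y                        # how wrong each guess is
    w -= 0.1 * (X.T @ error) / len(y)    # adjust weights to reduce error
    b -= 0.1 * error.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.0%}")
# High accuracy -- yet the model has no idea what a "dog" is,
# only a boundary between statistical patterns.
```

Scaling this up to millions of neurons changes the capacity, not the learning principle: the network still knows only the patterns in its examples.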
The gap between artificial and human intelligence, however, is actually much larger than it appears. In his paper “Deep Learning: A Critical Appraisal”, psychology professor and AI entrepreneur Gary Marcus cites a number of areas where deep learning falls short:
- Deep learning is data-hungry. AI has to see millions or even billions of images in order to reliably distinguish, say, a dog from a cat. A small child, on the other hand, can learn to make this distinction based on a few encounters.
- Deep learning can only identify a dog by recognizing pixel patterns. It does not learn that a dog is a living creature, or what a living creature is. Similarly, AI can’t understand concepts like “process” or “customer value”, let alone the similarities between a factory process and a healthcare process.
- Deep learning only works well in a stable environment, and has a tendency to crash or give highly unreliable answers when conditions change (the sketch after this list illustrates the failure).
- Although deep learning can recognize correlations, it cannot identify cause and effect relationships.
- Deep learning lacks common sense and other aspects of everyday thinking that we take for granted. “No healthy human being would ever mistake a turtle for a rifle or a parking sign for a refrigerator,” writes Marcus.
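The instability point is easy to demonstrate. Here is a minimal sketch, again with synthetic data of my own invention and a simple linear model (from scikit-learn) standing in for a deep network: train under one set of conditions, then evaluate after conditions shift.

```python
# Toy demonstration of brittleness when conditions change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(shift):
    """Two labeled clusters; `shift` moves the whole world sideways."""
    X = np.vstack([rng.normal(loc=-1.0 + shift, size=(200, 2)),
                   rng.normal(loc=+1.0 + shift, size=(200, 2))])
    y = np.concatenate([np.zeros(200), np.ones(200)])
    return X, y

X_train, y_train = make_data(shift=0.0)
model = LogisticRegression().fit(X_train, y_train)

X_stable, y_stable = make_data(shift=0.0)    # same conditions as training
X_shifted, y_shifted = make_data(shift=3.0)  # conditions have changed

print("stable conditions: ", model.score(X_stable, y_stable))    # ~0.9
print("shifted conditions:", model.score(X_shifted, y_shifted))  # ~0.5
# The shifted data is just as cleanly separable, but the learned
# boundary no longer applies: the model answers confidently and wrongly.
```

Real-world failures are subtler than this, but the pattern is the same: the model has no way to notice that the world has changed, let alone ask why.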
Some argue that rapid advances in AI, fueled by a seemingly limitless supply of computing power, will soon close these gaps. However, brain science is also advancing very rapidly, and recent studies indicate that our thinking brain is far more complex and sophisticated than previously believed.
In her excellent 2017 book How Emotions Are Made, neuroscientist and author Lisa Feldman Barrett explains that the classic representation of the brain, which describes rational thought as an autonomous function within the neocortex, has been thoroughly disproven.
“[The] illusory arrangement of layers, which is sometimes called the ‘triune brain’, remains one of the most successful misconceptions in human biology,” writes Barrett. “Carl Sagan popularized it in The Dragons of Eden, his bestselling (some would say largely fictional) account of how human intelligence evolved. Daniel Goleman employed it in his best-seller Emotional Intelligence. Nevertheless, humans don’t have an animal brain gift-wrapped in cognition, as any expert in brain evolution knows.”
Barrett shows that the rational and irrational components of our thinking are not independent and localized, but deeply intermeshed, with thought processes traversing back and forth in a figurative game of catch-ball between our immediate sensory input and our remembered experiences.
Our irrational self, therefore, is not just a troublesome aspect of our consciousness – it is essential to how we think. And our thinking brain is far more complex than a glorified version of the neural networks we create with AI.
What I find exciting about these observations is that the creators of the Toyota Way appear to have subscribed to this more up-to-date view of human thinking. Whether or not one attributes this to an Eastern outlook, much of the wisdom passed on from lean's original pioneers highlights the importance of direct experience, and the view that workplace situations cannot be understood through purely rational thought. We can see this in:
- The emphasis on developing people by creating opportunities for them to learn by experience
- Reliance on genchi genbutsu (go to see) as opposed to rules, formulas, and hearsay
- The reluctance of Taiichi Ohno and his disciples to use words and labels
- The use of metaphors like nemawashi (prepping the soil)
- Acceptance of human fallibility – instead of denying it, we develop awareness of it and skills for mitigating it
- Encouragement of "out-of-the-box" thinking and openness to unconventional ideas that might not at first sound rational
Of interest here is Jon Miller's recent clarification of the phrase "respect for people". "The original Toyota Way concept is respect for ningensei (人間性), which is 'humanity' or 'human nature'," he writes in the Gemba Academy Blog. "The word is not 'people' but rather the nature of people. It describes what makes us human."
“One can take ningensei further,” Miller told me in an email. “The ningensei of people means their human nature. We can also talk about the ningensei of a production system, or of AI. In other words, what features give humanity to AI, a policy or a production system?”
It seems to me that this is a key area where lean and hierarchical management part ways. Lean is a countermeasure for a complex physical world that is, as Deming taught us, rife with variation. The human brain has evolved to cope with such a world, and is our best tool for solving the myriad problems that inevitably crop up in the workplace. Yes, artificial intelligence can help us, but if we want to achieve outcomes that are beneficial to humans, we have to teach it ningensei.
Traditional management theorists would probably have little time for this distinction. Their model is based on a deterministic view of the world where physical and social phenomena occur in relatively predictable ways. For them, it makes sense to rely on computerized management reports rather than going to the gemba, and for similar reasons, to rely on robots to do our shop floor thinking.
In other words, if we think like machines, we will be more inclined to let machines do our thinking for us.
There’s no question that artificial intelligence will take over much of the repetitive and predictable thinking in the workplace, leading to huge productivity gains. But as the Lean Sensei Women have illustrated in many ways, organizations can’t learn and evolve as we’d like them to without humans to guide them, at least for the foreseeable future.
So learning was what the andon incident at Takahama was really about. Each participating team member brought his or her own unique experience to the table, and the issue was resolved in seconds thanks to a merging of their collective knowledge and perceptions. This ability to bring diverse viewpoints together towards shared goals is the “killer app” that has allowed humans to evolve in a challenging world, and when it comes to adapting to change and finding ways to get better, it’s still the best thing we have going.