The Default-Thinking Method of Problem Solving
You’ve been here before. It’s Monday morning and you walk into the office only to have your boss call an urgent meeting to “streamline processes.” You haven’t thought about this enough to have an opinion, but you go anyway.
You know how to deal with this. You’ve done it before. You turn on your default brain and start solving the problem. You form a hypothesis about what the problem is, find some data to analyze, and presto, out comes some efficiency.
Most of the time this works well enough, but not always. Sometimes—more often than we’d like to admit—things change: markets shift and consumers behave in unpredictable ways. Now we’re rudderless.
In the wonderful book The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems, the authors write:
We are forever in the midst of change, but not all of it is seismic. It’s vital for a business to understand the difference between the uncertainties present on an average day and the uncertainties of a major cultural shift. … Business issues can be categorized along a problem scale within three levels of complexity. This framework is useful for distinguishing very complex problems from those that are actually manageable.
The Three Levels of Business Problems
1. A clear-enough future with a relatively predictable business environment. You know what the problem is, and you can apply a proven algorithm to fix it. “If I invest $1 in media spending for advertising, I know that I will get something like $1.50 back because of market stimulation.” “The industry has average admin costs of 8 percent of total revenue. Mine are 10 percent. We should cut that back.”
2. Alternative futures with a set of options available. You have a feel for the problem and might have seen something like it before. It makes sense to test your hunch as a hypothesis. For example, “Our sales numbers are down even when we invest in more salespeople, but we have seen the same pattern in the European Union and China. We might be hiring too many new salespeople too quickly and expecting them to deliver the same payback that the existing salespeople are delivering.”
3. High level of uncertainty, with no understanding of the problem. You simply don’t know what the problem is, let alone the solution. You can see that something is wrong, but have no clear idea about what to do. For example, “Our media division is losing business to internet start-ups,” “We are investing more in customer service, but our customers are becoming increasingly dissatisfied with us,” and “We are designing products that seem right for the marketplace, but the marketplace isn’t interested.”
Most of our problems tend to fall into levels 1 or 2. Uncertainty, remember, happens when we fail to know the range of possible outcomes (and, correspondingly, their probabilities). Level 3 problems are the really messy ones.
Solving the problems of levels 1 and 2 is generally much easier. We use default thinking.
The default problem-solving model has its roots in what can be called instrumental rationalism. At the heart of the model is the belief that business problems can be solved through objective and scientific analysis and that evidence and facts should prevail over opinions and preferences. To get to the right answer, so the thinking goes, you should adhere to the following principles of problem solving (a rough code sketch of the pipeline follows the list):
1. All business uncertainties are defined as problems. Something in the past caused the problem, and the facts should be analyzed to clarify what the problem is and how to solve it.
2. Problems are deconstructed into quantifiable and formal problem statements (issues). For example, “Why is our profitability falling?”
3. Each problem is atomized into the smallest possible bits that can be analyzed separately— for example, breaking down the causes of profitability into logical issues. This analysis would include “issue trees” for all the hundreds of potential levers for either decreasing costs or growing revenue (customer segments, markets, market share, price, sales channels, operations, new business development, etc.).
4. A list of hypotheses to explain the cause of the problem is generated. For example, “We can increase profitability by lowering the cost of our operations.”
5. Data is gathered and processed to test each hypothesis— all possible stones are turned and no data source is left untouched.
6. Induction and deduction are used to test hypotheses, clarify the problem, and find the areas of intervention with the highest impact, or what is commonly called “bang for the buck.”
7. A well-organized structure of the analysis is deployed to build a logical and fact-based argument of what should be done. The structure is built like a pyramid that develops the supporting facts, some subconclusions, and an overall conclusion and then ends with a prioritized list of interventions to which the company should adhere.
8. All proposed actions are described as manageable work streams or must-win battles for which a responsible committee, or person, is assigned.
9. Performance metrics and a proposed time frame with follow-up monitoring are put in place for each committee to complete the task.
10. When all work streams have been completed, the problem is solved.
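To see how mechanical this pipeline really is, here is a minimal, purely illustrative sketch in Python. Every name and number in it, the issue tree, the hypotheses, the evidence scores, the threshold, is invented for illustration; it is not from the book, just one way to picture steps 3 through 10.

```python
# A hypothetical sketch of the hypothesis-driven pipeline above.
# All names and numbers are invented for illustration.

# Step 3: atomize the problem into an "issue tree" of potential levers
# (shown only to mirror the step; the real work below tests hypotheses).
issue_tree = {
    "Why is our profitability falling?": {
        "costs": ["admin overhead", "operations", "sales channels"],
        "revenue": ["price", "market share", "customer segments"],
    }
}

# Step 4: generate hypotheses about the cause of the problem.
hypotheses = [
    "Admin costs are above the industry benchmark",
    "Price discounts are eroding margin",
]

# Steps 5-6: gather data and test each hypothesis against it.
# Note the built-in assumption: only what shows up in `evidence`
# can ever confirm or reject anything. The unmeasured is invisible.
evidence = {"Admin costs are above the industry benchmark": 0.8}

def supported(hypothesis: str) -> bool:
    return evidence.get(hypothesis, 0.0) > 0.5  # toy evidence threshold

# Steps 7-10: keep the supported hypotheses and assign each one as a
# "work stream"; when every work stream is completed, the problem is
# declared solved.
work_streams = [h for h in hypotheses if supported(h)]
print(work_streams)  # -> ['Admin costs are above the industry benchmark']
```

The sketch also makes the method’s blind spot visible: the only things that can ever matter are the ones someone decided to measure and put into the evidence in the first place, which is exactly where the assumptions below come in.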
When done correctly by competent people, this can be a thing of beauty. This is, in part, why we hire consultants like me, McKinsey, or Bain. We believe they can solve any problem. The idea that management is a type of science with a repeatable formula for any problem is not new. “It can be traced back to the nineteenth century, when positivism, the prevalent philosophy of the day, argued that you could objectively measure reality.” The founding father of the idea of management science, if there was one, was Frederick Winslow Taylor.
Taylor left a prestigious education at Harvard to work at steel companies throughout Pennsylvania. Whereas most manufacturing and factory plants had cobbled together their organization through rules of thumb and common sense, Taylor was the quintessential positivist, seeking scientifically validated measurements, or properties. He followed workers, clicking his stopwatch every time they started and stopped, measuring the time it took to complete each discrete action of hauling their large iron ore loads. Through his enormously successful tenure at steel companies, he extracted generalized principles of management that he used to create the world’s first business case study. It wasn’t long before a partnership between Harvard’s School of Applied Science and its brand-new business school came calling. Might Taylor bring together his experience into something the school could teach its young students about productivity? Taylorism, based on the following premise, was born:
To work according to scientific laws, the management must take over and perform much of the work which is now left to the men; almost every act of the workman should be preceded by one or more preparatory acts of the management which enable him to do his work better and quicker than he otherwise could.
Today’s problems seem infinitely more complex than counting iron ore hauls, and yet we still attack them with the same general approach of Taylorism. This is what most MBA programs, including mine, taught: people work harder with the right incentives, optimize and perfect workflow, analyze every movement looking for efficiencies, remove discretion where possible because it creates variance, etc. Of course we’ve evolved Taylorism; today we call it “lean” and “Six Sigma” and whatever else.
For most of us, default thinking is so familiar— the very air we breathe— that we are no longer able to explain it or even to see it. For that reason, if we really want to understand why we continue to get people wrong, we need to unpack the fundamental assumptions that make up the culture of most of our days.
Is this really how we approach problems? What are the assumptions we’re making when we take this approach?
Assumption 1: People Are Rational and Fully Informed
One of the unintentional consequences of solving problems by testing logical hypotheses is that you are forced to assume that people are rational decision makers: aware of their needs, fully informed of all their choices, and capable of making the best choice. The reason is simple: it is very difficult to test a hypothesis about things that you can’t measure objectively. It’s even harder to test something that is deeply personal, cannot be decoded into explicit descriptions, and requires a lot of interpretation. Think about the question “Are you a good parent?” or “Do you have good taste?”
A simple answer misses most of what matters about parenting and good taste. To deal with this problem, companies base their problem solving on what can objectively be described, quantified, and analyzed without too much interpretation.
So we default to measuring perceptions and desires; more specifically, we end up with people’s perceptions of reality. There is nothing wrong with this, but it is limited, and we should be aware of those limits. These are not the only two aspects of humanity that matter. And even if they were, the way default thinking solves problems rarely offers us any understanding of how they work. We find some spurious relationship and assume causation when, in reality, it’s merely a correlation. When it changes, we have no idea why. We’re more complex than that.
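To see how easily a spurious relationship shows up, here is a small hypothetical sketch (plain Python with NumPy); the series names are made up and stand in for any two metrics that happen to trend over the same period:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent random walks, standing in for "ad spend" and
# "customer satisfaction". By construction, neither influences the other.
ad_spend = np.cumsum(rng.normal(size=500))
satisfaction = np.cumsum(rng.normal(size=500))

# Because both series simply drift over time, their correlation often
# lands far from zero even though there is no causal link at all.
corr = np.corrcoef(ad_spend, satisfaction)[0, 1]
print(f"correlation between two unrelated series: {corr:.2f}")
```

Change the seed and the “relationship” strengthens, weakens, or flips sign, which is the point: when the correlation moves, the model has nothing to say about why.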
Most recent studies evaluating how people buy reveal us to be far more chaotic creatures. We rarely know what we want. We almost never fully grasp the market and, most important, we almost always buy something at a different price than what we thought we would. Even studies of people with written shopping lists (milk, eggs, apples, etc.) reveal that they stray far from their original intentions once they reach the grocery store.
We measure intentions because they are relatively easy to study. But as Dr. House says, “everybody lies.”
People think they cook a lot, but they really don’t. It’s not that they want to lie to other people; they are simply lying to themselves.
There is often a great distance between what people say and what people do.
It’s not that people don’t care about anything. They just don’t care as much as most companies assume that they do. And most often, people couldn’t care less. When they buy one kind of chocolate bar rather than another, it is rarely because they have a strong brand preference. More often than not, it is because the chocolate was closer on the counter, it had a color that fit the mood, or it simply came packaged as a “two for one.” The good news for companies is that we buy a lot of stuff. The bad news is that we don’t always know why.
Assumption 2: Tomorrow Will Look Like Today
A good example of this attitude can be found in a 2006 article in the McKinsey Quarterly. In identifying trends that will shape the business environment, the article says that management itself will shift from an art to a science:
Long gone is the day of the “gut instinct” management style. Today’s business leaders are adopting algorithmic decision-making techniques and using highly sophisticated software to run their organizations. Scientific management is moving from a skill that creates competitive advantage to an ante that gives companies the right to play the game.
We’re bombarded with the word “science.”
When thought leaders use the word science to describe a business discipline like marketing, retail design, negotiation skill, or strategy, we are led to believe that these disciplines can be predicated on scientific truths. Does the science of shopping have the same universal laws as Darwin’s theory of natural selection?
This is part of the reason we trick ourselves.
Rarely do we have to ask, “Where does the hypothesis come from?” But by assuming that the hypothesis is based on some kind of universal law, we fool ourselves into believing that the assumptions of the current moment will also hold true in the future. In these situations, the idea that management is a kind of natural science blinds us rather than enlightens us.
Assumption 3: Hypotheses Are Objective and Unbiased
Here is a great example:
In the toy industry, the dominating idea is that children have a short attention span and need toys that stimulate their desire for instant gratification. A toy, it is assumed, must grab the attention of the child in the store, and he or she should not need any skills to play with it. Another assumption is that physical toys are losing ground to digital toys because the former are too tedious and not stimulating enough.
In reality, when you study children— and if you read the majority of academic literature about children—you will probably reach the opposite conclusion: children are highly motivated by play experiences that require skill and mastery and that can give them a sense of hierarchy and accomplishment. Digital play is gaining in popularity precisely because it requires a very sophisticated skill set; it can be played for thousands of hours and it gives the players clear feedback with levels and hierarchies.
Over time, companies and people create “commonsense” ideas about the world and how it works. We take things as given and rarely challenge them.
The French anthropologist Pierre Bourdieu used the term habitus to describe the somewhat hidden but always present dispositions that shape our perceptions, thoughts, and actions. In his view, many things that we regard as common sense are in fact shaped by the social context we are in. Over time we learn what is normal and taken as a given through our social interaction with the world— our family, our society, our friends, our work— and our perceptions become a kind of automatic understanding of the world. This understanding enables us to act normally without really thinking about it. Over time, companies similarly create commonsense ideas about the world. Certain things are simply taken as a given, no longer contested: for example, the idea that designers and engineers will never see eye-to-eye, or that open offices provide more opportunities for collaboration.
This is one reason consultants can be effective: they come in with a different understanding and offer opinions— intentionally or not— that challenge some of these commonsense views.
In terms of default thinking:
A company might think that it has created an objective set of possible hypotheses to test. But in reality each hypothesis is always based on something. Very often, that thing is a product of culture, not of science. And once our assumptions are firmly rooted in our cultural understanding, they have a way of becoming ever more entrenched.
Then confirmation bias kicks in. We look for opinions, ideas, and facts that support our beliefs.
In the end, our hypotheses are “almost never based on objective truth.” How could they be otherwise? But of course the point is to know the limitations of the tools we’re working with and hope that awareness allows us to make better decisions by using better tools.
In Leo Tolstoy’s nonfiction magnum opus The Kingdom of God Is Within You, he writes:
“The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him.”
If you can’t question assumptions at your company, you probably can’t question anything.
Assumption 4: Numbers Are the Only Truth
“Not everything that can be counted counts, and not everything that counts can be counted.” — attributed to Albert Einstein
The heart of default problem solving is quantitative analysis.
It has become so dominant that companies tend to forget that the world consists not only of quantities but also of qualities. Roger Martin, the dean of the Rotman School of Management, argues that companies will simply lack the ability to find the full potential of growth opportunities if they focus only on quantitative models: “The greatest weakness of the quantitative approach is that it decontextualizes human behavior, removing an event from its real-world setting and ignoring the effects of variables not included in the model.”
Default thinking catalogs the world into properties: how big is the market, how many people will buy our products, how many people know our brand, which category is growing fastest, which geography is the most profitable, which customers have the highest loyalty, and which technologies have the highest adoption.
Yes, all of those have a quantitative side, but they also have a qualitative side that might shed light on things. If you know that a certain percentage of your customers are happy with their interactions with your company, that’s different from knowing what the experience of interacting with your company is actually like. Both are needed to inform decisions.
Numbers are great for covering your ass, so they tend to trump everything else. Numbers, however, limit ideas and solutions to only one right answer.
For obvious reasons, the past does not include data on things that haven’t happened or ideas that have not yet been imagined. As a result, data analysis of the future tends to underestimate or even ignore past events or conditions that can’t be measured while overestimating those that can. Nowhere is this more visible than in business case studies.
“In our view,” the authors write, “the quantitative obsession leads to a sorely diminished approach to future planning. It tends to be conservative rather than creative because it implicitly favors what can be measured over what cannot.”
Assumption 5: Language Needs to Be Dehumanizing
Business and management science has become a world in itself, and the language of business has become increasingly technical, introverted, and coded. You don’t fire people anymore; you “right-size the organization.” You don’t do the easiest things first; you “pick the low-hanging fruit.” You don’t look at where you sell your products; you “evaluate your channel mix.” You don’t promote people; you “leverage your human resources.” You don’t give people a bonus check; you “incentivize.” You don’t do stuff; you “execute.” You “synergize, optimize, leverage, simplify, utilize, transform, enhance, and reengineer.” You avoid “boiling the ocean, missing the paradigm shift, having tunnel vision, and increasing complexity.” You make sure that “resources are allocated to leverage synergies across organizational boundaries and with a customer-centric mind-set that can secure a premium position while targeting white spots in the blue ocean to ensure that there is bang for the buck.” It can become almost poetic.
Talk about jargon.
The German philosopher Jürgen Habermas has developed an extensive analysis of what happens when technical language outstrips the language of everyday life. He argues that the change from a normal, everyday language to a technical, specialized language signals a shift in power. When technical language conquers the simple language of the everyday, it is a sign that the system is gaining ground and everyday human reality, what he calls the lifeworld, is losing ground. He goes so far as to call this shift a colonization of the lifeworld: everyday life is colonized by a force of bureaucratization and rationalization that it cannot defend itself against. Such a shift leads to a far more systematic, rule-based, and technical idea of the world. It widens the gap between who we really are and the systems that we have become.
* * *
Of course, default thinking doesn’t always work. You know you’ve stepped out of default-thinking space when leaders say “think outside the box.” Problems arise when you try to solve the third type of problem (where there is a high level of uncertainty) with the same thinking you use to fix problems of the first two types.
If you enjoyed this post, you’d love the book The Moment of Clarity: Using the Human Sciences to Solve Your Toughest Business Problems.