Organizations use many different strategies to make decisions. The following is a list of strategies seen in practice. They all lead to decisions, but they all run the risk of not making good use of the available people and information. Have you ever experienced these?

1. Decision by Running Out of Time: This is the most common form of decision-making. There may be some effort to develop criteria and alternatives, but time often runs out before there is any effort to ensure that a robust decision is made.

2. Decision by Chaos: The president of the company says, "I want our new product at the Atlanta trade show in two weeks." The show is in two weeks. There is no rational way to prepare for the show and make robust decisions. The decisions made in the chaos may need revisiting after the show, and work will need to be redone. Some people prefer to work in a chaotic environment, and when in positions of power, they will manufacture chaos as the working environment.

3. Decision by Fiat (or Decision by Authority): This is a very common style in autocratic organizations, where a manager or someone else in authority decrees that a certain alternative is his/her favorite. It is often seen when the boss's idea is chosen in order to preserve the relationship with him/her. This is more justification than decision-making.

4. Decision by Coercion: A champion for one alternative pressures his/her colleagues into submission. Often the loudest voice wins, the others having given up. One colleague referred to this style as "hijacking the process."

5. Decision by Competition: Here concern for who wins is most important, as in most sports. This is often a win-lose situation, and the relationships among individual team members are not important.

6. Decision by Voting: Democracy works, but it does not often make the best possible choice. This decision-making process is a weak form of compromise. Think how most products and businesses would operate if they were designed the same way we elect a president.

7. Decision by Inertia: This style is based on "We did it that way before," which may result in a robust decision, if the previous one was robust. But not much progress or innovation is made using this style. Sometimes the tough decision is knowing when to innovate and when to keep the cruise control on.
I have seen these all. Do you know of other dysfunctional decision making practices, ones that you have experienced?
What can you do to defuse these styles? The first step, as in any good program, is to realize that there is a problem and to put a name to it. Hopefully, this list can get you through that step. The second is to get general agreement that decision-making is a process. Both of these steps may be hard, as some managers don't want any rationality in the process. There are many ideas in my other blogs and on my web site. Do you have ways of managing these?
Labels: decision making, decision making process
Bad decisions don't stick – the issue gets revisited again and again, people who agreed with the decision do something else, and resources get wasted on a bad idea. Here are four surefire ways to make a bad decision.

1. Make decisions too soon
Although books like the very popular Blink suggest that generally your first reaction is right, there is much proof to the contrary. Sure, if the house is burning and your first thought is to get out, that is a good thing. But, if your decision has many options, multiple factors to consider, or has the needed knowledge spread amongst multiple people, your knee-jerk decision is probably a poor one.
But Paul Nutt, in his book Why Decisions Fail, a study of over 400 business decisions, found that a leading cause of decision failure is "premature commitment". One area where decisions are matched to time is Agile Software Development, which encourages "delaying commitments to the last responsible moment". A great cartoon by Chris Matts explains this methodology.
2. Ignore uncertainty
Any decision based on estimates about the future is uncertain, and virtually all technical and business decisions are of this type. But virtually all estimates suck. In an earlier blog post I told of the result of a simple experiment showing that people can't even make simple estimates, much less real business estimates. There are no "accurate estimates": an estimate is a distribution with a most likely value and an uncertainty about that value, and this uncertainty is inherent in any real project.
If I ask you to estimate the time it will take you to get to work, what will you tell me? You could say, "Oh, about 15 minutes", but is this the average time, the longest, or the shortest? I won't know unless you give me more information. If I am planning on meeting you at work and I want to be 90% sure you will be there, then I need to know more about your commute than "15 minutes".
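The commute example can be made concrete. Given a handful of commute times (the samples below are hypothetical), the number people tend to quote is near the mode of the distribution, while the number a planner who wants 90% confidence needs is a high percentile:

```python
import statistics

# Hypothetical commute times in minutes. "About 15 minutes" is the
# mode -- the most common value -- not a number you can plan around.
samples = sorted([12, 14, 15, 15, 16, 18, 22, 25, 31, 40])

def percentile(data, p):
    """p-th percentile of sorted data by linear interpolation."""
    k = (len(data) - 1) * p
    lo, hi = int(k), min(int(k) + 1, len(data) - 1)
    return data[lo] + (data[hi] - data[lo]) * (k - lo)

typical = statistics.mode(samples)    # what people quote: 15
plan_for = percentile(samples, 0.90)  # what a 90%-confidence plan needs
print(typical, plan_for)
```

The mode here is 15 minutes, but planning to be 90% sure requires budgeting roughly twice that, which is exactly the information "15 minutes" hides.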
3. Don’t itemize the important factors
It isn’t really clear how people naturally make decisions. One theory is that you use only the most important factor that can help you differentiate amongst the options. So, for example, if you want to buy a new car, and the most important factor is that it be red, then you choose the red car. If there is more than one red car available, then you use the next most important factor (e.g. cost or body style) and so on. This model is a little simplistic, but it does emphasize that you need to understand the factors to make a decision.
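This "most important factor first" model (often called lexicographic elimination) can be sketched in a few lines. The cars and factor ordering below are hypothetical:

```python
# Lexicographic choice: keep only the options that score best on the
# most important factor, then break ties with the next factor, etc.
cars = [
    {"name": "A", "color": "red",  "cost": 24000},
    {"name": "B", "color": "blue", "cost": 21000},
    {"name": "C", "color": "red",  "cost": 22000},
]

def lexicographic(options, factors):
    """factors: score functions in priority order; higher is better."""
    remaining = list(options)
    for score in factors:
        best = max(score(o) for o in remaining)
        remaining = [o for o in remaining if score(o) == best]
        if len(remaining) == 1:
            break
    return remaining[0]

choice = lexicographic(cars, [
    lambda c: c["color"] == "red",  # most important: it must be red
    lambda c: -c["cost"],           # tie-breaker: cheaper is better
])
print(choice["name"])
```

The first factor eliminates the blue car; the cost tie-breaker then picks the cheaper of the two red cars. Note how a factor that never gets reached has no influence at all, which is the simplification the paragraph above warns about.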
There are methods that can help a team, even a team with little agreement about what is important, develop a set of factors that will help make a good decision.
4. Be blind to available information
A team might consider all the options and important factors to develop a robust decision. They write up their recommendation, send it to the ultimate “decider” and he ignores their results and makes a different decision. What a waste of time and money. One manager told me that he didn’t follow the recommendation because the team was not privy to all the information he had. This only compounds the sin.
Practice all four of these and you are just about guaranteed to make weak decisions. How will you know? Your decisions won’t stick - they will be revisited again, people will implement actions not chosen, or the results of the decision will be invisible next week.
If you have examples of these methods please share them.
If you know of other surefire ways to make bad decisions then please share these also, so they can be added to this list.
Labels: Blink, decision making, poor decisions, uncertainty
The new economy forces new practices in making critical business decisions. Cost overruns, late to market, and rescoping all lead to inefficiencies that can no longer be tolerated. One approach to improve performance in these areas is to ensure that your decision-making processes lead to the best choices in an efficient manner every time.
Symptoms of poor decision practice are:
- Decisions take too long – some are discussed, shelved, discussed again a year later, with no resolution
- Meetings end with no clear direction forward – decisions aren’t made and actions not taken
- Firefighting dominates useful work – with some fires clearly caused by poor earlier decisions
- Projects championed by the strong dominate what is best for the organization
- Decisions come unstuck – you decide what to do next, everyone agrees, and then something different happens
- Decisions are made without using all of the available information, and you know it
- Risk is ignored or papered over – all decisions are based on uncertain information and thus are risky
It is estimated that half of all decisions fail. By “fail” I mean that some time after the “decision” was made, there is no evidence that any effort went into making it, i.e. nothing changed. The only evidence that any effort was put into the decision was the expenditure of time and money. A decision is a call to action and if no useful action occurs, the time and effort that went into making it was wasted, or worse.
The risk of making a poor decision, one that isn’t actionable and doesn’t stick, can be reduced through decision-management. Decision management is a process whose goal is to use the available uncertain, incomplete, conflicting, and evolving information to make the best possible choices with known expected risks, within time and resources available. Some of the basics of decision management are:
- Decision-management begins with assuring that everyone is addressing the same issue. This isn't a given. I have been in many meetings where everyone thought they were addressing the same topic, but weren't. Even if generally on topic, everyone is making different assumptions, and these must be made explicit. In watching the debates on bailouts and stimulus packages in early February 2009, it was clear that the issue was not well understood or agreed upon.
- To make a decision, there must be more than one alternative. If there is only one alternative and rejecting it is not an option, then the effort is for justification, not decision making. Decision-management helps develop alternative courses of action. The Wall Street bailout program of late 2008 seems to be an instance of rubber stamping a single alternative.
- A major component of decision-management is developing a clear picture of how you will know if you have a good decision - what should be measured – what is important and to whom is it important. Determining stakeholders’ values before diving in too deep is critical if you want a result that everyone can buy into. The lack of clarity with the stimulus packages being debated in Washington seems to hinge on a clear picture of what is to be measured (e.g. jobs created, home foreclosures saved, etc.). Good decision-management helps to develop measures, and at the same time, honors differences of opinion about which measures are important. This might be helpful in Washington.
- All decisions are based on estimates that are uncertain and yet we do a very poor job of taking this uncertainty into account. Past performance may be known (if it was documented), the present is obscured by its immediacy, and the future is just the best guess. In other words, very little is known with certainty. Adding uncertainty at the end of a decision making process with a “what-if” discussion is too late. Decision-management helps identify and capture the information uncertainty at the beginning of the process in an effort to produce a robust decision, one that is as immune as possible to the uncertainty.
- A key part of decision management is a process to fuse together all of the information in a timely and useful manner. Sometimes there is need for a fast decision, and sometimes, if the stakes are high and time is available, more effort needs to be placed on issue, alternative, measure, and estimate development. Good decision-management controls the time spent versus the depth of analysis needed to reach a decision.
In the 1990s most organizations began to review their processes to make them more efficient, agile and manageable. Making processes more efficient is mandatory in the new economy. However, many find that as they refine their processes, inefficiencies occur with poor decisions. In other words, efficient processes need efficient decision-management. Good decision-management is possible in most organizations and can alleviate many of the symptoms itemized at the beginning of this article.
Labels: decision making, decision making maturity, decision management, new economy
The term “risk management” has been around for a long time in financial, technical and medical practice. It is a term that is very loosely used and I want to dive in with a decision-centric view just to further muddy the waters.
A good place to begin is with a formal definition of “risk”. If you enter “risk definition” into Google you will get over twenty-five definitions; some are redundant, and there is little consistency. Regardless of the definition, risk traditionally amounts to answering three questions:
- What can go wrong? Some event.
- How likely is it to happen? The probability that the event happens, based on past statistics, an analytical model of the event, or a best guess based on experience.
- What are the consequences? Money, time, and possibly even lives are wasted.
The above set of questions describes event risk. They are the basis of technical risk evaluation. However, most managers are concerned with decision risk rather than event risk. For decision risk, the questions managers really want answered are:
- What can go wrong if I choose alternative X?
- How likely is it?
- What is the impact?
(Brian Seitz of Microsoft articulated these as cleanly as shown here)
Note that these are the same three questions as with event risk, just slightly tweaked. Regardless of whether it is focused on an event or a decision, risk management, by definition, is effort directed at some or all of these questions.
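The three questions boil down to an expected-value calculation: for each thing that can go wrong, multiply how likely it is by what it costs. A minimal sketch, with entirely hypothetical events and numbers:

```python
# Event risk as expected value. Each row answers the three questions:
# what can go wrong, how likely is it, and what are the consequences.
# All events, probabilities, and costs here are illustrative.
events = [
    ("supplier ships late",     0.30,  50_000),
    ("key engineer leaves",     0.10, 120_000),
    ("prototype fails testing", 0.05, 400_000),
]

for name, p, cost in events:
    print(f"{name}: expected loss ${p * cost:,.0f}")

total_expected_loss = sum(p * cost for _, p, cost in events)
print(f"total expected loss: ${total_expected_loss:,.0f}")
```

This is the event-risk arithmetic; decision risk, as discussed next, asks instead how likely it is that the uncertainty in these very numbers leads you to a poor choice.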
The first question raises the issue of how we know our choice was poor. In the book Why Decisions Fail, the definition of a poor decision is one that had no positive impact after two years. There are other measures of a failed decision. For example, our time is up and no satisfactory alternative has been developed. Or not everyone on the team agrees with the choice made and some team members feel disenfranchised. More often, the result of a poor choice is not known until much later. For example, if we choose a bad restaurant, we will not know until we have eaten there. Or if an engineer makes a poor decision on what material to use for a product, this may not be evident until the customers have used the product for a number of years.
One thing is consistent in this discussion: without uncertainty there is no risk. A corollary is that the more uncertainty, the higher the risk of making a poor decision. "What can go wrong?" is that one or more of the criteria are not satisfied. "How likely is it?" is directly dependent on our certainty during the alternative evaluation. We may know from past experience or data that the probability of something failing is XX%. But this probability may be compounded by other uncertainties such as lack of knowledge, disagreement amongst team members, or incomplete data. And "What is the impact?" is that the alternative chosen is no longer as good as it was originally thought.
In order to manage risk during decision making:
- You must address risk during decision making, not as a task to complete after you have selected an alternative. This is because decision risk is a measure of your lack of knowledge, as well as other uncertainties you need to consider when you make a decision.
- There is risk associated with every feature of every alternative. Traditional risk assessment separately addresses financial risk, performance risk, and schedule risk. But when including risk as a part of the decision-making process, you must integrate the uncertainty inherent in all the features at once, because it is the combination of them that drives your decision.
- You can get a good assessment of uncertainty, and thus risk, by fusing the evaluations of all the members of your team.
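One simple way to fuse team evaluations, sketched here with made-up ratings, is to pool each member's estimate and treat the spread as a rough measure of the team's uncertainty (real decision-management tools use more sophisticated fusion, such as Bayesian methods, but the idea is the same):

```python
import statistics

# Each (hypothetical) team member rates the chance, 0 to 1, that an
# alternative satisfies a criterion. The pooled mean is a fused
# estimate; the spread is a crude proxy for the team's uncertainty.
ratings = {"Ann": 0.8, "Ben": 0.6, "Cho": 0.7, "Dee": 0.9}

fused = statistics.mean(ratings.values())
spread = statistics.stdev(ratings.values())
print(f"fused estimate {fused:.2f}, disagreement {spread:.2f}")
```

A wide spread is itself information: it flags either genuine uncertainty or a disagreement the team should surface before choosing.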
Labels: decision making, decision risk, risk management
I am reading “Straight Choices: The Psychology of Decision Making” by Benjamin Newell, David Lagnado and David Shanks (Psychology Press 2006). It is a very good summary of decision making from the viewpoint of cognitive psychology. It makes some muddy topics very clear. However, it totally fails in Chapter 14, Group Decision Making. This isn’t the authors’ fault – all work on team or group decision-making misses the main points. I will get to these in a moment.
First, some background. I attempted to address this topic in Chapter 5 of “Making Robust Decisions”, cleverly titled “Teams Don’t Make Decisions, But…” The title reflects the problem with the topic. In business and technology there is always one person who signs off on a project to move it forward. This manager is the ultimate “decision maker”. But, if this person is good at what they do, for complex problems they have a team that is studying the problem and advising them about what choice to make. On this team, some of the people know about some of the information, they all have different fields of expertise and knowledge. For complex problems, no one person can know all about all of the important features of all the alternatives. Further, they bring a range of viewpoints about what is important.
So, is the manager the decision maker or the team? Depends how you draw the line around the term “decision”. If it is an event, then the manager is the decision maker. If decision-making is a process, then it is the team. I see decision-making as a process. The cognitive psychologists seem to see a decision as the event.
In "Straight Choices", the authors summarize research on group decision making. All of the studies they cite are for very simple problems, not for problems with distributed knowledge. In other words, the toy problems the psychologists have used to study teams do not reflect what happens in business and technology. They tend to focus on the "right" answer to simple problems with a known solution. This is why Chapter 14 was such a letdown.
From my reading, the two main reasons to use a team are:
- Obtain the best information when no one person can know all that is needed to be known
- Build stakeholder buy-in and accountability
These goals mean that you need to:
- Build an environment and a team strategy that fosters communication of the right information and buy-in
- Suppress the effect of differences in cognitive styles (“suppress” is not the right word, but you want to level the playing field so the alphas don’t dominate, the closers don’t stop things too soon, the wafflers don’t lead to team paralysis and so forth.)
- Guard against group think
- Help build a shared understanding
Of the six topics itemized above, Chapter 14 only addresses Group Think. It comes close to talking about a shared understanding when it discusses consensus, but consensus is not important for decision buy-in. (Quoting Margaret Thatcher: "To me, consensus seems to be the process of abandoning all beliefs, principles, values and policies. So it is something in which no one believes and to which no one objects.") The two main goals are never discussed. It seems the cognitive research has focused on getting the "right" answers to toy problems. Most real decisions have no right answer, so the psychologists are asking the wrong questions.
In the authors’ defense, they end the chapter with “Research on group decision making is both appealing and frustrating”. In spite of this frustration, decisions are made by teams every day and I believe the key to robust team decisions is in the six items listed above. These are discussed at length in Making Robust Decisions. I just wish there was more good research on each of them so you don’t have to take my word for it.
Labels: cognitive limitations in decision making, decision making, group decision making, team decision making
I just returned from an ASME (American Society of Mechanical Engineers) meeting in NYC. During my time there I discussed the concept of decision making maturity with three different groups and thought it worth writing about. Best to do it in context with my career.

When I was trained as an engineer, I focused on how components and assemblies were shaped, manufactured, and functioned. This element-centric view of the world is not at all unique to engineers. Businesses focus on documents (e.g., POs, receipts, memos, contracts, etc.).

By the 1980s I had become fascinated with the process of developing the elements. This process-centric view is recent. Sure, engineers have studied and developed chemical and manufacturing processes for years, but the concept of product design processes and business processes is recent. As evidence of this, in 1990, I wrote an engineering textbook about how to progress from customer need to produced product. I debated long and hard about what to title it. I finally landed on "The Mechanical Design Process", a title that proved to be right on the mark (note that its 4th edition will be out in January 2009). The use of the word "process" in the title was problematic because it was only beginning to be used to discuss product evolution.
In the early 1990s my research was about how to capture and manage the evolution of products, the rationale for form and function. This was process-centric, but it was not getting anywhere. In about 1995, it dawned on me that "design is the evolution of information, punctuated by decisions". That started me on a decision-centric view of the business and technical worlds.
My maturity from element through process to decision is being followed by industry. While in NYC I met with a PLM (Product Lifecycle Management) guru. PLM grew out of CAD (Computer Aided Design), which was all about parts and assemblies – element-centric. PLM is currently focused on the process, i.e. the lifecycle of products. I have been trying to sell decision-centric capabilities into PLM since 2001 with no success. The pushback has always been "we aren't ready for that yet" – and they weren't. Now there is the beginning of interest. The product development industry is maturing through process to realize that business and technology progress is punctuated by decisions, and it is the quality of those decisions that determines product and business success.

Further, when I talked about this decision-centric view of the world to industry audiences five years ago, they had no idea what I was talking about. Now I get good awareness and it is building. There is yet hope for "evolution punctuated by decisions".
Labels: decision making, decision making maturity, decision making process
Many problems revolve around choosing the best hypothesis - the one that best explains some observations. This is a cornerstone of science and medicine, and is the day-to-day reality of intelligence agencies. Choosing the best hypothesis is a form of decision making. But there are some differences from traditional decision making problems, as I discussed in an earlier blog.
I want to compare two approaches to hypothesis-driven thinking. The first is in a paper by Jeanne Liedtka, of the University of Virginia, Darden School of Business, "Using Hypothesis-Driven Thinking in Strategy Consulting". The second is Richards Heuer's work for the CIA called Analysis of Competing Hypotheses (ACH) (Chapter 8 of his book "The Psychology of Intelligence Analysis").

Liedtka tunes her process to consulting and states: "The traditional decision-making process that we are most familiar with in business involves a linear method of thinking in which the problem is defined, a comprehensive range of alternative solutions is generated and evaluated, and the optimal one is selected. In contrast, the hypothesis driven approach, associated with the scientific method selects the most promising hypothetical solution early in the process and seeks to confirm or refute it." She goes on to outline the following steps:
- Define the problem
- Gather preliminary data that allow construction of initial hypotheses about the causes of the problem
- Develop a set of competing hypotheses about the causes
- Select the most promising
- Identify analyses to study these
- Collect data and test the hypotheses
- Reformulate as needed
Admittedly, I have paraphrased and shortened the description of these steps, but I have not lost what the paper says to do. When I read this, I get stuck on items 3 and 4. In step 3, what is the difference between "developing a set" and the linear method decried in the quotation? In step 4, how should you "select the most promising"?
Heuer on the other hand suggests that you should:
- Identify possible hypotheses
- Make a list of significant evidence for/against
- Prepare a Hypothesis X Evidence matrix
- Refine matrix. Delete evidence and arguments that have no diagnosticity
- Draw tentative conclusions about relative likelihoods. Try to disprove hypotheses
- Analyze sensitivity to critical evidential items
- Report conclusions. Identify milestones for future observations
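Heuer's steps can be sketched as a small hypothesis-by-evidence matrix. The hypotheses, evidence, and scores below are hypothetical; +1 marks evidence consistent with a hypothesis, -1 inconsistent, and 0 no diagnosticity:

```python
# Minimal ACH sketch: rows are evidence, columns are hypotheses.
hypotheses = ["H1", "H2", "H3"]
matrix = {
    "evidence a": [+1, -1, +1],
    "evidence b": [ 0,  0,  0],  # same score everywhere: no diagnosticity
    "evidence c": [-1, -1, +1],
}

# Refine the matrix: drop evidence that does not discriminate.
refined = {e: row for e, row in matrix.items() if len(set(row)) > 1}

# Heuer's key move: rank by inconsistencies (try to disprove),
# rather than by how much evidence "confirms" each hypothesis.
inconsistencies = {
    h: sum(1 for row in refined.values() if row[i] == -1)
    for i, h in enumerate(hypotheses)
}
survivor = min(hypotheses, key=inconsistencies.get)
print(inconsistencies, survivor)
```

Here "evidence b" is deleted in the refinement step, and H3 survives because nothing contradicts it; note that the survivor is the least-disproved hypothesis, not a proven one, which matches the tentative tone of Heuer's conclusions step.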
These two processes are very similar. Where Liedtka, in the quotation, beats up on developing multiple hypotheses, her method encourages more than one to be tested at a time. Heuer, on the other hand, begins with "identify possible hypotheses". He never states that this list needs to be complete, just that there needs to be more than one.
My research into how people develop products has shown that designers who develop only a single alternative for a problem usually get into trouble, because they have nothing to compare their efforts to and no secondary course of action if the first idea hits a roadblock. When hypothesis driven, it is somewhat the same. You can develop a single hypothesis and test it. If it fails, you can develop a second one, maybe even using some of the information developed during the first effort. However, this approach is fragile, as you may be headed in the wrong direction.
A better approach is more like what Heuer suggests. Develop multiple hypotheses and then use measures (i.e. evidence) to try to disprove them. Heuer never says that the list of hypotheses needs to be extensive or complete, just that there must be more than one.
One additional thought: as evidence is generally uncertain, you must take great care in treating any single piece of it as sufficient to eliminate a hypothesis. This is why multiple pieces of evidence are generally best. The good news is that this is the way we naturally operate. For example, I was approached last week by a man with a business idea close to something I was already interested in. My initial hypotheses about him (in retrospect) were: 1) he had a good idea, was credible, and I should invest time with him, or 2) he was a flake and I should waste little time on him. I began with hypothesis #1 and began to collect evidence by talking with him. After a reasonable time, I decided he passed the "sniff test" and modified my hypotheses so I could dig deeper into the business proposition. However, I have not yet totally eliminated hypothesis #2.
Labels: ACH, decision making, hypothesis driven approach, hypothesis driven decisions
I just gave a presentation at the IG's (Inspector General's) office. They are concerned about the risks involved when they make decisions. What they don't realize is that there are two kinds of risk they have to worry about: event risk and decision risk.
Event risk is what most people mean when they talk about "risk". It is the expected value of an event: its undesirable consequences and the probability of its occurrence. Determining risk amounts to answering:
- What can go wrong? –An event occurs that may have bad consequences
- How likely is it? – Probability dependent on past statistics and model results
- What are the consequences? –Money, time and possibly lives are wasted
NASA and others have entire handbooks on assessing event risk.
During decision-making, risks are inherent in uncertain knowledge, information, and models. Uncertainty creates the risk that a poor decision will be made. This doesn't say that the alternative chosen will fail – that is event risk. Drawing an analogy to event risk, decision risk focuses on:
- What can go wrong? – A poor choice is made
- How likely is it? – Probability dependent on uncertain knowledge, and the fusion of the team’s interpretation of information and models
- What are the consequences? –Money, time and possibly lives are wasted
One problem the IG wants to address is selecting new employees. Clearly the risk here is a decision risk - they want to ensure that they don't make a poor hiring choice. They also want to manage their portfolio of projects. Here the risk that the project can go wrong affects the risk that they make a poor decision. The higher the event risk associated with an option, the higher the decision risk may be.
Both types of risk are based on probabilities. However, traditional probability methods (often called frequentist methods) are good for event risk but are not capable of managing knowledge uncertainty. Rather, Bayesian probability methods are specifically designed to integrate accumulating, uncertain, incomplete, and conflicting knowledge.
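As a minimal illustration of the Bayesian idea, a beta-binomial model folds accumulating evidence into a probability estimate one observation at a time. The prior and observations below are made up; the point is that the estimate updates smoothly as knowledge accumulates instead of being fixed up front:

```python
# Beta-binomial updating: start from a prior belief about a failure
# rate and fold in each new observation as it arrives.
alpha, beta = 1.0, 1.0          # uniform prior: no knowledge yet

observations = [0, 0, 1, 0, 0]  # hypothetical: 1 = failure, 0 = success
for failed in observations:
    alpha += failed
    beta += 1 - failed

posterior_mean = alpha / (alpha + beta)  # updated failure-rate estimate
print(f"estimated failure rate: {posterior_mean:.3f}")
```

With one failure in five trials the estimate lands at 2/7, not the raw frequency 1/5, because the prior still carries weight; as evidence accumulates, the data dominates and the prior fades.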
Can I convince the IG folks of this? We will see.
Labels: decision making, decision risk, event risk
I have begun to work on intelligence agency decision-making and, in doing so, realized there were two different types. I have never seen this decision-making dichotomy written up anywhere and can't find it in the literature (any literature).
The first decision-making formulation is the standard process where the goal is to choose the best alternative from a list. This is facilitated by comparing each alternative to a set of criteria. Each alternative is measured relative to each criterion, and its success in meeting the criteria is combined (either formally or informally) to find the overall satisfaction with the alternative. For example, say you are buying a new car. One criterion is that the car should accelerate from 0 to 60 in some fast time, and a second is that it should get better than xx miles/gallon. Information with which to evaluate each of these may come from different sources. I may actually measure the acceleration of one car but rely on the test figure in a magazine for another. In this way the evaluations for acceleration are independent from one alternative to the next. The mathematics for this is referred to as Multi-Attribute Utility Theory (MAUT). Methods based on MAUT that support decision-making are the decision matrix, Pugh's method, Expert Choice, and Accord.
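The decision-matrix flavor of this first formulation can be sketched as a weighted sum of criterion scores. The alternatives, scores, and weights below are hypothetical, and real MAUT work uses calibrated utility functions rather than raw 0-10 scores:

```python
# Weighted decision matrix: score each alternative on each criterion
# (0-10 here), weight the criteria, and sum. Numbers are illustrative.
weights = {"acceleration": 0.6, "mpg": 0.4}
alternatives = {
    "car A": {"acceleration": 9, "mpg": 4},
    "car B": {"acceleration": 6, "mpg": 8},
}

totals = {
    name: sum(weights[c] * score for c, score in scores.items())
    for name, scores in alternatives.items()
}
best = max(totals, key=totals.get)
print(totals, best)
```

Note the independence the paragraph describes: changing car A's mpg score touches only car A's row, never car B's. It is exactly this independence that breaks down in the hypothesis-evaluation formulation below.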
The second formulation is what is seen in medical and system diagnosis, and in business and government intelligence. This has not been formalized, as best I can tell. Here the goal is to choose the most likely hypothesis. A hypothesis is like an alternative. The big difference here is that, as each piece of evidence is gathered and analyzed, it either supports, denies, or has no import for each of the hypotheses. This differs from the first formulation in that a single evaluation potentially adds information to all the hypotheses (instead of being independent). For example, suppose we want to determine Iran's nuclear intentions. Hypotheses include: 1) generate power only, 2) develop a nuclear weapon for ground delivery, or 3) develop a nuclear weapon for air delivery. A piece of evidence, say a satellite photo of activity at an airbase, contributes information to all three of the hypotheses.
The standard MAUT formulation does not support this second type of decision-making, and neither do the tools listed above. An aside: the medical profession uses the term "evidence based" to mean making clinical data available to practitioners in a usable manner, so that the doctor trying to figure out why your finger is rotting off has all the best clinical data on which to base her diagnosis. This does not really support choosing which of the hypothetical diseases you actually have, but supplies information with which to evaluate the evidence.
Situations like what I have just described can be modeled with methods like Bayes Nets, but there is no known method for facilitating the process as with the decision matrix, Pugh's method, and Accord. I have spent months working on this and have found little in the literature. Do you have any leads?
Labels: decision making, evidence based decisions, MAUT
I track the term "decision making". Like most terms, the more you think about it, the more you realize that it is used rather loosely. The word "decision" itself is really a noun but is often used in its verb form. As a noun, it is the result of a process (to be discussed in a minute) that is a call to action. I have decided to marry Adele. I have decided to walk through the doorway. I have decided that, indeed, grass is green.
When people use the term "decision" they often mean the process of making a decision. I am making a decision about which restaurant to eat at. The team is working on that decision. We often qualify the activity of choosing a course of action by using the term "decision-making".
Decision-making implies that a choice is to be made - i.e. there must be two or more alternative courses of action. However, often the process is about justification for a single course of action, and the activity is focused on convincing someone that this is the right course to take (think politics and the decision making processes used in attacking the Bay of Pigs and Iraq).
I came across another curious use of "decision-making". I found a web site titled "Test Your Decision Making" that has you do a short and very simple exercise to guess the rule that governs the sequence 2, 4, 6. This is on the Yale University web site. The answer is simple (it is an increasing sequence), and the explanation centers on "To test a hypothesis, try to disprove it." I frankly found the discussion confusing, but at least it was about testing hypotheses (potential courses of action) in order to choose the best. However, the point about testing each one until I found a counterexample seems daunting for anything other than simple, toy problems like the one presented.
Labels: decision making
A nice summary of how results can be anchored just appeared on the web. This human behavior has great implications for business and technical decisions.
Anchoring sets a biased context for estimation. It is a cognitive limitation that affects the quality of our decisions. Anchoring occurs, for example, when a manager asks for an estimate with something like: "I don’t see how we could commit more than $10,000 to this." $10,000 now becomes the anchor point. This stated amount biases all the following estimates that are generated.
Anchoring can happen in subtle ways. Let's say you are bidding on a project and you have been led to believe that the customer has a ceiling of, say, $10,000. You are now anchored to this value and will make decisions to try to force your project to fit it. This only seems logical, but it has interesting effects. First, the amount of work you propose will be descoped to fit the budget. But people are always optimistic, so, if you get the job, you will still have more work than money. Then begins the dance of working more for less money (overtime), further descoping, or asking for more money. This dance is further discussed on pages 82-85 in Making Robust Decisions.
To demonstrate anchoring, I gave a group of people a simple estimation problem - asking them how long it would take them to wash a list of dishes. I described the dishes in detail, how dirty they were, and what "wash" meant. The mean estimate was 32 min with a standard deviation of 10 min. I then asked another group of subjects to estimate how long it would take to clean the dishes exactly as before, but this time I added "Your partner has told you that the kitchen needs to be clean in 15 minutes." This anchoring resulted in a new distribution with a mean of 18 minutes and a standard deviation of 6 min.
Think of the implications for decision making. All decisions are based on best estimates of past performance, assessments of the current situation, and visions of the future. Every one of these can be clouded by anchoring. You can't totally avoid it. However, you can be aware of how you word your need for information and consciously try not to anchor the estimates on which decisions will be based.
Labels: anchoring, cognitive limitations in decision making, decision making, estimates