Many problems revolve around choosing the best hypothesis - the one that best explains some observations. This is a cornerstone of science and medicine, and is the day-to-day reality of intelligence agencies. Choosing the best hypothesis is a form of decision making, but there are some differences from traditional decision-making problems, as I discussed in an earlier blog post.
I want to compare two approaches to hypothesis-driven thinking. The first is in a paper by Jeanne Liedtka of the University of Virginia's Darden School of Business, "Using Hypothesis-Driven Thinking in Strategy Consulting". The second is Richards Heuer's work for the CIA called Analysis of Competing Hypotheses (ACH), Chapter 8 of his book "The Psychology of Intelligence Analysis".
Liedtka tunes her process to consulting and states, "The traditional decision-making process that we are most familiar with in business involves a linear method of thinking in which the problem is defined, a comprehensive range of alternative solutions is generated and evaluated, and the optimal one is selected. In contrast, the hypothesis-driven approach, associated with the scientific method, selects the most promising hypothetical solution early in the process and seeks to confirm or refute it." She goes on to outline the following steps:
1. Define the problem
2. Gather preliminary data that allow construction of initial hypotheses about the causes of the problem
3. Develop a set of competing hypotheses about the causes
4. Select the most promising
5. Identify analyses to study these
6. Collect data and test the hypotheses
7. Reformulate as needed
Admittedly, I have paraphrased and shortened the description of these steps, but I have not lost what the paper says to do. When I read this, I get stuck on steps 3 and 4. In step 3, what is the difference between "developing a set" and the linear method decried in the quotation? In step 4, how should you "select the most promising"?
Heuer, on the other hand, suggests that you should:
- Identify possible hypotheses
- Make a list of significant evidence for/against
- Prepare a Hypothesis X Evidence matrix (a sketch follows this list)
- Refine matrix. Delete evidence and arguments that have no diagnosticity
- Draw tentative conclusions about relative likelihoods. Try to disprove hypotheses
- Analyze sensitivity to critical evidential items
- Report conclusions. Identify milestones for future observations
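To make Heuer's matrix concrete, here is a minimal sketch in Python. The hypotheses, evidence items, and consistency scores are all hypothetical, chosen only to show how diagnosticity and disconfirmation work:

```python
# A minimal, hypothetical Hypothesis X Evidence matrix.
# Each cell holds a consistency score: -1 inconsistent, 0 neutral, +1 consistent.
hypotheses = ["H1", "H2", "H3"]
evidence = {
    "E1": [-1, +1, +1],
    "E2": [ 0, -1, +1],
    "E3": [ 0,  0,  0],  # identical across hypotheses, so it has no diagnosticity
}

# Step 4: delete evidence that has no diagnosticity.
diagnostic = {e: s for e, s in evidence.items() if len(set(s)) > 1}

# Step 5: since ACH tries to disprove rather than confirm, rank hypotheses
# by how much of the remaining evidence argues against them.
for i, h in enumerate(hypotheses):
    against = sum(1 for scores in diagnostic.values() if scores[i] < 0)
    print(f"{h}: {against} piece(s) of evidence against")
```

The hypothesis with the least evidence against it survives as the tentative conclusion, which is the spirit of step 5.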
These two processes are very similar. While Liedtka's quotation criticizes generating a comprehensive range of alternatives, her method still encourages testing more than one hypothesis at a time. Heuer, on the other hand, begins with "identify possible hypotheses." He never states that this list needs to be complete, just that there needs to be more than one.
My research into how people develop products has shown that designers who develop only a single alternative for a problem usually get into trouble, because they have nothing to compare their efforts to and no secondary course of action if the first idea hits a roadblock. When hypothesis driven, it is somewhat the same. You can develop a single hypothesis and test it. If it fails, you can develop a second one, maybe even using some of the information developed during the first effort. However, this is fragile, as you may be headed in the wrong direction.
A better approach is more like what Heuer suggests. Develop multiple hypotheses and then use measures (i.e., evidence) to try to disprove them. Heuer never says that the list of hypotheses needs to be extensive or complete, just that there must be more than one.
One additional thought: since evidence is generally uncertain, you must take great care in treating any single piece of it as sufficient to eliminate a hypothesis. This is why multiple pieces of evidence are generally best. The good news is that this is the way we naturally operate. For example, I was approached last week by a man with a business idea close to something I was already interested in. My initial hypotheses about him (in retrospect) were: 1) he had a good idea, was credible, and I should invest time with him, and 2) he was a flake and I should waste little time with him. I began with hypothesis #1 and began to collect evidence by talking with him. After a reasonable time, I decided he passed the "sniff test" and modified my hypotheses so I could dig deeper into the business proposition. However, I have not yet totally eliminated hypothesis #2.
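Here is a quick Bayesian sketch of that reasoning; the prior beliefs and likelihoods are made-up numbers of my own, purely for illustration:

```python
# Two hypothetical hypotheses about the man, with even prior beliefs.
priors = {"credible": 0.5, "flake": 0.5}
# P(passes the "sniff test" | hypothesis): the test is imperfect,
# so even a flake passes it some of the time.
likelihood = {"credible": 0.8, "flake": 0.3}

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 2) for h, p in unnormalized.items()}
print(posterior)  # {'credible': 0.73, 'flake': 0.27}
```

Passing the test shifts belief toward hypothesis #1, but because the evidence is uncertain it properly leaves hypothesis #2 alive.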
Labels: ACH, decision making, hypothesis driven approach, hypothesis driven decisions
I just gave a presentation at the IG's (Inspector General's) office. They are concerned about the risks involved when they make decisions. What they don't realize is that there are two kinds of risk they have to worry about: event risk and decision risk.
Event risk is what most people mean when they talk about "risk". It is the expected value of an event: its undesirable consequences weighted by the probability of its occurrence. Determining event risk amounts to answering:
- What can go wrong? – An event occurs that may have bad consequences
- How likely is it? – Probability dependent on past statistics and model results
- What are the consequences? – Money, time and possibly lives are wasted
NASA and others have entire handbooks on assessing event risk.
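As a toy illustration, event risk is just an expected value; the probability and cost below are hypothetical numbers:

```python
# Event risk = probability of the event x its undesirable consequence.
p_event = 0.05            # hypothetical probability the event occurs
consequence = 2_000_000   # hypothetical cost in dollars if it does
event_risk = p_event * consequence
print(f"Expected loss: ${event_risk:,.0f}")  # Expected loss: $100,000
```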
During decision-making, risks are inherent in uncertain knowledge, information and models. Uncertainty creates the risk that a poor decision will be made. This doesn't mean that the alternative chosen will fail; that is event risk. Drawing an analogy to event risk, decision risk focuses on:
- What can go wrong? – A poor choice is made
- How likely is it? – Probability dependent on uncertain knowledge, and the fusion of the team's interpretation of information and models
- What are the consequences? – Money, time and possibly lives are wasted
One problem the IG wants to address is selecting new employees. Clearly the risk here is a decision risk - they want to ensure that they don't make a poor hiring choice. They also want to manage their portfolio of projects. Here the risk that the project can go wrong affects the risk that they make a poor decision. The higher the event risk associated with an option, the higher the decision risk may be.
Both types of risk are based on probabilities. However, traditional probability methods (often called frequentist methods) are good for event risk but are not capable of managing knowledge uncertainty. Bayesian probability methods, on the other hand, are specifically designed to integrate accumulating, uncertain, incomplete and conflicting knowledge.
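As a minimal sketch of what I mean, here is Bayesian updating with a Beta prior as evidence accumulates; the signals and their coding are hypothetical:

```python
# Track belief in "this candidate will work out" as favorable (1) and
# unfavorable (0) signals arrive. Start from an uninformative Beta(1, 1) prior.
alpha, beta = 1.0, 1.0
signals = [1, 1, 0, 1]  # hypothetical, possibly conflicting pieces of evidence

for s in signals:
    alpha += s
    beta += 1 - s
    print(f"after signal {s}: estimated probability = {alpha / (alpha + beta):.2f}")
# Conflicting evidence is absorbed gradually rather than forcing an
# all-or-nothing conclusion, which is what knowledge uncertainty demands.
```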
Can I convince the IG folks of this? We will see.
Labels: decision making, decision risk, event risk
I have begun to work on intelligence agency decision-making and, in doing so, realized there were two different types. I have never seen this decision-making dichotomy written up anywhere and can't find it in the literature (any literature).
The first decision-making formulation is the standard process where the goal is to choose the best alternative from a list. This is facilitated by comparing each alternative to a set of criteria. Each alternative is measured relative to each criterion, and its success in meeting the criteria is combined (either formally or informally) to find the overall satisfaction with the alternative. For example, say you are buying a new car. One criterion is that the car should accelerate from 0 to 60 in some fast time, and a second is that it should get better than xx miles/gallon. Information with which to evaluate each of these may come from different sources. I may actually measure the acceleration of one car but rely on the test figure in a magazine for another. In this way the evaluations for acceleration are independent from one alternative to the next. The mathematics for this is referred to as Multi-Attribute Utility Theory (MAUT). Methods based on MAUT that support decision-making are the decision matrix, Pugh's method, Expert Choice and Accord.
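Here is a minimal sketch of the MAUT-style weighted sum these tools build on; the weights and scores below are hypothetical:

```python
# Each alternative is scored against each criterion independently,
# then the scores are combined with criterion weights.
weights = {"0-60 acceleration": 0.6, "fuel economy": 0.4}  # weights sum to 1
alternatives = {
    "Car A": {"0-60 acceleration": 0.9, "fuel economy": 0.5},
    "Car B": {"0-60 acceleration": 0.6, "fuel economy": 0.8},
}

for name, scores in alternatives.items():
    overall = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: overall satisfaction = {overall:.2f}")
# Car A: 0.74, Car B: 0.68 -> Car A is the better alternative under these weights
```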
The second formulation is what is seen in medical and system diagnosis, and in business and government intelligence. This has not been formalized as best I can tell. Here the goal is to choose the most likely hypothesis. A hypothesis is like an alternative. The big difference is that, as each piece of evidence is gathered and analyzed, it either supports, denies or has no import on each of the hypotheses. This differs from the first formulation in that a single evaluation potentially adds information to all the hypotheses (instead of the evaluations being independent). For example, suppose we want to determine Iran's nuclear intentions. Hypotheses include: 1) generate power only, 2) develop a nuclear weapon for ground delivery, or 3) develop a nuclear weapon for air delivery. A piece of evidence, say a satellite photo of activity at an airbase, contributes information to all three of the hypotheses.
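A minimal sketch of that coupling, with made-up likelihoods (this is my illustration, not a formalized method):

```python
# Start with no preference among the three hypotheses.
priors = {"power only": 1/3, "ground weapon": 1/3, "air weapon": 1/3}
# P(satellite photo shows airbase activity | hypothesis), hypothetical values.
likelihood = {"power only": 0.1, "ground weapon": 0.3, "air weapon": 0.7}

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: round(p / total, 2) for h, p in unnormalized.items()}
print(posterior)  # {'power only': 0.09, 'ground weapon': 0.27, 'air weapon': 0.64}
```

Note that the single observation changed the standing of every hypothesis at once; in a decision matrix, a measurement of one alternative says nothing about the others.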
The standard MAUT formulation does not support this second type of decision-making, and neither do the tools listed above. As an aside, the medical profession uses the term "evidence based" to mean making clinical data available to practitioners in a usable manner, so that the doctor trying to figure out why your finger is rotting off has all the best clinical data on which to base her diagnosis. This does not really support choosing which of the hypothetical diseases you actually have, but supplies information with which to evaluate the evidence.
Situations like the one I have just described can be modeled with methods like Bayes Nets, but there is no known method for facilitating the process the way the decision matrix, Pugh's method and Accord do for the first formulation. I have spent months working on this and have found little in the literature. Do you have any leads?
Labels: decision making, evidence based decisions, MAUT
I track the term "decision making". Like most terms, the more you think about it, the more you realize that it is used rather loosely. The word "decision" itself is a noun, but it is often used in a verb-like sense. As a noun, it is the result of a process (to be discussed in a minute) that is a call to action. I have decided to marry Adele. I have decided to walk through the doorway. I have decided that, indeed, grass is green.
When people use the term "decision" they often mean the process of making a decision. I am making a decision about which restaurant to eat at. The team is working on that decision. We often qualify the activity of choosing a course of action by using the term "decision-making".
Decision-making implies that a choice is to be made - i.e. there must be two or more alternative courses of action. However, often the process is about justification for a single course of action, and the activity is focused on convincing someone that this is the right course to take (think politics, and the decision-making processes behind the Bay of Pigs invasion and Iraq).
I came across another curious use of "decision-making". I found a web site on the Yale University site titled "Test Your Decision Making" that has you do a short and very simple exercise: guess the rule that governs the sequence 2, 4, 6. The answer is simple (it is simply an ascending sequence), and the explanation centers on "To test a hypothesis, try to disprove it." I frankly found the discussion confusing, but at least it was about testing hypotheses (potential courses of action) in order to choose the best. However, the point about testing each one until I find a counterexample seems daunting for anything other than simple, toy problems like the one presented.
Labels: decision making
The rebuilding of the World Trade Center (WTC) is behind schedule and over budget. According to the executive director of the Port Authority, Christopher Ward, "The schedule and cost estimates for the rebuilding effort that have been communicated to the public are not realistic" (from an AP article). I can only say "Duh!"
All decisions are based on estimates of the murky future. In most public works projects, honest estimates would never get them funded. It is much easier to get something approved with optimistic time and cost estimates and then, once it's started, ask for more. However, even if you are not playing politics, making estimates is hard work that is not well recognized for its impact on organizations.
Let's get this straight: even if you are trying to be as accurate and honest as possible, making an estimate is a highly uncertain undertaking. And most decisions are based on many interdependent estimates. In a simple experiment, I asked hundreds of people to estimate the length of time needed to clean some dishes. I gave them a detailed list of the dishes and a photo of them, the sink, and the cleaning materials. Depending on how I worded the question, the average time estimated was anywhere from 17 to 32 minutes, and the standard deviation was as high as 10 minutes. In other words, asking for an estimate for this simple, daily task isn't a whole lot better than using a random number generator set to give an estimate in the range of 10 to 40 minutes.
If this simple estimate is so bad, think of what happens with complexity, or for tasks that have not been done before.
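To see why, here is a Monte Carlo sketch of a plan that sums ten such estimates; the distribution is hypothetical, loosely based on the 10 to 40 minute spread above:

```python
import random

# A hypothetical project made of ten tasks, each as uncertain as the dishes.
def project_duration():
    return sum(random.uniform(10, 40) for _ in range(10))

samples = sorted(project_duration() for _ in range(10_000))
print(f"median:          {samples[5_000]:.0f} min")
print(f"90th percentile: {samples[9_000]:.0f} min")
# A single-point "best case" sum of 100 minutes badly understates both.
```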
What is to be done, then? The only solution I know is to use methods that account for uncertainty. For planning, the old PERT system made an effort at this (a much improved method is Critical Chain). For decision making, I have been pushing people for years to include uncertainty in their decision-making process.
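PERT's classic three-point estimate is one simple way to bake uncertainty into a single task estimate (the three inputs below are hypothetical):

```python
# Optimistic, most likely, and pessimistic estimates for one task, in minutes.
optimistic, most_likely, pessimistic = 10, 20, 45

expected = (optimistic + 4 * most_likely + pessimistic) / 6
std_dev = (pessimistic - optimistic) / 6
print(f"expected: {expected:.1f} min, std dev: {std_dev:.1f} min")
# expected: 22.5 min, std dev: 5.8 min
```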
In decision making, it is not only the estimates that are uncertain, it is the targets that you are trying to achieve that are uncertain. More on this in a later blog.
Back to the World Trade Center: I believe that if there had been a non-political effort at making decisions, and uncertainty had been included in the estimates, the plan would have been far less grandiose. However, the WTC should be grandiose, shouldn't it? This leads me to believe that honest estimates may not be the best for all situations. I would hate to run or work for a business with that in mind.
Labels: decision making estimates, decision making process, uncertainty, world trade center