I just returned from facilitating a one-day project review for a government agency. The project was a three-year effort by a major aerospace contractor; it cost over $13M and was designed to develop a new technology. I knew nothing about the technology (still don't), but I was hired by the agency to make the review process rational and useful for future projects. I will share the steps I used with you.

Step 1: Develop Project Targets

To be useful at all, this must happen before the project begins. I use the term "targets" in the broadest sense, as will be discussed. In this case, the project was quite technical, and specific targets for the measurable behavior of the resulting system were established about three years ago. I was not part of that process, but if I had been, we could have avoided some later review problems (explained below).
There are three important parts to this step.

What Features?
What is important to measure should be addressed up front. Each feature measured should help answer the question "How will we know if the project is successful?" This means you need to understand from the beginning what success means in terms of functional performance, cost, and time. Reflect on some recent projects or government actions (e.g. the invasion of Iraq) and see if this question can be clearly answered. For the project I facilitated, there were seven clearly defined performance measures of success.

How Measurable?
Not all features are measurable. In 1883 Lord Kelvin said: "When you cannot measure it… your knowledge is of a meager and unsatisfactory kind." In 1956 Frank Knight (who taught four future Nobel Memorial Prize winners in economics) amended Lord Kelvin with: "Oh well, if you cannot measure, measure anyhow." In the book Making Robust Decisions I spend much time discussing the differences between quantitative (Kelvinesque) and qualitative (Knightlike) criteria. For the project at hand, the criteria were all quantitative, with numerical targets set for each. But, as we will see in a moment, they were not all realistic.

What Are Both the Target and the Threshold?
It is always best to set two targets – an ideal target and a threshold. The ideal target is the level you would like to achieve if everything goes right, and the threshold is the level you should still achieve even if little goes right. The target must be realistic: if it is set beyond what has been previously achieved, then there had better be good evidence for the expected improvement. Setting two "targets" is a good idea whether the criterion is measurable or not. Even if a target is qualitative (Yes or No), what "Yes" means and what "No" means should be defined ahead of time. For the facilitation job, only a single target was set for each of the seven measures. Some of these targets proved unrealistic and could not be achieved in spite of the contractor's best effort. If both a threshold and a target had been set, then at least the level of success could have been evaluated.
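To make the two-level idea concrete, here is a minimal sketch in Python, with made-up numbers; the linear 0-to-1 scale between threshold and target is my own assumption, not part of the agency's process.

```python
def score(value, threshold, target):
    """Score a measured result: 0.0 at the threshold, 1.0 at the
    ideal target, linear in between, clamped outside that range."""
    if target == threshold:
        raise ValueError("target and threshold must differ")
    s = (value - threshold) / (target - threshold)
    return max(0.0, min(1.0, s))

# Example: threshold = 100 units, ideal target = 150 units.
# The contractor achieved 130 units: a partial but real success.
print(score(130, threshold=100, target=150))  # 0.6
```

With only a single target, the same 130 units reads as a flat failure; the two-level scale is what lets you report a degree of success.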
Project reviews can happen at any time in the project. The review I facilitated was a final review; all work was finished. Besides deciding when to hold reviews, it is important to identify who is going to do the review. In this case, the reviewers were all Subject Matter Experts (SMEs) flown in from around the country. They represented potential customers for the technology. Thus, the review served not only as an opportunity to measure how well the contractor completed the work, but also to identify future uses for the technology. Finally, the question of where to hold the review is becoming a more important issue. With the increasing capability of virtual meetings and web conferencing, and with rising travel costs, sometimes "virtual" meetings are best.

Step 3: Measure How Well and How Certain
A quandary facing the government agency that sponsored this project was how to evaluate it. The problem was that some of the targets were not reached (and no thresholds were set). So, on the face of it, the contractor had failed. But they had advanced the state-of-the-art significantly; some of the targets were simply not realistic. To accommodate this, the sponsor decided to treat the evaluations as qualitative, and we made use of Belief Maps to help with the assessment.
A Belief Map is a simple grid on which an evaluator puts a single dot for each criterion to reflect how well the effort met the criterion versus how certain they are of their assessment. In this project, all such assessments were made on paper, independently, by each individual SME; the sheets were collected and the data input into the Accord™ software for analysis. A sample sheet for one of the criteria is shown. It was blank (no dots on the Belief Map) when handed out. Here it is shown with the dots as submitted by seven SMEs.
The use of Belief Maps can help in managing the realities of project reviews:
- Review material is often incomplete, and may present evolving and uncertain information
- Reviewers have differing levels of understanding about the material
- Some reviewers are well prepared, others are not
- Some reviewers have dominant personalities
- Some reviewers leave early or conduct other business
Step 4: Analyze Results to Guide Future Work

Much can be learned just by looking at the completed Belief Maps. If the reviewers' dots are widely distributed vertically, then there is poor agreement, caused by some or all of the following:
- The material presented to them is wanting
- Progress relative to the measure is insufficient to judge consistently
- The criterion is not clearly defined, or it measures multiple features
If many of the reviewers' dots are to the left, then possibly:
- They are not sufficiently expert to judge the material
- The review material is not clear or the criterion poorly defined
For the project I facilitated, the reviewers were experts, the criteria were clearly defined, and the dots were well grouped vertically. Since the reviewers placed their dots on the sheets independently, what we captured is a good indicator of the project's quality. Horizontally, the dots ranged across the scale, with some SMEs sufficiently expert to give confidence in the results.
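For readers who want to see the arithmetic, here is a minimal sketch of analyzing the dots for one criterion. The (certainty, satisfaction) coordinates, field layout, and spread cutoff are illustrative assumptions on my part; the Accord software does considerably more than this.

```python
# Each dot is one reviewer's mark for a single criterion:
# x = how certain the reviewer is (0 = left, 1 = right),
# y = how well the criterion is met (0 = bottom, 1 = top).
from statistics import mean, stdev

dots = [(0.9, 0.75), (0.6, 0.70), (0.8, 0.80), (0.4, 0.65),
        (0.7, 0.72), (0.9, 0.78), (0.5, 0.68)]  # seven SMEs

xs, ys = zip(*dots)
print(f"mean satisfaction: {mean(ys):.2f}")
print(f"vertical spread:   {stdev(ys):.2f}")  # disagreement signal
print(f"mean certainty:    {mean(xs):.2f}")   # expertise signal

# A wide vertical spread flags the problems listed above (poor material,
# unclear criterion, ...); dots bunched to the left flag low certainty.
if stdev(ys) > 0.15:  # illustrative cutoff
    print("Reviewers disagree: revisit the material or the criterion.")
```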
Beyond what was done here, Accord can develop many useful statistics from the raw data. Further, it can fuse qualitative and quantitative evaluations and can collect data from a distributed team via the net.

Step 5: Finalize Review
Finally, the material was documented for review by all concerned and for use in future, similar projects. In this particular case, the final report was quite detailed due to the size of the investment and the use of a professional facilitator.

Summary
Hopefully, these five steps give you some good ideas for project evaluation. They can make reviews useful exercises rather than a hoop to jump through. The use of two-part targets and Belief Maps can give a much richer window on progress.
If I left something out or you have experiences that can be used to refine these steps, please share them.
Labels: Belief Maps, Project evaluation, Project Review
Organizations use many different strategies to make decisions. The following is a list of strategies seen in practice. They all lead to decisions, but they all run the risk of not making good use of the available people and information. Have you ever experienced these?

1. Decision by Running Out of Time
This is the most common form of decision-making. There may be some effort to develop criteria and alternatives, but often time runs out before there is any effort to ensure that a robust decision is made.

2. Decision by Chaos
The president of the company says, "I want our new product at the Atlanta trade show in two weeks." The show is in two weeks. There is no rational way to prepare for the show and make robust decisions. The decisions made in the chaos may need revisiting after the show, and work will need to be redone. Some people prefer to work in a chaotic environment, and when in positions of power, they will manufacture chaos as the working environment.

3. Decision by Fiat (or Decision by Authority)
This is a very common style in autocratic organizations, where a manager or someone else in authority decrees that a certain alternative is his/her favorite. It is often seen when the boss's idea is chosen in order to preserve the relationship with him/her. This is more justification than decision-making.

4. Decision by Coercion
A champion for one alternative pressures his/her colleagues into submission. Often the loudest voice wins, the others having given up. One colleague referred to this style as "hijacking the process."

5. Decision by Competition
Here concern for who wins is most important, as in most sports. This is often a win-lose situation, and the relationships among individual team members are not important.

6. Decision by Voting
Democracy works, but it does not often make the best possible choice. This decision-making process is a weak form of compromise. Think about how most products and businesses would operate if they were designed the same way we elect a president.

7. Decision by Inertia
This style is based on "We did it that way before," which may result in a robust decision, if the previous one was. But not much progress or innovation is made using this style. Sometimes the tough decision is knowing when to innovate and when to keep the cruise control on.
I have seen these all. Do you know of other dysfunctional decision making practices, ones that you have experienced?
What can you do to defuse these styles? The first step, as in any good program, is to realize that there is a problem and to put a name to it. Hopefully, this list can get you through that step. The second is to get general agreement that decision-making is a process. Both of these steps may be hard, as some managers don't want any rationality in the process. There are many ideas in my other blogs and on my web site. Do you have ways of managing these?
Labels: decision making, decision making process
Bad decisions don't stick – the issue gets revisited again and again, people who agreed with the decision do something else, and resources get wasted on a bad idea. Here are four surefire ways to make a bad decision.

1. Make decisions too soon
Although books like the very popular Blink suggest that your first reaction is generally right, there is much proof to the contrary. Sure, if the house is burning and your first thought is to get out, that is a good thing. But if your decision has many options, multiple factors to consider, or the needed knowledge spread amongst multiple people, your knee-jerk decision is probably a poor one.
In fact, Paul Nutt, in his book Why Decisions Fail (a study of over 400 business decisions), found that a leading cause of decision failure is "premature commitment." One area where decisions are deliberately matched to time is Agile software development, which encourages "delaying commitments to the last responsible moment." A great cartoon by Chris Matts explains this methodology.
2. Ignore uncertainty
Any decision based on estimates about the future is uncertain, and virtually all technical and business decisions are of this type. But virtually all estimates suck. In an earlier blog post I told of the result of a simple experiment showing that people can't even make simple estimates, much less real business estimates. There are no "accurate estimates": an estimate is a distribution, with a most likely value and an uncertainty around it that is inherent in any real project.
If I ask you to estimate the time it will take you to get to work, what will you tell me? You could say, "Oh, about 15 minutes," but is this the average time, the longest, or the shortest? I won't know unless you give me more information. If I am planning on meeting you at work and I want to be 90% sure you will be there, then I need to know more about your commute than "15 minutes."
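A small sketch makes the point. The lognormal shape and its parameters are purely illustrative, but the gap between the average and the 90% planning number is the lesson:

```python
import random

random.seed(1)
# Assume commute time is lognormal: typically about 15 minutes, with
# a long tail for bad-traffic days. The parameters are made up.
samples = sorted(random.lognormvariate(2.71, 0.25) for _ in range(10_000))

mean_time = sum(samples) / len(samples)
p90 = samples[int(0.9 * len(samples))]  # 90th percentile

print(f'"About 15 minutes" (the average): {mean_time:.1f} min')
print(f"Time that covers 90% of commutes: {p90:.1f} min")  # several minutes longer
```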
3. Don’t itemize the important factors
It isn’t really clear how people naturally make decisions. One theory is that you use only the most important factor that can help you differentiate amongst the options. So, for example, if you want to buy a new car, and the most important factor is that it be red, then you choose the red car. If there is more than one red car available, then you use the next most important factor (e.g. cost or body style) and so on. This model is a little simplistic, but it does emphasize that you need to understand the factors to make a decision.
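Here is a minimal sketch of that most-important-factor-first (lexicographic) model; the cars and the ordering of factors are invented for illustration:

```python
cars = [
    {"name": "A", "color": "red",  "cost": 24_000},
    {"name": "B", "color": "blue", "cost": 21_000},
    {"name": "C", "color": "red",  "cost": 22_000},
]

def lexicographic_choice(options, factors):
    """Apply factors in order of importance; each factor keeps only
    the options that survive it. Stop when one option remains."""
    for keep_best in factors:
        options = keep_best(options)
        if len(options) == 1:
            break
    return options

factors = [
    lambda opts: [o for o in opts if o["color"] == "red"] or opts,  # must be red
    lambda opts: [o for o in opts if o["cost"] == min(x["cost"] for x in opts)],
]

print(lexicographic_choice(cars, factors))  # C: red, and cheaper than A
```

Note how B, the cheapest car overall, is never considered once color comes first; that is exactly the simplification this model makes.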
There are methods that can help a team, even a team with little agreement about what is important, develop a set of factors that will help make a good decision.
4. Be blind to available information
A team might consider all the options and important factors to develop a robust decision. They write up their recommendation and send it to the ultimate "decider," who ignores their results and makes a different decision. What a waste of time and money. One manager told me that he didn't follow the recommendation because the team was not privy to all the information he had. This only compounds the sin.
Practice all four of these and you are just about guaranteed to make weak decisions. How will you know? Your decisions won’t stick - they will be revisited again, people will implement actions not chosen, or the results of the decision will be invisible next week.
If you have examples of these methods please share them.
If you know of other surefire ways to make bad decisions then please share these also, so they can be added to this list.
Labels: Blink, decision making, poor decisions, uncertainty
The new economy forces new practices in making critical business decisions. Cost overruns, late to market, and rescoping all lead to inefficiencies that can no longer be tolerated. One approach to improve performance in these areas is to ensure that your decision-making processes lead to the best choices in an efficient manner every time.
Symptoms of poor decision practice are:
- Decisions take too long – some are discussed, shelved, discussed again a year later, with no resolution
- Meetings end with no clear direction forward – decisions aren’t made and actions not taken
- Firefighting dominates useful work – with some fires clearly caused by poor earlier decisions
- Projects championed by the strong dominate what is best for the organization
- Decisions come unstuck – you decide what to do next, everyone agrees, and then something different happens
- Decisions are made without using all of the available information, and you know it
- Risk is ignored or papered over – all decisions are based on uncertain information and thus are risky
It is estimated that half of all decisions fail. By “fail” I mean that some time after the “decision” was made, there is no evidence that any effort went into making it, i.e. nothing changed. The only evidence that any effort was put into the decision was the expenditure of time and money. A decision is a call to action and if no useful action occurs, the time and effort that went into making it was wasted, or worse.
The risk of making a poor decision, one that isn’t actionable and doesn’t stick, can be reduced through decision-management. Decision management is a process whose goal is to use the available uncertain, incomplete, conflicting, and evolving information to make the best possible choices with known expected risks, within time and resources available. Some of the basics of decision management are:
- Decision-management begins with assuring that everyone is addressing the same issue. This isn't a given. I have been in many meetings where everyone thought they were addressing the same topic, but weren't. Even if generally on topic, everyone makes different assumptions, and these must be made explicit. In watching the debates on bailouts and stimulus packages in early February 2009, it was clear that the issue was not well understood or agreed upon.
- To make a decision, there must be more than one alternative. If there is only one alternative and rejecting it is not an option, then the effort is for justification, not decision making. Decision-management helps develop alternative courses of action. The Wall Street bailout program of late 2008 seems to be an instance of rubber stamping a single alternative.
- A major component of decision-management is developing a clear picture of how you will know if you have a good decision: what should be measured, what is important, and to whom it is important. Determining stakeholders' values before diving in too deep is critical if you want a result that everyone can buy into. The lack of clarity in the stimulus packages being debated in Washington seems to stem from the absence of a clear picture of what is to be measured (e.g. jobs created, home foreclosures avoided, etc.). Good decision-management helps to develop measures and, at the same time, honors differences of opinion about which measures are important. This might be helpful in Washington.
- All decisions are based on estimates that are uncertain, and yet we do a very poor job of taking this uncertainty into account. Past performance may be known (if it was documented), the present is obscured by its immediacy, and the future is just a best guess. In other words, very little is known with certainty. Adding uncertainty at the end of a decision-making process with a "what-if" discussion is too late. Decision-management helps identify and capture the information uncertainty at the beginning of the process in an effort to produce a robust decision, one that is as immune as possible to the uncertainty (see the sketch after this list).
- A key part of decision-management is a process to fuse together all of the information in a timely and useful manner. Sometimes there is need for a fast decision, and sometimes, if the stakes are high and time is available, more effort needs to be placed on issue, alternative, measure, and estimate development. Good decision-management controls the time spent versus the depth of analysis needed to reach a decision.
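The sketch promised above, with invented numbers: fold the uncertainty into the comparison itself, rather than bolting a what-if discussion onto the end.

```python
import random

random.seed(7)
REQUIREMENT = 100  # the decision must deliver at least 100 units of benefit

# Each alternative is an estimate: (most likely value, uncertainty).
alternatives = {"A": (115, 25), "B": (108, 5)}

for name, (mode, spread) in alternatives.items():
    trials = [random.gauss(mode, spread) for _ in range(10_000)]
    p_ok = sum(t >= REQUIREMENT for t in trials) / len(trials)
    print(f"{name}: point estimate {mode}, P(meets requirement) = {p_ok:.2f}")

# A has the better point estimate (115 vs 108), but B meets the
# requirement far more reliably: B is the more robust choice.
```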
In the 1990s most organizations began to review their processes to make them more efficient, agile, and manageable. Making processes more efficient is mandatory in the new economy. However, many find that as they refine their processes, inefficiencies persist wherever decisions are poorly made. In other words, efficient processes need efficient decision-management. Good decision-management is possible in most organizations and can alleviate many of the symptoms itemized at the beginning of this article.
Labels: decision making, decision making maturity, decision management, new economy
I just spent a day judging a contest of pre-teens making design decisions. I learned a lot about decision-making. My day was in support of the US FIRST Lego League (FLL) local competition. The theme of this year's event was "Climate Connections Challenge". As part of the event, the 10- to 14-year-old students built robots out of Legos to compete in a simulated world. They also had to make a presentation on climate change and their community. Finally, they had to prove they could work as a team.
This year I was the chief Teamwork judge. To show teamwork, each team (between 4 and 8 students) was given a simple task and 5 minutes to solve it. On entering the room, I welcomed them and had them gather around a table. I then put on the table a thin plywood disk and a small bag of large-sized paper clips. I read them directions which were something like: "You are to build a structure that will hold the disk off the table. You can bend the paper clips into any form you like. The paper clips must be connected in some way. You cannot use the plastic bag. You cannot harm the plywood disk. You have 5 minutes. Go!"
Two other judges and I then observed them solving the problem. We were not interested in the solution, but in the teamwork displayed during the solution. We had rubrics to guide our assessment and had time to ask questions at the end.
It dawned on me partway through the day that I had 21 samples of conceptual design decision-making by pre-teens to observe. This became clear to me too late to take formal data, but I did make the following anecdotal observations. For each, I juxtapose what I saw with what I have observed in adult decision makers.
- As soon as I stopped reading, the kids grabbed the clips and started bending and talking at the same time. One team was so eager I feared for my safety as I jumped out of the way (slight exaggeration). There was no planning or reflection by any of the teams! I did an experiment in the late 1980s where I gave six professional engineers a design problem and five hours to work it. I videotaped the sessions. Most of these mature engineers read the requirement over several times before beginning to generate ideas. One subject, however, dove right in like these pre-teens. He pursued his first and only idea and proceeded to patch on it (see next item), never reaching a reasonable solution to the problem. My conclusion from these experiences is that it takes maturity to do the up-front work necessary to make decisions.
- Once the kids jumped in, they patched their way to a solution. Patching means to try different ideas until you stumble onto a solution (I discuss this further in The Mechanical Design Process). What is bad about patching is that it is usually random with no structured method to explore the design space. I saw the same with the engineer described above and other immature designers.
- When they asked clarification questions, we only responded with "Read the problem description." Few did (less than 1/3); they took our response to mean "No" to whatever was asked. I was surprised at this, and after the first few times, I made a big deal of laying down the problem description (in big font on green paper) in the same location as the wooden disk and paper clips. Only one team (out of 21) reread the description together to clarify their issue after we prompted them. My generalization of this is that, for the most part, people don't use all the information available to them.
- On the whole, the teams did a good job of including most of their members. This is a credit to FLL and the mentors who worked with the teams. I have had many college-level teams that were not as inclusive.
By the way, there were three classes of solutions: 1) bend the clips so they sat on the table and the disk was supported by upward-extending wire legs (most did this, with great patching), 2) bend the clips so that they fastened onto the disk with downward-extending legs (I only saw this done successfully once), and 3) lay two clips flat on the table and set the disk on top. The last was the most elegant and simple. One team discovered it 40 seconds into the five minutes. To see more teamwork, I quickly said, "Great, now make the support as tall as possible" (whew!). Most teams never found this solution, but with their dive-right-in approach and random patching, I am not surprised.
Labels: decision making maturity, decision making process, team decision making
I was trained as a mechanical engineer. I know how to model systems, take data and develop an understanding for physical things. Most (nearly all) technical university courses are about how to analyze things. In my case these were physical things, but my education could have been in any engineering discipline, in business or in the sciences, and the courses still would have been primarily “thing” focused.
In the 1980s I was teaching mechanical engineering design at a university and began to appreciate that things come into being through a process. My thinking moved from being thing focused to being process focused. Process thinking was not new to some fields. In fact, in engineering there are control processes, chemical processes, fluid thermal processes etc. But these are nothing but “process” things. What I became interested in was the process of developing these things.
In 1990 I began work on a textbook for mechanical engineers so that they could study the process of how things mature from need to a final, working object. I agonized over the title, as the term "process" was not really a part of the engineering lexicon except for process things. I finally chose the title "The Mechanical Design Process" and the book was published in 1992. The title turned out to be a good choice, and the 4th edition of the book is out in January 2009.
When I began to research the design process in about 1984, my goal was to understand how objects evolve, with an eye toward developing methods and tools to better support this evolution. I don't mean CAD and solid modeling tools, as these are representation tools for what is being designed, not tools that actually support the design process. In fact, I argued in a paper I wrote in 1990 that CAD can be detrimental to the design process (Ullman and Wood, 1990). Since writing that paper, CAD has evolved into solid modeling, which is much better, but the arguments in the paper still hold.
Anyway, I was especially interested in developing a tool that could record a product's evolution and the rationale behind it. This design rationale system would be able to capture how a product came into being and could be reused, queried, and vetted to form a permanent record of its birth, life, and death. When I began, I focused on the evolution of the assemblies, parts, and features. I quickly realized that this wouldn't work because these things evolve during the design process, and thus focusing on them missed all the birthing – all the interesting, creative engineering.
My thinking turned to capturing the process through which things evolved. Process thinking encapsulates thing thinking, as the process is about the evolving things. This shift in my thinking coincided with the textbook mentioned above. However, it became evident by the mid-90s that process thinking, although much better than thing thinking, was not the best way to develop a design rationale system. What became evident was that decisions are the punctuation marks in the process, and that my approach had to make yet another shift, one to decision thinking.
Thus, by the late 1990s my thinking had matured from things, to process, to the decisions made during the process to develop things. Specifically, I wanted to understand and support the decision-making process. My research showed that on the macro level, decisions were made at gates (in a stage-gate process) a countable few times during the evolution of a product. On the micro level – the cognitive level – they occur at about one decision per minute (see Stauffer and Ullman, 1991). Somewhere between these extremes there is much need for decision-making support.
Before we go on, a definition of decision thinking: decision thinking is focusing on the decision-making process used during technical or business development. "Focusing" implies understanding and supporting individual decision makers and teams of people making decisions when information is incomplete, evolving, and conflicting, so that the decisions are robust.
I am not alone in this evolution in thinking. First, when I chose the title "The Mechanical Design Process", the word "process" was problematic, as it was not commonly used in industry. Now the product development industry makes good use of process thinking. Then, when I started talking about decision thinking in the late 1990s, few in industry knew what I was talking about. Now there is much evidence that companies and the government are beginning to realize the importance of decision-making in their processes. My evidence for this is not firm, but many of my contacts seem to understand what I am talking about where few did five years ago, and the number of hits on the Robust Decisions web site continues to climb. Their thinking is maturing through the process to the decisions made during the process.
Second, the CAD industry matured into PLM (Product Lifecycle Management) during the 1990s. Where CAD and solid modeling are about things, PLM is about processes that manage and document things. I have tried to interest the PLM vendors in decision thinking for about eight years. Initially they told me there was no customer pull (see the previous paragraph). Recently, I have gotten their attention. Now that they have the process under control, they too are maturing toward decision thinking.

Ullman, D.G., S. Wood, D. Craig, "The Importance of Drawing in the Mechanical Design Process," Computers and Graphics, Special Issue on Features and Geometric Reasoning, Vol. 14, No. 2, 1990, pp. 263-274.

Stauffer, L.A., D.G. Ullman, "Fundamental Processes of Mechanical Designers Based on Empirical Data," Journal of Engineering Design, Vol. 2, No. 2, 1991, pp. 113-126.
Labels: decision making process, decision thinking
The term “risk management” has been around for a long time in financial, technical and medical practice. It is a term that is very loosely used and I want to dive in with a decision-centric view just to further muddy the waters.
A good place to begin is with a formal definition of “risk”. If you enter “risk definition” into Google you will get over twenty-five definitions; some are redundant, and there is little consistency. Regardless of the definition, risk traditionally amounts to answering three questions:
- What can go wrong? Some event.
- How likely is it to happen? The probability that the event happens, based on past statistics, an analytical model of the event, or a best guess based on experience.
- What are the consequences? Money, time, and possibly even lives are wasted.
The above set of questions describes event risk. They are the basis of technical risk evaluation. However, most managers are concerned with decision risk rather than event risk. For decision risk, the questions managers really want answered are:
- What can go wrong if I choose alternative X?
- How likely is it?
- What is the impact?
(Brian Seitz of Microsoft articulated these as cleanly as shown here)
Note that these are the same three questions as with event risk, just slightly tweaked. Regardless of whether it is focused on an event or a decision, risk management, by definition, is effort directed at some or all of these questions.
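As a back-of-the-envelope illustration (all numbers invented), the three answers can be rolled up into an expected loss for each alternative:

```python
# For each alternative: (what can go wrong, how likely, impact in $).
alternatives = {
    "X": [("vendor slips schedule", 0.30, 50_000),
          ("integration fails",     0.05, 400_000)],
    "Y": [("in-house team overruns", 0.50, 80_000)],
}

for name, events in alternatives.items():
    expected_loss = sum(p * impact for _, p, impact in events)
    print(f"Alternative {name}: expected loss ${expected_loss:,.0f}")

# X: 0.30*50k + 0.05*400k = $35,000; Y: 0.50*80k = $40,000.
# Expected loss is one lens, not the whole story: a risk-averse
# decider might still avoid X for its small chance of a huge loss.
```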
The first question raises the issue of how we know our choice was poor. In the book Why Decisions Fail, the definition of a poor decision is one that had no positive impact after two years. There are other measures of a failed decision. For example, our time is up and no satisfactory alternative has been developed. Or not everyone on the team agrees with the choice made and some team members feel disenfranchised. More often, the result of a poor choice is not known until much later. For example, if we choose a bad restaurant, we will not know until we have eaten there. Or if an engineer makes a poor decision on what material to use for a product, this may not be evident until the customers have used the product for a number of years.
One thing is consistent in this discussion: without uncertainty there is no risk. A corollary is that the more uncertainty, the higher the risk of making a poor decision. "What can go wrong?" is that one or more of the criteria are not satisfied. "How likely is it?" depends directly on our certainty during the evaluation of the alternatives. We may know from past experience or data that the probability of something failing is XX%. But this probability may be compounded by other uncertainties, such as lack of knowledge, disagreement amongst team members, or incomplete data. And "What is the impact?" is that the alternative chosen is no longer as good as it was originally thought to be.
In order to manage risk during decision making:
- You must address risk during decision making, not as a task to complete after you have selected an alternative. This is because decision risk is a measure of your lack of knowledge, as well as other uncertainties you need to consider when you make a decision.
- There is risk associated with every feature of every alternative. Traditional risk assessment separately addresses financial risk, performance risk, and schedule risk. But when including risk as a part of the decision-making process, you must integrate the uncertainty inherent in all the features at once, because it is the combination of them that drives your decision.
- You can get a good assessment of uncertainty, and thus risk, by fusing the evaluations of all the members of your team.
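As one minimal sketch of such fusion, here is a certainty-weighted average of the team's estimates. This simple linear pooling is my illustration only; it is not the fusion method used in Accord.

```python
# Each member reports P(the alternative satisfies the criterion)
# plus a 0-1 self-rated certainty, used here as a weight.
evaluations = [
    {"member": "Ann", "p_satisfied": 0.80, "certainty": 0.9},
    {"member": "Bob", "p_satisfied": 0.50, "certainty": 0.3},
    {"member": "Cho", "p_satisfied": 0.75, "certainty": 0.7},
]

total_weight = sum(e["certainty"] for e in evaluations)
fused = sum(e["p_satisfied"] * e["certainty"] for e in evaluations) / total_weight
print(f"Fused team belief: {fused:.2f}")  # more-certain members count more
```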
Labels: decision making, decision risk, risk management