Super Forecasting – Super Analysis

December 18, 2015

A few weeks ago, I received Superforecasting[1] to review. Given its provocative title, I started reading it at once. By the time I finished it, I had dog-eared over a dozen pages for re-reading, close to a personal record.

Let me give you the background. In a project funded by the US intelligence community, specifically the Intelligence Advanced Research Projects Activity (IARPA), a forecasting competition was set up among five teams. The focus was on the kinds of questions that intelligence agencies deal with every day[2]. In other words, the teams analyzed the likelihood of certain events happening, based on research.

One of the teams was run by Professor Tetlock. Hundreds of questions about international affairs were asked of the teams over a four-year span, producing over 1 million responses to evaluate, a researcher’s dream. Superforecasting is an in-depth report of that multi-year study of political and economic forecasting, written in a clear, straightforward, even amusing style.[3]

Putting aside its style, what the authors report is startling. Tetlock’s team – the Good Judgment Project, or GJP – was composed of “ordinary volunteers”, several of whom are profiled in Superforecasting. Let me give you the official summary of the results of the competition:

“In Year One, GJP beat the official control group by 60%. In Year two, they beat the control group by 78%. After two years, GJP was doing so much better than its competitors that IARPA dropped the other teams.”

In other words, the amateur analysts not only beat the professionals but also got better at what they were doing year over year (thus eliminating the possibility that chance drove the first-year results).[4]

How did they do that? The long answer is the core of Superforecasting; the short answer is that good forecasting, i.e., analysis, involves the following, among other steps:

  • Get your evidence (data) from a variety of sources;
  • Learn to think (and then think) probabilistically (math skills seem highly correlated with superforecasting);
  • “Unpack” the questions to be answered into smaller components;
  • Synthesize a wide variety of different views into a single vision;
  • Work in teams, provided that they are truly diverse;
  • Keep (honest) scores (see the scoring sketch below); and
  • Be willing to admit error and to change direction (quickly).

If that sounds simple, it is, but it is rarely done this way. “Superforecasters” actually follow these rules, instead of merely acknowledging that they exist, as is the case for so many analysts. Superforecasting gives good, solid tips on how to do all of this (and more).
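A word on that scorekeeping step: Tetlock’s project graded its forecasters with Brier scores, essentially the mean squared difference between the probabilities a forecaster assigned and what actually happened. Here is a minimal sketch of the idea in Python; it is my illustration, not code from the book, and it uses the simple binary form of the score rather than whatever exact variant the GJP employed:

    def brier_score(forecasts, outcomes):
        # forecasts: probabilities (0.0-1.0) that each event will occur
        # outcomes:  1 if the event happened, 0 if it did not
        # Lower is better: 0.0 is a perfect record, while always
        # answering 50/50 earns 0.25 on binary questions.
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Four yes/no questions: the forecaster said 90%, 20%, 70% and 60% "yes",
    # and the first and third events actually occurred.
    print(brier_score([0.9, 0.2, 0.7, 0.6], [1, 0, 1, 0]))  # prints 0.125

The point is the feedback loop: a running score of this kind makes vague “I called it” claims impossible and forces the honest admissions of error that close the list above.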

For those working as CI analysts, whether doing your own analysis or doing it for others, this is a must-read. Those to whom strategic and competitive intelligence is being provided should read it too – sooner rather than later – so that they can become better, more critical intelligence end-users.

—–

[1] Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction, Crown Publishing, New York, 2015, $28.00, 341 pages.

[2] Anyone teaching intelligence analysis should immediately immerse themselves in the methodology to see how to strengthen their teaching of analysis.

[3] For example, the Appendix with the “Ten Commandments for Aspiring Superforecasters” actually has eleven commandments. Is that a tip of the hat to Monty Python?

[4] This should destroy the common (mis)conception that better CI analysis is always provided by analysts who are more familiar with the industry or the company being served.


“The Perfect is the Enemy of the Good”

December 10, 2015

This quote from Voltaire should be part of the lexicon of all those who provide or use CI. Why? Because a drive for perfection, whether self-imposed by the collector or analyst or demanded by the end-user (or all of them), will inevitably doom legitimate CI activities. Let me give you a few examples:

  • A demand by the end-user or customer for more precision, for example, determining that a competitor is experiencing “15.5%” growth rather than “over 10%” growth, forces the data collector or analyst to keep driving for one last bit of data, a long-shot interview, and/or one more review of the accumulated data. The extra expenditure of time and effort is unlikely to improve the outcome. Why? Because the end-user’s final decision will probably not be influenced by the difference between an approximation and a more precise number. The “precision” only makes the end-user feel more secure, even when the additional level of “certainty” is often illusory.
  • A culture which demands that all competitive intelligence analyses turn out to be correct will ultimately result in analyses which are useless. How can that be true? Experience in government shows that such pressures result in reports derisively labeled “Sherwin-Williams” reports, a slighting reference to the paint company whose slogan is “Cover the Earth” (and which is still reflected in its logo). The “Sherwin-Williams” report is one that predicts almost every eventuality, so the analyst is always right, at least in hindsight; but the assessment is not actionable, because it provides no real guidance.
  • A final, and even more important, downside to the drive for perfection is that it pushes toward unethical or even illegal activity. As has been reiterated many times, 80% of all the information that you or your firm needs to collect on a competitor to develop actionable CI can be developed from open, legal, and ethical sources, that is, “white” sources. An additional amount, variously estimated at 5% to 15%, could be developed from “gray” sources, that is, sources which can be accessed only through unethical activity. The final amount, again ranging from 5% to 15%, can be accessed only through illegal behavior, or “black” activities. Pressure to collect more and more data inevitably drives the effort from white to gray, or even to black, sources. Ironically, the costs of collecting in the gray and eventually the black areas increase astronomically compared with legitimate collection in the white areas.[1] And with that comes an increased likelihood of facing moral, civil, and even criminal consequences.

[1] See, for example, Larry Kahaner, Competitive Intelligence: From Black Ops to Boardrooms – How Businesses Gather, Analyze and Use Information to Succeed in the Global Marketplace (New York: Simon & Schuster, 1996), p. 281.


How Important is CI?

December 3, 2015

Sometimes we have to justify CI to others. Why we have to do that is not at all clear. We do not have to justify cleaning the car’s windshield so we can see oncoming traffic. We do not need to have someone argue with us that having a smoke alarm in our home to warn about a fire that could kill us is a good idea.

But if you have to make a case for CI, let me give you some ammunition. The first is from a recent speech by Cliff Kalb, former head of CI for Merck, the pharma giant. The second is an insight from the gospel of “disruptive competition”. The third is a look at some past multi-company studies.

Here is a report of what Kalb told the International Association for Intelligence Education about the value of CI:

“Procter & Gamble saved $40 million by applying the results of competitive benchmarking. NutraSweet did not spend $38 million because CI revealed its competitors posed no threat. General Motors cut $100 million from manufacturing costs from competitive benchmarking. Merck increased revenues by $300-$400 million as a result of CI team actions which outmaneuvered the competitor.”[1]

Let’s look at CI’s value from another perspective. Here is a recent analysis of what Professor Clayton Christensen’s “disruptive competition” involves:

“[I]ncumbent companies can fail despite being well run and serving their existing customers as assiduously as possible. Their success can blind them to the realisation that scrappy upstarts are quietly rewriting the rules.”[2]

What is a key concept here? Companies that fail due to disruptive competition do so because they are blind to upstarts, their competitors. Effective CI prevents that blindness.

To those can be added the results of several multi-company studies[3]:

  • In the early 1990s, a study of the packaged food, telecommunications and pharmaceutical industries reported that organizations that engaged in high levels of CI activity showed 37% higher levels of product quality, which, in turn, was associated with a 68% increase in business performance.
  • A PricewaterhouseCoopers study of “fast growth” CEOs reported that “virtually all fast-growth CEOs surveyed (84 percent) view competitor information as important to profit growth of their company.”
  • A McKinsey study published in 2008 asked executives how their firms responded either to a significant price change by a competitor or to a significant innovation by a competitor: “A majority of executives in both groups [across regions and industries] say their companies found out about the [significant] competitive move too late to respond before it hit the market.”

Enough?

————–

[1] William C. Spracher, “Competitive Intelligence Keynote Speaker: Cliff Kalb, Former Senior Director, Strategic Business Analysis, Merck & Co.”, IAFIE Newsletter, Vol. VI, Issue no. 2, p. 8.

[2] Schumpeter, “Disrupting Mr Disrupter”, The Economist, November 28, 2015, p. 63.

[3] For the sources for these, see John J. McGonagle and Carolyn M. Vella, Proactive Intelligence: The Successful Executive’s Guide to Intelligence, Springer, 2012, p. 21.