Look Back?

November 16, 2016

One of the common, and key, mantras of CI is that it is forward-looking. You have almost certainly been told, at least once, that CI is not a rear-view mirror, looking at what is behind you and your firm. Rather, it is something that should be used to anticipate what is coming and to support decisions for dealing with coming events and trends.

Yes, that is generally true, but it is not always the case. Sorry.

A character in a novel I recently read, The Power Broker (Stephen Frey), made this point rather well.

“Everything happened for a reason, and it was always best to know what that reason was. Having information, knowing why something happened – whether it was good or bad for you – was the key to success.” (p. 110)

Keeping this in mind, we should use CI during our annual reviews, which should accompany our strategic planning activities. You are doing that kind of review, right? When looking back on the previous year (or even the previous quarter), we should be checking off the successes we had and the failures we suffered. But don’t just assume that you succeeded because of a great plan and failed because, well, stuff happened. Life is not like that.

We need to do more – use CI to find out why things went poorly as well as why they went well. Knowing that is critical to developing and executing effective plans going forward.


Lessons from the Election?

November 11, 2016

No, I am not going to rail (or boast) about the election results. What I think we should look at carefully is the evident widespread failure of polling and of the predictions based on the political polls. By one estimate, 90% of all political polls were wrong. Interestingly, that is not new, not even within the past 12 months. Think back to Brexit, as well as to several of the Republican and Democratic primaries, where polls and related predictions also failed badly.

So? This should serve as a caution to all of those involved in CI. Let me explain:

  • The polls rested on the assumption that past turnout and voting behavior were a predictor of future performance. Didn’t the pollsters ever run into the disclaimer that the securities industry always uses: “Past performance is no guarantee of future results”?
  • Some analysts, when looking at the polls, ignored a recent academic study finding that – and I think I have this right – the usual margin of error of 3 points on each side of the polls grew to 6 points as the election drew closer, or twice as large as the pollsters claimed. That meant these late polls were, essentially, becoming less and less reliable as the election approached. Did the analysts react to that? No, they continued to predict based on the models that had worked for them in the past. A real “blind spot”, wouldn’t you say? (Some back-of-the-envelope arithmetic on what a doubled margin of error implies follows this list.)
  • There is some indication that the polling may have been impacted by the blanket refusal of some groups – or subgroups – to participate in the polling, or by their tendency, when they did participate, to “blow smoke” at the pollsters. What did the pollsters do? Evidently they projected the behavior of these absent or underrepresented groups by reference to the behavior of the rest of the electorate. There was no discernible effort to figure out why these groups were under-represented, how big they were, and what those data holes meant. In doing that, the pollsters failed to account for what former Secretary of Defense Donald Rumsfeld famously called the “unknown unknowns”. They assumed that what they had learned from those willing to be polled could be massaged to cover those who refused to be polled. “Mirror imaging” of a dangerous sort?
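
To see what a doubled margin of error implies, here is some back-of-the-envelope arithmetic using the standard polling formula. The sample size of 1,000 is my own assumption for illustration; it is not a figure from the study the analysts ignored.

```latex
% Margin of error at 95% confidence (z = 1.96), worst case p = 0.5,
% with an assumed (illustrative) sample size of n = 1000:
\[
\mathrm{MOE} = z \sqrt{\frac{p(1-p)}{n}}
             = 1.96 \sqrt{\frac{0.5 \times 0.5}{1000}}
             \approx 0.031 \quad (\text{about } \pm 3 \text{ points})
\]
% MOE scales as 1/sqrt(n), so doubling the MOE to +/- 6 points is
% equivalent to shrinking the effective sample to one quarter:
\[
\mathrm{MOE}_{\text{actual}} = 2 \times \mathrm{MOE}_{\text{claimed}}
\;\Longrightarrow\;
n_{\text{effective}} = \frac{n}{4} = 250
\]
```

Put differently, if the study was right, a late poll of 1,000 respondents carried no more information than a well-behaved poll of 250 – and a 4-point “lead” sat comfortably inside the noise.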

For those of us in CI analysis, remember that what worked yesterday may not work tomorrow, that what was true yesterday may not be true tomorrow, and that predicting what people, individually or in groups, will do is, to say the least, fraught with peril.


Connecting Dots

November 2, 2016

Over the last two weeks, I first reviewed AFIO’s Guide to the Study of Intelligence and then dealt with a couple of important lessons from that book. In this blog, I want to point out yet another one. This one deals with analysis[1] – always a subject of interest.

Carl Ford, a retired intelligence officer, provides a pointed comment about analysis:

“Intelligence collection’s scatter-shot nature also makes it easy for analysts to fall into the ‘connecting the dots’ fallacy. Just because one has a dot does not mean it is, or can be, connected to other dots.”

When collecting your data, particularly if you are a DIYer, you must refrain – not an easy task – from jumping to conclusions about what a new dot means. It may mean nothing – it may mean something. But if you immediately categorize it as one or the other, you are engaging in weak – at best – analysis. In such cases, your own blind spots, biases, group think, etc. can quickly take over, turning your already weak analysis into flawed analysis.

Not only should analysts keep this in mind, but their clients should be educated on this as well. This is particularly critical for those CI clients who demand regular reporting of “what you have found so far”. They, just like the CI analyst, can often be swayed by whichever dot is found first, even when that dot later turns out to lack relevance or even credibility.

[1] Carl Ford, “My Perspective on Intelligence Support of Foreign Policy”, in AFIO’s Guide to the Study of Intelligence, pp. 159–160.