When is going on

February 25, 2015

When doing your CI research, and the follow-on analysis, you are trying to determine “What is/was going on?” That means assembling, physically or mentally, the data by topic or target, and then reviewing it to determine what it all means. And that is fine. In fact, there is a wide variety of analytical tools available for that (for books that can help, visit my page “What you should read to learn more about competitive intelligence”).

However, sometimes you have to approach it differently; in fact, I advocate always at least considering doing so. By this, I mean developing a sense of time as well as of topic. You can do that in at least three different ways:

  • Arrange your data by the date it happened. That is, for example, place data about the building of a factory at the time it was done, not at the time it was reported or otherwise disclosed. Now you can read backward and forward (I recommend doing it both ways) to see what developed, how quickly and (perhaps) why.
  • If you have a complex piece of analysis, consider constructing a separate time grid, noting highlights of your research (consider hyperlinking to source materials), even dividing it into parallel categories such as “executive changes”, “acquisitions and divestitures”, and “capital changes”. Now you can refer to it to quickly connect a change in a building program to management changes, as well as to spot what is missing – for example, when was that second factory actually refurbished?
  • Arrange your data by the date that it first became public (or otherwise available). Here you can, metaphorically speaking, understand how the data came to be available. Now ask: why did some data come out later than other data? It may also enable you to spot sources for missing data. (A short sketch of all three re-sortings follows this list.)
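
To make those three re-sortings concrete, here is a minimal sketch in Python. It assumes only that you record each data point with both the date it happened and the date it was disclosed, plus a category tag; the field names, the sample entries, and the quarterly grid are purely illustrative, not a prescribed format or tool.

from datetime import date
from collections import defaultdict

# Each collected item carries the date the event happened, the date it first
# became public, a category, a short note, and a pointer back to the source.
# These field names and sample entries are hypothetical.
items = [
    {"event_date": date(2013, 6, 1), "disclosed_date": date(2014, 2, 15),
     "category": "capital changes", "note": "Second factory refurbished",
     "source": "annual report, p. 12"},
    {"event_date": date(2014, 1, 10), "disclosed_date": date(2014, 1, 12),
     "category": "executive changes", "note": "New COO appointed",
     "source": "press release"},
]

# 1. Arrange the data by the date it happened; read it forward, then backward.
by_event = sorted(items, key=lambda d: d["event_date"])

# 2. A simple time grid: highlights grouped by quarter and parallel category,
#    which makes gaps ("when was that second factory refurbished?") easy to spot.
grid = defaultdict(list)
for d in items:
    quarter = f"{d['event_date'].year}-Q{(d['event_date'].month - 1) // 3 + 1}"
    grid[(quarter, d["category"])].append(d["note"])

# 3. Arrange the data by the date it first became public, to see how (and how
#    late) each piece of information surfaced.
by_disclosure = sorted(items, key=lambda d: d["disclosed_date"])

for d in by_disclosure:
    print(d["disclosed_date"], "->", d["event_date"], d["category"], d["note"])

Nothing here depends on Python; the same three views can just as easily be built in a spreadsheet.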

What you are doing is shifting from seeking to understand merely “what” is going on to “when” it was going on, by re-sorting and re-reviewing the data; looking at your data from several different perspectives also lets you extract more insights and identify missing data points more quickly.


Macro versus micro

February 18, 2015

I’ve been very critical of efforts to conduct true, long-range, strategic intelligence. I am not going to list all of my references here, but they’re easy to find. I’ve been thinking more about it and wondering why we do not seem to have much success applying CI techniques to generate truly long-range intelligence.

Let me touch on a couple of issues:

  1. Do we know whether high-level, long-range strategic intelligence actually works or not? Does it produce actionable intelligence? A friend of mine once told me (I hope my memory is correct, and I’m not having a Brian Williams moment) that when his firm reviewed its long-range intelligence estimates, it was unable to tell whether or not the CI team had been correct. Why? Because so much had changed in the several years that had passed; in other words, what might have been an accurate forecast on day 1 had no relationship to what was going on by day 2,000. The firm could not determine whether the intelligence was a hit or a miss.
  2. Can other disciplines do this?
    1. Going to economics, we find that there is an intellectual disconnect between macro-level and micro-level economics. First, we have yet to come up with an economic theory that is demonstrably correct and that fully links and integrates micro to macro, or vice versa. Second, while micro-level theory tends to have fairly high predictive value, macro-level theories seem to have real problems there. When I was studying economics, there was an ongoing joke that a particularly well-known econometric forecasting firm’s macro-economic model was so accurate that it had successfully predicted “seven of the last five recessions”.
    2. We are always fascinated with the concept of using historical trends or psychology to predict the actions of large groups of people[1]. One of the most interesting expositions of both is in the “Foundation” series by the late Dr. Isaac Asimov. That series relies on the concept of “Psychohistory” – being able to chart and even control the direction of large masses of people over long periods of time by tweaking the actions of small groups of people or even of individuals. However, that concept, even though developed by a scientist, was in fact pure fiction.

My question then is, if other disciplines and thinkers cannot generate consistently reliable, demonstrably predictive analyses at the macro-level covering extended periods of time, why then should we think that we can do better in competitive intelligence?

Now, I am not saying that conducting a long-range/early warning/very strategic forecasting process is without value. While it is not likely to be actionable, it certainly is educational – training executives to be able to react to (now) unforeseen circumstances. However, in this world, actionable gets budget, but educational gets cut.

[1] See, for example, http://cliodynamics.info/; Charles Mackay, Extraordinary Popular Delusions and the Madness of Crowds (Farrar, Straus and Giroux, NY, 1932); Erich Fromm, Escape From Freedom (Discus Books, New York, 1941); and almost anything on mob/mass psychology, the Nazis, or the butterfly effect in chaos theory.


Shaking it up

February 11, 2015

A recent column in Fortune, “5 Ways to Shake Up Your Offices”,[1] focused on the benefits of proximity – proximity in terms of “Do you have an elevator speech?”

One suggestion, based on the premise that “key executives” should talk to one another and, to accomplish this, should “sit together”, was to have the CEO work directly with the marketing team. Interesting idea, but that logic would apply to almost everything. Aren’t human resources mission-critical? Isn’t the IT infrastructure absolutely essential? What about competitive intelligence and corporate security, in terms of protecting key assets? In other words, the CEO can’t work directly with everyone who has a critical job. To be fair, the column does suggest, at the least, that the CEO share a conference room adjoining the marketing team to make this easier. That’s a realistic suggestion.

Most interesting was the suggestion of putting together departments that tend to be isolated from each other but should be closely linked. The examples given are (1) R&D with marketing and (2) sales with operations. The theory is that these groups should be spending time with each other and that, for each to do its job well, it must/should work closely with the other. The column suggests that bringing these groups together converts battling into idea generation.

The suggestion has real merit. And, for those of us in competitive intelligence, the idea of the competitive intelligence and market research functions sharing space, adjacency, conference rooms, or whatever is a great one. In many enterprises, these groups are not only working separately, they are often reporting to separate officers. If the competitive intelligence and market research individuals or teams sit near each other, while still doing their own work with their own expertise and tools (think “men are from Mars and women are from Venus”, or “CI is qualitative and market research is quantitative”), the chances of coordination rather than conflict, and of cooperation instead of ignoring each other, improve greatly.

I don’t mean that market research should be doing competitive intelligence or vice versa. What I do mean is that two units serving a similar function (external awareness) should work with each other to produce a better product. If being next to each other helps, try it.

[1] Verne Harnish, “5 Ways to Shake Up Your Offices”, Fortune, Feb. 1, 2015, p. 28.


Big data (part 2)

February 4, 2015[1]

A recent column in Forbes[2] kicked off a discussion with a friend of mine, Tim Powell[3]. We’ve both been looking at “big data” and how it not only fails to contribute to, but may even inhibit, competitive intelligence and other analyses.

Let me start with the column:

It points out that sports writers have been made more timid in their predictions because of the availability of increased data, sports’ own big data. The reason given is that the sports writers are concerned about having their own opinions thrown back at them when the data is later analyzed and their predictions turn out to be wrong.

The column pointed to an analogous situation: doctors who, fearing future lawsuits, might opt for a diagnosis and treatment that is more data-driven rather than one based on their experience and related, highly trained instincts.

Rich Karlgaard, the column’s author, also pointed to the case of Starbucks, which had hit a “slow patch” in the early 1990s. Management there was looking to get more data to figure out what was wrong, but the number two at Starbucks actually went into the field and talked (gasp!) to employees; he found that the problem was one of attitude, among new employees as well as among older ones. As the column concludes, “Trust your eyes and ears. The data are your tools not your master.”

That is all true, but the problems of dealing with big data can often be traced to a fundamental flaw that can be expressed this way:

We deal only with that which we can measure or have already measured (so Starbucks’ executives never would have talked to employees, but simply would have collected more irrelevant data). That in turn means we measure only what we have or what we can get, instead of first seeking to determine what it is that we need – turning the analytical process on its head.

This, I think, is one of the major problems that we in competitive intelligence, and others, face in dealing with the world of big data. Not all data is quantitative or digital – some is qualitative or non-digital. But big data proponents (think the NSA) operate in a world where data is only what can be collected, stored on a computer, and analyzed from there. So, in the case of the NSA, it focuses on collecting, storing, and analyzing communications data. Why? Because it can[4].

The problem becomes that people collect data because they can collect it, or because maybe, someday, perhaps, they might need it (ignoring the whole issue of half-life, which is a nice way of saying that some data goes bad pretty quickly, as well as the problem of having too much noise in the data).

The right approach: determine what the question is before you determine what data might be useful to help craft an answer. Today, big data seems too often to be driven the other way – determine what answers can be provided, and then attempt to drive the end users to produce a question that can be answered.
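
For what it is worth, here is one hypothetical way to make that “question first” ordering explicit, as a tiny Python sketch. Everything in it (the key intelligence question, the field names, and the candidate sources) is illustrative only; the point is simply that the data requirements are derived from the question, not the other way around.

# A question-first collection plan: start from the intelligence question and
# derive what data would help answer it. All names below are hypothetical.
question_first = {
    "question": "Will Competitor X add European capacity within two years?",
    "data_needed": [
        "permit and zoning filings near existing plants",
        "local hiring announcements",
        "capital expenditure guidance in earnings calls",  # qualitative sources count too
    ],
}

# The inverted, data-first approach criticized above: start from whatever has
# already been collected and hunt for questions it happens to answer.
data_first = {
    "data_on_hand": ["web traffic logs", "social media mentions"],
    "questions_it_can_answer": ["Which press release drew the most clicks?"],
}

print(question_first["question"], "->", question_first["data_needed"])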

[1] Part 1 was posted July 24, 2013.

[2] Rich Karlgaard, “Data Wimps”, Forbes, Feb. 9, 2015.

[3] The Knowledge Agency.

[4] This practice is indirectly challenged by the conclusions in Erik Dahl, Intelligence and Surprise Attack: Failure and Success from Pearl Harbor to 9/11 and Beyond (Georgetown Univ. Press, 2013). His tabulation of 227 terrorist plots and cases finds that human intelligence is by far the most common reason for the failure of a plot or effort, accounting for approximately 50% of the cases. Interestingly, signals intelligence falls behind both overseas intelligence and unrelated law enforcement efforts as a reason for failure, accounting for approximately 10% of the cases.