Success is Fleeting

June 8, 2015

“An actor’s popularity is fleeting. His success has the life expectancy of a small boy who is about to look into a gas tank with a lighted match.” Fred Allen, American comedian.

One of the problems in managing competitive intelligence is not in measuring results[1], but rather in getting its customers/end-users to acknowledge its success and benefits, both in individual cases and overall. There are several reasons for this:

  1. The “fundamental disconnect”. Except in the case of DIYers, who develop the CI that they need for their own decisions, those providing CI cannot force their customers/end-users to use (or even to pay attention to) the CI. And if they do use it, it is often seen (or at least claimed to be seen) by those customers/end-users as merely one of many inputs, as confirming prior beliefs, or as “obvious”.

“Good advice is always certain to be ignored, but that’s no reason not to give it.” Agatha Christie.

  2. Failures of CI are like any other failures – blame is hard to assign and harder to accept.

“Victory has a thousand fathers, but defeat is an orphan.” John F. Kennedy, News conference, April 21, 1961.

  3. Successes in CI not only have “a thousand fathers”, but are too soon forgotten. I know of more than one firm whose CI operation provided demonstrable multi-million-dollar savings, benefits, or new-business captures within a very short time, but which then suffered severe cuts in the very next budget cycle.

“Dangers which are warded off by effective precautions and foresight are never even remembered.” Winston S. Churchill, The World Crisis – Volume I, 1923, 1951, p. 432.

[1] For help on doing that, see John J. McGonagle and Carolyn M. Vella, Bottom Line Competitive Intelligence, Praeger, 2002.


How do you measure success?

August 6, 2013

As you do your own competitive intelligence or utilize CI provided by others, you will eventually run into the question, “So what’s the bottom line here? How much is the CI worth?”

      Commercial aside: I could just refer you to our book, Bottom Line Competitive Intelligence, but I want to talk about it from your point of view, not mine.

There are two ways to look at this problem. The first is to try to measure things in terms of dollars, that is, to answer the question “What is the dollar return from the CI?” The second is to look at what was actually done with the CI.

How do we measure the monetary value of competitive intelligence? There are really two sides to that. One is to determine how much more money you made that you would not have made without it or, conversely, how much money you avoided losing that you would have lost without it. The problem is that most decisions have many inputs, so crediting one input with a decision’s entire success or failure is just not right. Instead, you try to assign a percentage to it.

Say you were 60% sure your decision would succeed, whatever that decision is. Your decision, if right, could produce $1 million in new profits. If using CI increased the likelihood of success by 10 percentage points (from 60% to 70%), you could assign a value of $100,000 to the CI. The same kind of calculation works in terms of reducing the costs of failure, although most people prefer not to make those calculations. Why? Because they do not like to be “negative”.
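As a rough illustration only, here is a minimal sketch of that expected-value arithmetic, using the figures from the example above; the function name and structure are illustrative, not a standard formula.

    # Minimal sketch: value of CI = increase in expected payoff from using it.
    # Figures (60% baseline, 70% with CI, $1M payoff) come from the example above.

    def ci_value(p_without_ci: float, p_with_ci: float, payoff: float) -> float:
        """Dollar value attributed to CI as the lift in expected payoff."""
        return (p_with_ci - p_without_ci) * payoff

    value = ci_value(p_without_ci=0.60, p_with_ci=0.70, payoff=1_000_000)
    print(f"Value assigned to the CI: ${value:,.0f}")  # prints $100,000

The same function can be pointed at avoided losses instead of new profits; the “negative” calculation is identical, only the payoff figure changes.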

The alternative is to see if and how the CI was used and then determine whether or not the decision-making process was improved. Simply put, if the decision-making process was not improved, or if the CI was ignored, then the CI was worth nothing. If it was improved, then the CI was valuable.

For people who believe that you must measure everything, consider the recent observation by the head of Yahoo: “Just because we have a ruler doesn’t mean we have to measure everything[1].”


[1] Brad Stone, “Can Marissa Mayer Save Yahoo?”, Bloomberg BusinessWeek, August 5-11, 2013.


Early warning (Part 1)

April 12, 2013


Early warning systems have been one of the more talked-about aspects of strategic planning and competitive intelligence, but they are, at the same time, one of the most misunderstood and difficult to execute over the long term.

Why is that?

One reason is that those charged with providing early warning are given an almost impossible task. That is, they are to be on the lookout for unknown threats from unknown directions and unknown sources, and then to identify them sufficiently early that the company can take action to avoid them, exploit them, or mitigate their impact. Now think about that charge. It echoes former Secretary of Defense Donald Rumsfeld’s “unknown unknowns”.

Such a task is almost designed to fail. Individuals or teams providing early warning have little to show on a regular basis, simply because events and trends justifying such warnings are few and far between. If you think that measuring the impact of competitive intelligence is difficult, a subject about which Carolyn and I have written[1], just imagine how difficult it is to measure the impact of an early warning system. It is often, therefore, one that is operated more on faith than other business systems, and for that reason is more likely to be eliminated during times of economic difficulty.

This is not to say that early warning should not be undertaken or that it cannot succeed. However, it is important to start with a very careful definition of what an early warning process is expected to be and to accomplish, to provide a precise methodology to ensure that it is operated properly, and to provide a regular channel of feedback to assure its customers that it is in fact of continuing value.

These are some of the issues that I will cover in future blogs on this important topic.


[1] John J. McGonagle and Carolyn M. Vella, Bottom Line Competitive Intelligence, Quorum Books, 2002.