Walk and chew gum at the same time?

June 16, 2017

A recent article[1] observes that “Microsoft is learning from Amazon.com…[basing] more of its decision-making on data-driven experiments and what it thinks customers want rather than what competitors might be doing.” Woof. Does this mean that Microsoft has NOT been basing some decisions on what customers want? Or does it mean that Amazon.com doesn’t use competitive intelligence (CI) in its decision-making? I doubt either is true, but this observation reflects a tribal attitude towards actionable information in many corporations.

Exactly what is the problem with basing corporate decisions on holistic intelligence dealing with the totality of the competitive and marketing environments? The default choice, alas, in some firms is evidently market research (MR), without any CI. Maybe the MR people do a little (what they call) CI, but usually they do not. If there is any CI process, it is likely reporting to the planning function, but not supporting sales and marketing as well. These silos hinder effective operation. That is like driving your car with clear side and rear windows, but with a shattered, opaque front windshield.

For example, say that MR, including “voice of the customer” research, discloses a need or desire of customers that they are also willing to pay for (an oft-ignored issue). Would it not help to know if CI disclosed that (a) one major competitor has previously rejected this opportunity (and why), (b) a second major competitor is ready to roll out a new product/service to meet this need in the next 30 days, (c) a third competitor has done similar research and saw no such opportunity, and/or (d) another, smaller competitor has the technology to enter this niche but currently lacks the funding to do so? I think so.

Maybe Microsoft is right here. Why buy Safeway just because Amazon.com is buying Whole Foods?

[1] Matt Day, “Microsoft borrows from Amazon’s philosophy as its cloud grows”, The Seattle Times, June 7, 2017, http://www.seattletimes.com/business/microsoft/microsoft-borrows-from-amazons-philosophy-as-its-cloud-grows/

Lessons from the Election?

November 11, 2016

No, I am not going to rail (or boast) about the election results. What I think we should look at carefully is the evident widespread failure of polling and of the predictions based on the political polls. By one estimate, 90% of all political polls were wrong. Interestingly, that is not new, not even in the past 12 months. Think back to Brexit, as well as to several of the Republican and Democratic primaries, where polls and related predictions also failed badly.

So? It should serve as a caution to all of those involved in CI. Let me explain:

  • The polls included assumptions that past turnout and voting performance were a predictor of future performance. Didn’t the pollsters ever run into the disclaimer that the securities industry always uses: “Past performance is no guarantee of future results”?
  • Some of the analysts, when looking at the polls, ignored a recent academic study finding that, if I have this right, the usual 3-point margin of error on each side of the polls grew, as the election approached, to 6 points, twice as large as the pollsters claimed. That meant that these late polls were, essentially, becoming less and less reliable as the election came closer. Did the analysts react to that? No, they continued to predict based on the models that had worked for them in the past. A real “blind spot”, wouldn’t you say?
  • There is some indication that the polling may have been affected by the blanket refusal of some groups – or subgroups – to participate in the polling, or, if they did participate, to “blow smoke” at the pollsters. What did the pollsters do? Evidently they projected the behavior of these absent or underrepresented groups from the behavior of the rest of the electorate. There was no discernible effort to figure out why these groups were underrepresented, how big they were, and what those data holes meant. In doing so, the pollsters failed to account for what former Secretary Donald Rumsfeld famously called the “unknown unknowns”: they assumed that what they had learned from those who responded could be massaged to cover those who refused to be polled. “Mirror imaging” of a dangerous sort?
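The margin-of-error arithmetic above is easy to check with the standard sampling-error formula. Here is a quick illustrative sketch (my own numbers, not taken from the study the post refers to; the “effective sample size” mechanism shown is just one hypothetical way the real error could end up twice the claimed error):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents at p = 0.5:
moe = margin_of_error(0.5, 1000)
print(f"claimed sampling error: +/- {moe * 100:.1f} points")  # about +/- 3.1

# If the *effective* sample is much smaller -- say only a quarter of
# the respondents are informative after non-response and weighting --
# the real uncertainty roughly doubles:
moe_eff = margin_of_error(0.5, 250)
print(f"effective error: +/- {moe_eff * 100:.1f} points")  # about +/- 6.2
```

In other words, an apparently small erosion of the usable sample is enough to turn a “3-point” poll into a 6-point one, which is exactly the kind of gap the late polls seem to have fallen into.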

For those of us in CI analysis, remember that what worked yesterday may not work tomorrow, what was true yesterday may not be true tomorrow, and that predicting what people, individually or in groups, will do is, to say the least, fraught with peril.

Guest Blog: Finding information from and on foreign countries, regions and markets

January 29, 2016

Arthur Weiss, Managing Director of AWARE in the UK, has given me permission to post an excellent article he wrote in 2014 for Business Information Review titled “Searching in a global environment: Finding information from and on foreign countries, regions and markets”.

This study provides an overview of the kinds of data available online around the world. Arthur focuses on what is there, why it is there, and what that means for the accuracy and comparability of the data. I strongly recommend it for anyone facing CI research issues outside of his/her home country.

Arthur Weiss is the managing director of AWARE, specializing in marketing and competitive intelligence training, analysis, and research. He has written and lectured globally on a variety of marketing intelligence and information industry related topics. Arthur can be contacted via his website http://www.marketing-intelligence.co.uk or by email at a.weiss@aware.co.uk.

Big Oil

This is a book review of Mark L. Robinson, Marketing Big Oil: Brand Lessons from the World’s Largest Companies. Palgrave Pivot, 2014, 153 pages.

Book review? Yes. Actually, I have done quite a few, mostly dealing with competitive intelligence and related subjects. And this one does deal with CI, among many other topics.

Some background: I have known Mark for many years, dating back to when he worked for (shudder) Big Oil, and then Deloitte. His knowledge of this industry is almost encyclopedic, and this book demonstrates that very well.

He focuses on marketing successes and failures (mostly failures) by Big Oil, and sets them against a very readable and interesting history of the industry. Throughout the book, he notes where Big Oil has used both CI and industrial espionage to advance its operations. Interestingly, Mark observes that John D. Rockefeller, Sr. instituted the first business use of CI at Standard Oil by using the telegraph to send “actionable intelligence” to Standard Oil’s headquarters in NYC. (p. 23)

As far as I know, that makes the Standard Oil operations the earliest documented “CI” operation. Please post any earlier examples you know of.

More currently, he also briefly describes Mobil’s anticipation, before the energy shocks of the 1970s, of national and global energy shortages, as well as rising oil and gasoline prices. How? Through its intelligence operations. (p. 91)

The book is very well researched, but also a great read, and a valuable tool not only for those in marketing, but also for those in crisis management, CI, and strategy. Get it.


Who is really doing competitive intelligence?

June 19, 2014

I recently ran a training session on competitive intelligence for non-competitive intelligence professionals, that is, people primarily involved in marketing, product development and the like.

Of interest was a comment from one of the attendees: at an earlier meeting, he had been told that most business-to-business (B2B) firms didn’t engage in competitive intelligence. After discussing it with my partner, Carolyn Vella, I think I understand the source of this clear misconception.

The vast majority of large business-to-consumer (B2C) firms – our major retailers and consumer-products companies – conduct competitive intelligence through freestanding teams: internal competitive intelligence units. In the B2B world, many of the firms are not large, so we immediately have to question parallels with the B2C firms. In fact, I would be willing to guess (not bet, as I do not bet) that there is a higher percentage of nonpublic and family-owned firms in the B2B market space than in the B2C market space.

That is a significant issue with respect to CI. In fact, I think that internal CI teams are more common with B2C firms than with B2B firms. That is not the same as saying that B2B firms do less CI.

Why? Smaller, particularly privately held, firms do not have the internal and external “churn” of employees, and even executives, that is common in the large B2C market space. That “churn” carries with it the infiltration of new ideas and techniques. Thus, we can logically expect that firms in the B2B market space may be less far along in developing internal competitive intelligence capabilities than are firms in the B2C space.

There is another factor, one dealing not with existence, but with visibility. By this I mean that it is easy to identify a consumer goods firm with competitive intelligence when someone in the firm carries the title “competitive intelligence manager”. However, in smaller B2B firms and family-owned firms, the individuals doing CI do not carry such titles. They do CI as a part of everything else that they do, whether that is product development, marketing, research and development, or whatever.

Thus, they are truly the locus of the do-it-yourself CI revolution, which I think is spreading throughout the business community. While B2B firms may not be advanced in terms of creating freestanding units, they do conduct CI, and I believe it will become more deeply embedded there, simply because CI will become one of the necessary tools that every manager, in almost every department, will have to master. In the long run, I think this bodes particularly well for the B2B sector, and for CI.

So the presence of CI in the B2C space is more evident. But is it better (or more advanced) than CI in the B2B space? In the words of Eddie Wilson of Eddie and the Cruisers, “Hey! I didn’t say better, I said different. You oughta remember that.”

Who’s worst?

October 12, 2012

The last several days have seen people in politics raise questions about the validity, or more generously the accuracy and consistency, of recent federal statistics on the unemployment rate, the number of people filing first claims for unemployment compensation, and related data.  Now what will happen, almost certainly, is that this most recent monthly data will be “restated” next month or the month thereafter, a continuous process with flash macroeconomic data in the United States.

Does this mean you cannot rely on US government data?  Almost certainly, yes.  But the US is not unique.  The US is probably just the best of a bad lot.  Consider The Economist’s recent discussions of data in China:

“With China so engaged in the global economy, there is a never-ending stream of data, often unreliable, to feed the appetites of economic-research firms, investment banks, hedge funds, short-sellers, political risk advisors, think tanks, consultancies and financial and military newsletters – not to mention legions of academics, journalists, diplomats and spies.”  Banyan, The Leader Vanishes, September 15, 2012.

“[N]o other important country is as murky [in terms of providing accurate, credible data] as China.” Schumpeter, The summer Davos Blues, September 15, 2012

That China is murky, with respect to data both at the government and company levels, does not excuse the way in which the United States collects and processes econometric data.  However, for politicians, businesses, and others to make decisions based on the movement of 1/10 or 1/100 of some monthly measure from US government statistics is also foolish.

There may be many iron rules about data, but for competitive intelligence, I would propose the following:

Data from only one source is not data.  It is conjecture – until it can be confirmed.

Data based on telephone surveys should be increasingly subject to question.  Our brethren in marketing have already come to this conclusion, given how the demographics of the population have shifted from landlines to cell-phone-only service, and given that cell phone users are notoriously difficult to survey.

The smaller the sample is and the more quickly the data is collected, the more likely it is to be inaccurate, and inaccurate in an unpredictable manner.
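That rule about sample size can be demonstrated with a quick simulation (my own illustrative sketch, not something from the post): draw repeated “polls” of different sizes from a population with a known 50/50 split and measure how widely the estimates scatter.

```python
import random
import statistics

def poll_spread(n, trials=2000, p=0.5, seed=42):
    """Standard deviation of estimated proportions across repeated polls of size n."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.random() < p for _ in range(n)) / n  # one simulated poll
        for _ in range(trials)
    ]
    return statistics.pstdev(estimates)

for n in (100, 400, 1600):
    print(f"n={n:5d}: spread of estimates ~ {poll_spread(n) * 100:.1f} points")
```

The spread shrinks only as one over the square root of the sample size: quadrupling the sample merely halves the noise, which is why small, quick samples are so treacherous.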

Combining data sets that are individually unreliable does not necessarily make the conclusion more reliable.  I realize that there are those in the statistics world who would disagree with this, but I do not believe that such aggregations always contain the necessary mutually offsetting mistakes to generate a reliable whole.

Any data that has to be restated should not be relied on at all in the first place, or at least not until it is eventually restated.

Using short-term data to determine the presence and direction of a long-term trend is not forecasting; it is at best guessing and at worst irresponsible.
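The last rule is easy to illustrate as well (again, a hypothetical sketch of my own): fit a trend line to short windows of a series that is, by construction, flat plus noise, and the estimated “trend” swings wildly in both sign and size.

```python
import random

def slope(ys):
    """Ordinary least-squares slope of ys against 0..len(ys)-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

rng = random.Random(7)
series = [100 + rng.gauss(0, 2) for _ in range(120)]  # flat, noisy "monthly" data

# Slopes fitted to 6-month windows disagree wildly with each other,
# while the slope over the full 120 months is close to zero.
short = [slope(series[i:i + 6]) for i in range(0, 120, 6)]
print("some 6-month 'trends':", [round(s, 2) for s in short[:5]])
print("long-run trend:", round(slope(series), 3))
```

Any one of those short windows, taken alone, would happily “prove” a rising or falling trend that simply is not there.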

What is truly ironic in all of this is that the US government releases such statistics, upon which so many rely with so little reason, while it would never allow a firm going public, such as Facebook[1], to get away with using similarly dubious data.

[1] Linda Sandler, Brian Womack and Douglas MacMillan, “Facebook Fought SEC to Keep Mobile Risks Hidden Before IPO”, Bloomberg, Oct. 10, 2012.


CI and the Battle of Balaclava

August 17, 2012

Today’s blog topic was suggested to me by my partner, Carolyn Vella, also known as my (significantly) better half.

She pointed out that one of the nearly invisible problems those of us in competitive intelligence face is the client who thinks he or she knows what they need because they think they know what they are looking for.

Let me give you an example.  Let’s say your client is General Motors, the car company.  More specifically, an engineer at General Motors has decided that the technology General Motors uses to build cars can just as easily be used to build, say, swimming pools, or cat carriers, or baby carriages.  So the client commissions a competitive intelligence firm to give it the lay of the land in the competitive world of swimming pools, cat carriers, or baby carriages.  For simplicity, let’s just say baby carriages.  Now, the client has done no preliminary work whatsoever to determine whether or not there is any demand for baby carriages made with the same technology as the Chevy Volt.  Rather, its engineers have assumed that, since their technology would allow them to build the product, there must necessarily be a demand for that product, so the only issue left is to determine what the competitive structure of the potential market looks like and then enter it.

What’s missing here?  The realization, recognition, and finally admission that the client does not have any idea about the assignment it is commissioning.  The most unfortunate part of all this is that this “lack” is very often not communicated to the CI professional.  When it is, it can easily be handled, as I note below.  Instead, those commissioning the work come across as if they have already taken care of all the preliminaries, including at least some market research, and all that is needed is a good portrait of the competitive landscape before launching the brand-new Chevy Volt baby carriages.

What they have done is assume that there is a market and that all they have to do is position themselves in that market with their product.  Now does that sound unlikely?  It should, but it is not.  This is why, when you are doing any kind of competitive intelligence work, whether for a client or for an associate or internal team, you have to learn pushback.

What is pushback?  It is just what it sounds like.  Instead of taking the assignment and running with it, like the 600[1] who rode into the Valley of Death, you owe it to yourself, your client, and/or your team to ask a few simple questions.  The first is “What research – not just personal ‘knowledge’, hunch, speculation, or technical expertise – do you already have on this market space?”  When the answer is little or none, pause.  Then ask “What are you going to do once you have this report, that is, once you have the intelligence in your hands?”  If the answer is unclear, then you have to ask yourself, and them, “What is the point of going forward?”

Clarity at the beginning can help to produce actionable intelligence at the end.  A lack of clarity at the beginning almost certainly condemns the project to the graveyard of “nice to know” as opposed to “vital to the success of the next step”.

[1] “Was there a man dismay’d ?
Not tho’ the soldier knew
Some one had blunder’d:
Theirs not to make reply,
Theirs not to reason why,
Theirs but to do & die,
Into the valley of Death
Rode the six hundred.”

Tennyson, 1854