SANDY!

October 29, 2012
With Sandy bearing down on us, we are anticipating power losses of unknown duration. For that reason, there will be no additional posts this week. If you are in the path of Sandy, good luck.


How to write a book review

October 27, 2012

Over the years, I have reviewed well over 100 books.  The reasons I do this are several: I enjoy reading, I enjoy writing (I have written over 20 books), and I’m always trying to learn more about what I do and what I’m interested in.

People have asked me how to write a book review. Let me give you a look at how I do it. I know that some other people do not do it this way. In particular, I have in mind those “book reviews” that usually serve as a vehicle for someone to make the point that they are somehow smarter, more experienced, or more learned than the author whose work they are reviewing. You know the ones – where the review seems positioned to obtain pats on the back for the reviewer, and not to aid in understanding the author’s book.

First, appreciate the fact that writing a book is hard.  It is very hard work and can take quite a long time.  A number of years ago, my partner, Carolyn Vella, was in charge of the annual meeting of a group of professional and amateur writers.  One of the things she did was design a gift with the following warning for all writers on it:

“Writing is not hard.  All you have to do is stare at the computer until beads of blood form on your forehead.”

As a reviewer, your job is not to rewrite the book, but to offer a well thought out opinion to individuals who might consider buying the book, or using it in their business or in teaching.

Let me put it another way: your job is not to inflict pain; it is to look at the value of the work. If you find, and this is occasionally the case, that the work lacks any real value, then don’t review it.

Second, understand why the author wrote the book.  Very often this is found in an introduction or preface or perhaps in some materials from the publisher.  What you’re trying to do in your review is to explain to the target audience, and potentially other audiences, what you see in the book.  So if the book is aimed at a research chemist with a PhD, you certainly cannot fault it for being somewhat dense.  On the other hand, if it is aimed at junior high school students, a comment about its lack of clarity might be absolutely appropriate.

Third, actually read the entire book. That does not mean you should not also dip into the book first. For example, let’s say I have a book on management strategy. Because I’m considering reviewing it for an audience of competitive intelligence practitioners, I may well check the index to see what is indexed under the terms intelligence, business intelligence, competitive intelligence, competitor intelligence, etc. If I find nothing there, I may well pass on reading and reviewing the book. On the other hand, if I’m reviewing it for the Association for Strategic Planning, I’m not going to bother with that; I will dive right into the book.

Fourth, rely on what you read in the book, not elsewhere. Some publishers put out supplemental materials that they distribute to potential reviewers. These materials may include a half-page synopsis of the book. That can be useful for understanding the audience the book is aimed at, as well as its overall purpose. However, I have seen reviews that basically crib that synopsis. That is wrong.

Similarly, some publishers will send out a question-and-answer session with the author. I have also seen this occasionally dropped into a review in a way that gives the impression that the reviewer actually sat down with the author and had him/her answer specific questions. Your job as a reviewer is to review the book. Do not cut and paste marketing materials as part of that process.

Fifth, keep track of what is attractive or important to you. My habit is to dog-ear each page that has something interesting or important on it. This may be a new insight, a different way of defining the problem, or the like. At the end, I have two bases for an overall impression of the book: one is the reading I did; the second is the number of pages that I have marked as being useful to go back to. If it is an electronic book, just bookmark those pages.

Sixth, in the review, provide all the critical details about the book, that is, the full title, the full name of the author or authors, the publisher, the number of pages, the price, and an address or website from which a reader can order it or where it can be purchased at retail. With self-published books, e-book editions, and the like, it is no longer a matter of going to your local bookstore, if it still exists, and just asking for a copy of the book.

Seventh, I usually start my review by saying what the author is trying to accomplish, and to whom the book is aimed.

Eighth, be specific about why the book is useful or important. Feel free to quote, but be very selective in your quotations. Your job is to give the reader of the review a taste of the book, not to give away something new or different that the author has struggled hard to find or develop. You do not want to write a review where, if it were for a movie, you would have to insert “spoiler alert”.

Ninth, tell the readers for what group or subgroup the book is, in your estimation, a good acquisition. For example, you may find the book is a very elementary statement of its subject matter. Instead of saying that it is too simple, if appropriate, why not suggest that it is a good starting point for a beginner? (Remember Rule Number One.)

Tenth, once the review is published, always send a copy of the review to both the publisher and the author. Make sure you include complete information on where it was published or posted. Many reviewers send the review to the publisher only; some do not even do that. While it is the publisher’s responsibility to get these reviews to the author, frankly that takes time and is not a high-priority task. Speaking personally, the author will appreciate getting a copy of the review from you directly.


How to write up your analysis (Part 1)

October 23, 2012

Recently, the members of the International Association for Intelligence Education, individuals involved with teaching military, diplomatic, police, or business intelligence, have been discussing communications, and particularly writing. Their general feeling is that most people who write up the results of an intelligence analysis assignment do not do it very well. Of course, contributing to this could be the fact that, in the eyes of some of these people, few people write anything well.

However, let’s focus on writing up the results of your intelligence research.  This is a complicated topic.  In future posts, I will give you some hints about how to organize your research to help you when you move into the writing stage (Commercial message – there are a lot of these in our book, Proactive Intelligence: The Successful Executive’s Guide to Intelligence).  But here, let me get across a couple of points that impact your writing and can help you improve it rather quickly.

One, in spite of the existence of voice recognition packages, writing is different from speaking.  Those differences may be diminishing, but they still exist simply because in writing your audience cannot hear your inflection, your emphasis, and the concepts you stress. So, spend enough time on your writing – first drafts should be just that – drafts which are improved by reviewing and rewriting them.

Two, remember that anything that you write down will have to be read in the future by someone who cannot talk to you. If you are writing it for your own records, remember that when you read it again seven days, two months, or six months from now, you will not have at hand all the research that you did, nor will you have retained all of the analytical nuances and insights that came out of analyzing that research. So make the document complete. In other words, start by saying what it is you were researching, that is, the question or questions you were trying to answer. Then, answer each of them in turn, including the supporting research at that point.

Three, analysis is not the same as data. When writing up a report or file memo, consider keeping the two separate. One way is to simply label your analysis as “analysis”, “conclusion”, “discussion”, or the like, and keep the supporting data outside of that section. By doing this, you communicate to the reader where the data ends and your analysis begins.

Four, keep it simple.  A report is not an exercise designed to show how smart you are or how well you have mastered the English language or some scientific subset of it.  You are trying to conclude your research with a clear message.  Simple means be direct, not indirect. For example, in general, things do not happen.  Events or people or something else caused them to happen.  Write your sentences that way.

Five, if you do not know something, or you could not answer a question, say so.  It is very deceptive to make it appear that your analysis or your memo is somehow a complete coverage of the topic when you know, and we all know, that the odds of it being complete are remote. For example, if you do not know the cause of some event (see Point Four above), say that.

Six, keep it short.  From time to time there may be reasons for you to detail or record all that you did not find, resources you could not utilize, or other such omissions.  However, that is the exception rather than the rule.  In addition to keeping the report short, keep your sentences short.  As a general rule, if you cannot read a sentence back aloud without taking a breath, it is too long.  Cut it into two or even three shorter sentences.

This discussion will be continued from time to time. If you have any questions or suggestions, please just let me know.


Thoughts on the SIR meeting

October 19, 2012

The recent annual meeting of the Society of Insurance Research (SIR) focused on “Big Data”.  It was interesting to see exactly what that meant to the attendees and to competitive intelligence.

Many attendees came with the belief that Big Data dealt only with issues like cloud computing. But they left with the realization that they have to deal with being flooded with more and more data, with handling more and more unstructured data, and, curiously, with learning to throw away data early and often.

So what does this have to do with competitive intelligence?  Actually CI is involved in this in several ways.

First, Big Data means that more people in business, government, and the non-profit world are going to have to learn to deal with more kinds of data in more contexts. That, in turn, means that they are going to have to learn at least a little bit about CI. They need to understand enough about it to do a little of it, to understand when they should get help, and to transfer an interesting bit of data to someone else who can use it.

Second, it means that, for those doing CI, there are more people in more places within your enterprise whom you can approach for at least a little bit of help doing your work. Now, if the push to get rid of “noise” in the data continues, there may be only a limited window during which people still have useful data, but it should change the way you look at developing and maintaining your own internal network.

Third, it means that people within your company will be gathering more different types of data from more different types of sources than in the past. For those involved with CI, full-time or part-time, this means you have to rethink what kind of data you can possibly use, and how you can use it. For example, one presentation dealt with the coming use of visual recognition technology coupled with real-time advertising feedback. The example given was you watching the World Series and following a home run on your television with your eyes. Once it is clear that it is a home run and where it will land, the television feedback would insert a virtual advertising billboard behind that location. That means that as your eyes follow the home run downward, you would be reading the new advertisement. While the technology behind this and other scenarios is somewhat awesome, and necessarily antagonistic to privacy, the fact is that if it can gather one kind of data, then, from a CI perspective, it may be able to gather others.

So, while Big Data may initially seem to be a subject of interest only for those who toil in the vineyard of quantitative data, for those of us over on the qualitative side, it represents a new world.  Whether or not that is a brave new world is as yet unclear.


Where Did Competitive Intelligence Come From? (Part 2)

October 15, 2012

An additional force contributing to the development of competitive intelligence was the concept of environmental scanning. Environmental scanning was aimed at having business decision-makers review their entire operating environment: political, economic, cultural and social, as well as competitive. It was most often operated as an adjunct to strategic planning, providing some guidance to planners on the environment in which they were working during the development and, more rarely, the implementation of the strategic plan.

For those who are historically minded, you can see some of this in Herb Meyer’s important book, Real-World Intelligence – New Edition (Grove Weidenfeld, New York, 1991), where Herb uses the analogy of a pilot and radar:

“Like radar, a Business Intelligence System does not tell the executive—the pilot as it were—what to do. It merely illuminates what is going on out there on the assumption that with good information a competent executive will nearly always respond appropriately.”

Back to environmental scanning. Since decision-makers are in the business of making decisions, and not of constantly watching, environmental scanning following the development of a plan quickly became something which was handed off to others to do. It was a sort of primitive early warning system, alerting decision-makers to trends, facts, or events that could adversely or beneficially impact a business.

To that extent, it was both broader and shallower than much of competitive intelligence as we know it today. By that, I mean it was concerned with issues and targets beyond competitors, including suppliers, government regulation, the business climate, environment and weather, natural materials and natural disasters, and even war. Thus, those involved in environmental scanning were the eyes of the enterprise. They looked ahead to see what in their business environment was changing and then made a quick assessment of how that change would affect them.

For example, take healthcare services. While competition between group healthcare insurance companies certainly is a factor in that market space, other factors influencing how these companies and their competitors do include state and federal regulatory trends, the overall state and future of the national economy, the growth or lack of growth of large employers, the growth or stagnation in the number of employees working for small businesses, the age demographics of the working population, and advances in medical care.

For reasons which I really don’t understand, environmental scanning seemed to fall off the horizon (sorry). Perhaps it is that no one person or even one unit could conceivably provide detailed, ongoing, smart, and reliable evaluations of all of these factors. Or perhaps “bottom line”-oriented business people could not see how paying others to watch where they were going could possibly be valuable.

Regardless of the reason, environmental scanning remains a part of the heritage of competitive intelligence, particularly contributing to developing and managing early warning systems for competitive intelligence and strategic planning purposes.


Who’s worst?

October 12, 2012

The last several days have seen people in politics raise questions about the validity, or more generously the accuracy and consistency, of recent federal statistics on the unemployment rate, the number of people filing first claims for unemployment compensation, and related data.  Now what will happen, almost certainly, is that this most recent monthly data will be “restated” next month or the month thereafter, a continuous process with flash macroeconomic data in the United States.

Does this mean you cannot rely on US government data? Almost certainly, yes. But the US is not unique; it is probably just the best of a bad lot. Consider The Economist’s recent discussions of data in China:

“With China so engaged in the global economy, there is a never-ending stream of data, often unreliable, to feed the appetites of economic-research firms, investment banks, hedge funds, short-sellers, political risk advisors, think tanks, consultancies and financial and military newsletters – not to mention legions of academics, journalists, diplomats and spies.”  Banyan, The Leader Vanishes, September 15, 2012.

“[N]o other important country is as murky [in terms of providing accurate, credible data] as China.” Schumpeter, The Summer Davos Blues, September 15, 2012.

That China is murky, with respect to data at both the government and company levels, does not excuse the way in which the United States collects and processes econometric data. However, for politicians, businesses, and others to make decisions based on the movement of 1/10 or 1/100 of some monthly measure from US government statistics is also foolish.

There may be many iron rules about data, but for competitive intelligence, I would propose the following:

Data from only one source is not data.  It is conjecture – until it can be confirmed.

Data based on telephone surveys should be increasingly subject to question. Our brethren in marketing have already come to this conclusion, given the demographics of the populations that have shifted from landlines to cell-phone-only service, and the fact that cell phone users are notoriously difficult to survey.

The smaller the sample is and the more quickly the data is collected, the more likely it is to be inaccurate, and inaccurate in an unpredictable manner.

Combining data sets that are individually unreliable does not necessarily make the conclusion more reliable. I realize that there are those in the statistics world who would disagree with this, but I do not believe that such aggregations always contain the necessary mutually offsetting mistakes to generate a reliable whole.

Any data that has to be restated should not have been relied on in the first place, or at least not until it has been restated.

Using short-term data to determine the presence and direction of a long-term trend is not forecasting; it is at best guessing and at worst irresponsible.

What is truly ironic about all of this is that the US government releases such statistics, upon which so many rely with so little reason, while it would never allow a firm going public, such as Facebook[1], to get away with using similarly dubious data.


[1] Linda Sandler, Brian Womack and Douglas MacMillan, “Facebook Fought SEC to Keep Mobile Risks Hidden Before IPO”, Bloomberg, October 10, 2012.

 


Where Did Competitive Intelligence Come From? (Part 1)

October 9, 2012

That is an interesting question.  There are at least two answers.

The first, and widely accepted, answer is that competitive intelligence originates with Harvard Professor Michael E. Porter’s seminal[1] 1980 work, Competitive Strategy – Techniques for Analyzing Industries and Competitors. More specifically, intellectually, it grew out of Chapter 3, “A Framework for Competitor Analysis”, and Appendix B, “How to Conduct an Industry Analysis”.

From there, it was fed by the presence of a large number of former US government intelligence officers who were looking to business for second careers after retiring from the US Government. Prime among them was Jan Herring, a former CIA professional intelligence officer, who is often credited with setting up the first corporate competitive intelligence unit at Motorola. Now, whether the former intelligence officers picked up Porter’s concept or their corporate mentors merely adopted it is not clear and probably not relevant.

From there it was but a short step to the creation of an association (don’t we have an association or group for everything?) by people interested in the subject, the Society of Competitive Intelligence Professionals (SCIP), now Strategic and Competitive Intelligence Professionals. The founders and early members of SCIP came from a wide variety of backgrounds, including marketing, market research, academia, corporate strategy, advertising, the law, government intelligence, medicine, and accounting. Since then they have, individually and collectively, worked to promote competitive intelligence and to develop it into a cohesive subject, to the point where it is being taught in a large number of undergraduate and graduate business curricula.

Now for the second version, a long story, or more correctly a story that goes back further. One can argue, and I’m not the first to make that argument, that Professor Porter’s description of how to conduct an industry analysis is merely a description of how good secondary research should always be done in a business context. The key element is Porter’s focus on competitors and his advocacy that companies should be regularly checking on what their competitors are doing and are capable of doing, and acting on that. But that element was not new.

This version holds that the roots of competitive intelligence go back as far in business as we would like them to go.  We can look at the great European banking family, the Rothschilds, whose agents collected information on the progress of European wars so that they could make market trading decisions before the rest of the market.  That was intelligence, but not directly on competitors.  Or we can take a look at the industrial revolution, with its efforts by French companies to steal technology from English companies in the exploding textiles industry.  Not intelligence, but certainly focused on competitors.

Given all that, competitive intelligence should have existed for a very long time, but it has not. My view is that as companies became more and more focused on their own metrics, particularly once computers gave them that option, over time they lost strategic and tactical focus on anything external except their customers and the market space they were in.

Competitive intelligence, with its emphasis on collecting and synthesizing raw data to help companies compete better, predates the Internet information revolution, which is still going on. CI has exploited that revolution, and may even be enhancing it, but that is a subject for a future post.

 


[1] Seminal comes from the Latin seminalis, meaning influential, and evidently including the concept of being quoted more often than actually read, as in “Karl Marx’s seminal Das Kapital”.