Should you trust your analyst? (Part III)
The first stage of most business decision making is gathering data. In most cases the information is collected in the form of words. Once the words are available, the professionals who gathered the data analyze them and present the results to the decision maker. Recent scientific research shows that these professionals more often than not fail in their analysis of qualitative data. Consider the evidence from a recent scientific study.
A scientific study (Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998 Sep;32(3 Pt 1):310-7) introduced 10 major and 13 minor errors into a fictitious scientific manuscript. The manuscript was sent to all reviewers of the Annals of Emergency Medicine, the official publication of the American College of Emergency Physicians. The Annals has been in print for more than 25 years and is the most widely read journal in emergency medicine. The work described in the manuscript was a standard double-blind, placebo-controlled study of the effect of the drug propranolol on migraine headaches. The manuscript was reviewed by 203 reviewers. Eighty percent of the reviewers were professors at academic emergency medicine departments, and twenty percent were physicians in private practice.
The analysis of the reviewers' comments produced the following results:

    Recommendation   Reviewers   Major errors missed   Minor errors missed
    Publication             15                 82.7%                 88.2%
    Revision                67                 70.4%                 78.0%
    Rejection              117                 60.9%                 74.8%
According to the table, the 15 reviewers who recommended publication missed, on average, 82.7% of the major errors and 88.2% of the minor errors. In other words, they missed at least 4 out of 5 of the errors inserted in the manuscript. The major errors were defined by the authors as "nonremediable errors that invalidated or markedly weakened the conclusions of the study." It is interesting to note that one of the minor errors included in the manuscript was a misspelling of the drug's name. Of the 203 reviewers, 30 accepted the misspelled name as correct and used it throughout their reviews. The authors of the study commented on the results (with characteristic scientific understatement): "the small number of errors identified by the reviewers in this study was surprising. The major errors placed in the manuscript invalidated or undermined each of the major methodologic steps of the study … The identification of even a fraction of these errors should have indicated that the study was unsalvageable, yet the reviewers identified only 34% of these errors, and only 59% of the reviewers rejected the work."
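The quoted 34% figure can be cross-checked against the per-group numbers in the table. Below is a minimal sketch in Python, assuming the three recommendation groups account for all reviewers who made a recommendation (they sum to 199, so 4 of the 203 apparently made none); it weights each group's major-error miss rate by the group's size:

    # Cross-check: reconstruct the overall major-error identification rate
    # from the per-group figures reported in the Baxt et al. study.
    # Assumption: the three groups below (15 + 67 + 117 = 199 reviewers)
    # cover everyone who made a recommendation.

    groups = [
        # (reviewers, fraction of major errors missed)
        (15, 0.827),   # recommended publication
        (67, 0.704),   # recommended revision
        (117, 0.609),  # recommended rejection
    ]

    total_reviewers = sum(n for n, _ in groups)
    missed = sum(n * miss for n, miss in groups) / total_reviewers
    identified = 1 - missed

    print(f"Average major errors missed:     {missed:.1%}")      # ~65.7%
    print(f"Average major errors identified: {identified:.1%}")  # ~34.3%

The weighted identification rate comes out at roughly 34.3%, consistent with the 34% the authors report.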
Points to consider:
1. In this study, the reviewers were professors and private-practice physicians with an average of 3 years' experience as reviewers for the Annals, additional experience reviewing scientific manuscripts for 2 other scientific journals, and 10 years of experience practicing emergency medicine. These reviewers possess far more expertise in the subject of the tested manuscript than even the most experienced market researchers analyzing qualitative customer data, human resource managers analyzing candidate data, lawyers analyzing patents, or investment analysts and consultants analyzing business data. So, if professors and physicians failed to recognize major errors in a standard scientific manuscript, what are the chances that less trained professionals will identify gaps and inconsistencies in non-standard qualitative business data?
2. In this study, the reviewers were expected to identify the technical errors found in the manuscript. Identifying and eliminating this type of error is the objective of the years of training undergone by every scientist. Unlike the manuscript in this study, the great majority of qualitative studies in business contain psychological gaps and inconsistencies, and unlike scientists, most other professionals receive little to no training in identifying psychological errors. If the professors failed to identify most of the technical errors, what are the chances that less trained professionals will succeed in identifying the much more challenging psychological errors?
3. How worried should you be when a market researcher is analyzing your focus groups? A typical focus group transcript holds about 12,000 words. An average manuscript holds about 3,000 words, a quarter of a single focus group. A typical market research study consists of 4-8 focus groups, or 16 to 32 times more text than the manuscript. So, if the experts in this study failed to identify most of the technical errors in a volume of data equivalent to one fourth of a single focus group, what are the chances that a market researcher will identify the psychological and intellectual inconsistencies in a much larger dataset? (The sketch after this list works through the arithmetic.)
4. How worried should you be when a human resource manager is analyzing a pool of candidates? A transcript of a one-hour interview holds about 6,000 words (when hiring middle and top managers, the interviews might take a whole day and produce an order of magnitude more words). When interviewing a few candidates, the total data may run to 30,000 or more words (for 5 candidates). So, if the experts in this study failed to identify the major inconsistencies in a volume of data equivalent to one half of a single interview, what are the chances that a human resource manager will identify the major inconsistencies in a much larger dataset?
5. How worried should you be when an investment analyst is analyzing companies for you? An annual report might include tens of thousands of words. For instance, the IBM 2004 annual report is 100 pages long and includes more than 65,000 words. So, if the experts in this study failed to identify the major problems in a dataset holding less than 5% of the words in the IBM 2004 annual report, what are the chances that an investment analyst will identify the major problems hidden in the much larger dataset?
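The data-volume comparisons in points 3 to 5 reduce to simple arithmetic. The sketch below, in Python, works through the ratios; the word counts are the rough estimates given in the points above, not measured figures:

    # Rough data-volume comparisons from points 3-5, relative to the
    # ~3,000-word manuscript that the reviewers in the study analyzed.
    # All word counts are the article's rough estimates.

    MANUSCRIPT = 3_000    # typical scientific manuscript
    FOCUS_GROUP = 12_000  # transcript of one focus group
    INTERVIEW = 6_000     # transcript of a one-hour interview
    IBM_REPORT = 65_000   # IBM 2004 annual report (100 pages)

    datasets = {
        "market research study (4 focus groups)": 4 * FOCUS_GROUP,
        "market research study (8 focus groups)": 8 * FOCUS_GROUP,
        "candidate pool (5 one-hour interviews)": 5 * INTERVIEW,
        "IBM 2004 annual report": IBM_REPORT,
    }

    for name, words in datasets.items():
        ratio = words / MANUSCRIPT
        print(f"{name}: {words:,} words, {ratio:.0f}x the manuscript")

The output confirms the figures in the text: 4 focus groups give 16 times the manuscript's text, 8 give 32 times, 5 interviews give 10 times, and the annual report about 22 times.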
Summary:
The Baxt et al. study shows that professors and physicians, highly trained professionals, more often than not fail to identify major technical errors in a standard qualitative dataset, and as a result arrive at the wrong decision. What are the chances that less trained professionals will outperform the professors at identifying the more challenging psychological gaps and inconsistencies in a much larger, non-standard dataset? And when the professional analysts fail, what are the chances that you, although misdirected, will still make the right decision?
About the Author: Mike T. Davis, Ph.D., SCI, Rochester NY
We are the inventors of Computer Intuition™, a psycholinguistics-based program that analyzes the language that people use. The program calculates the psychological intensity, or psytensity, of every idea found in the input, and "converts what people say into what people do"™. SCI's clients include many Fortune 500 companies. We also serve many smaller companies and individuals.
In the Science on Decision Making series we analyze papers from scientific journals that include interesting findings on the relationship between decision making and analysis of qualitative data. More reports are available at http://www.computerintuition.com/Reports.htm