Truth, Lies and Statistics

Are You Biased? The short answer to this question, as we have already established, is Yes. The long answer is still Yes, but with the caveat that biases may be understood, controlled for and minimised – if indeed that is your intention – but not eliminated entirely. Take the example of the airline industry. The United States airline company Boeing reported over 30,000 passenger fatalities in their 2016 annual Statistical Summary of Commercial Jet Airplane Accidents report. Wow – that is a huge number of deaths of American citizens in a single year. Clearly it’s not safe to fly. Except that it’s not true. What is true is that Boeing reported over 30,000 deaths in their 2016 annual report. By carefully choosing my words, I accidentally on purpose made it look like all these deaths occurred in 2016. They didn’t. In fact Boeing reported that these were the cumulative deaths from 1959 to 2016, and not just in the US, but worldwide. The average figure (for what it’s worth – it’s ridiculously easy to lie with averages, as we shall see later) is just 526 worldwide airline passenger deaths per year, and the annual accident rate per million departures has declined annually from 50 in 1959 to around 1 in 2016. Clearly it’s safe to fly – and becoming increasingly so. You see, I told the truth, but I did not tell the whole truth. I deliberately deceived by using a number of different biasing techniques. I was highly selective in reporting only the facts I wanted you to see and added in some other stuff to artificially inflate the figure in the reader’s mind. And I did this without actually telling a lie. Never mind that Boeing’s numbers were recorded, tallied, analysed, calculated, graphed and nicely reported, if you have a mind to misrepresent the data it is not difficult to do so. Newspapers and politicians do it all the time and they are constantly being rewarded for it (more newspaper sales, more votes). Bias is a type of error in which a measure or observation is systematically different from the ‘truth’ (whatever that is). It can affect any stage of the research process, from your literature review to measuring and recording your data, from analysing and interpreting your results to publishing them in a journal, thesis, report or newspaper article.
Imagine you sent out questionnaires to a thousand households, asking a single, simple question: Do you like to answer questionnaires? I hope it’s obvious to you that those who enjoy questionnaires will answer that they do, while those who do not will file the paperwork carefully in the nearest waste-paper bin. The former group will have selected themselves into the study, while the latter will have selected themselves out. Any analyses, results and interpretations based upon such flawed data will have almost zero validity. This may be an extreme example, but selection bias like this is deliberately inserted into data every day by unscrupulous commercial organisations that are more interested in selling their wares to you than presenting the truth. Like the dieting programme that rejects the data of all those who drop out of the trial – most of those who drop out likely do so because it wasn’t working, whereas those who successfully lose weight are likely to stay in. Report your results, data be damned!
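If the self-selection mechanism isn’t obvious, a minimal simulation makes it stark. The numbers here are invented for illustration (1,000 households, a true 30 per cent approval rate); the point is what happens to the estimate when only the enthusiasts reply:

```python
import random

random.seed(42)

# Hypothetical population: 30% genuinely like questionnaires.
TRUE_RATE = 0.30
likes_questionnaires = [random.random() < TRUE_RATE for _ in range(1_000)]

# Only the enthusiasts reply; everyone else bins the form.
replies = [person for person in likes_questionnaires if person]

true_rate = sum(likes_questionnaires) / len(likes_questionnaires)
observed_rate = sum(replies) / len(replies)
print(f"True rate in the population: {true_rate:.0%}")      # ~30%
print(f"Rate among the respondents : {observed_rate:.0%}")  # 100%, always
```

However large the sample, the respondents can never tell you anything about the people who threw the form away.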
It is not just a simple matter of saying ‘I’m not biased, so therefore I won’t bias my study’, though – bias may be conscious (deliberate) or unconscious (accidental, careless or negligent), and there are literally hundreds of ways in which you can bias your research. And ignorance is no excuse. If you’re not experienced in data gathering, a more experienced practitioner will easily spot the bias in your processes and tear your study to shreds, rendering your 3-year study worthless. Not a nice feeling! And just because you’ve made calculations to sixteen decimal places, don’t think for one second that your numbers are in any way valid. A calculation based on poor assumptions and flawed measurements, however precise it may seem, is just as guilty of being biased as if you’d pulled the number out of a hat.

Incidentally, a few years back I did some analysis for a colleague who wanted me to check some of his results. I shall call him Paco, for no other reason than that is not his name. Despite having precisely the same data and having used the same analysis methods, I was unable to verify some of Paco’s results. When I asked why some results were statistically significant in his analyses when they were not in mine, he replied ‘well, if I exclude these 2 patients from the analysis the p-value becomes significant’. On enquiring why those patients should be excluded, he said ‘because when I do, the p-value becomes significant…’. Oh dear – my good friend Paco was guilty of omission bias. He didn’t make things any better for himself, though; he was quite unrepentant, insisting that there was a very good scientific reason why he should exclude those patients from the study – he just hadn’t found it yet!
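Paco’s trick is depressingly easy to reproduce. The sketch below uses entirely made-up numbers – two small, invented treatment groups, nothing to do with any real study – to show how quietly dropping two inconvenient observations flips a t-test from non-significant to significant:

```python
from scipy import stats

# Invented data: the last two values of group_a are the 'inconvenient' ones.
group_a = [5.1, 5.3, 4.9, 5.2, 5.0, 9.0, 8.8]
group_b = [5.8, 6.0, 6.1, 5.9, 6.2]

# Honest analysis: every patient included.
_, p_all = stats.ttest_ind(group_a, group_b)
print(f"All patients included : p = {p_all:.3f}")      # well above 0.05

# 'Paco' analysis: exclude the two patients that spoil the result.
_, p_trimmed = stats.ttest_ind(group_a[:-2], group_b)
print(f"Two patients excluded : p = {p_trimmed:.5f}")  # suddenly significant
```

The arithmetic in both runs is impeccable; the bias lies in choosing which run to report. Exclusion criteria decided after peeking at the p-value are omission bias, full stop – legitimate exclusions are defined before the analysis begins.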
Another way of adding bias to your study is by consistently understating or overstating a particular measure. If your measurement errors are systematically skewed in one direction, you are guilty of measurement bias, also known as observational bias. You might spot this at unscrupulous weight-loss clubs that ‘fix’ the scales at the first weighing to read a higher weight, then alter the scales to deliberately record a lower weight at subsequent weigh-ins. This ensures that a small but consistent weight loss is recorded, keeping the happy customer coming back for more.

A real-world example of measurement bias came in 1628, when the Swedish warship Vasa sank less than a mile into her maiden voyage, killing around 30 people. A recent investigation into the sinking of what was considered the most powerful warship in the world discovered that the ship is thinner on the port side than the starboard side. Apparently, the workers on the starboard side used rulers calibrated in Swedish feet (12 inches), while workers on the port side used rulers calibrated in Amsterdam feet (11 inches) – so every port-side measurement came out about 8 per cent short, consistently.

Of course, in these days of great computing power and automated software programs these kinds of mistakes just don’t happen. Or do they? A catastrophic and very expensive example of measurement bias came in 1999, when the Mars Climate Orbiter was lost after travelling too close to the planet’s atmosphere. An investigation found that the $125m probe was lost because the NASA team used metric units while a contractor used Imperial measurements. In other words, despite an abundance of space in that region of the solar system, the probe missed and crashed into Mars instead. Oops…

I particularly enjoy the story of the British rock band Black Sabbath, who, in 1983, had a replica of Stonehenge made for their stage show. Unfortunately, it was so big that it got in the way of the band and very few of the replica stones would fit on the stage. The legend goes that there was a mix-up between metres and feet, so the entire structure was roughly three times bigger than it should have been. This was parodied the following year in the riotous mockumentary This Is Spinal Tap, in which the fictional rock group orders a model of Stonehenge for their stage show, but the note written on a napkin says 18" (18 inches) rather than 18' (18 feet).

When it comes to reporting results, you can bias the outcome by highlighting favourable evidence that seems to confirm a particular position while suppressing evidence to the contrary. This is known as cherry picking (also known as suppressing evidence), and it may be intentional or unintentional. In public debate, though, and particularly in politics, it is rarely unintentional.
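Cherry picking is just as mechanical as the other biases. Here is a minimal sketch with invented monthly sales figures: both of the numbers it prints are ‘true’, but only one of them is the whole truth:

```python
# Hypothetical monthly sales figures, invented for illustration.
sales = [102, 95, 88, 91, 110, 84, 79, 105, 90, 82, 77, 108]

# Honest summary: the average over the full year.
print(f"Average, all 12 months : {sum(sales) / len(sales):.0f}")  # ~93

# Cherry-picked summary: quote only the months that beat 100.
best = [s for s in sales if s > 100]
print(f"Average, 'best' months : {sum(best) / len(best):.0f}")    # ~106
```

Quote the second number in your press release and you have not told a single lie – you have merely suppressed eight months of evidence. Which brings us to politics.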

After Donald Trump’s inauguration as President of the United States, the White House Press Secretary, Sean Spicer, held a press briefing and angrily accused the media of deliberately underestimating the size of the inauguration crowd, stating that it was the “largest audience to ever witness an inauguration – period…”. To back up this assertion, Spicer (who was subsequently – and mercilessly – parodied by comedienne Melissa McCarthy on the US comedy show Saturday Night Live) claimed that 420,000 people rode the Washington DC Metro on inauguration day in 2017, compared with 317,000 in 2013. This statement led to a series of highly public (and, to the casual observer, highly amusing) arguments between the press and the White House staff about the meaning of the words ‘truth’ and ‘facts’. When Spicer quoted the number 317,000, he was attributing it to Barack Obama’s 2013 inauguration, and it was a correct number – although it was cherry picked.

Full book available on Amazon Kindle.