Tottenham Report: Do as I say, not as I do

Tuesday, March 12, 2024 11:00 PM
  • Commercial Casinos
  • iGaming
  • Sports Betting
  • Andrew Tottenham — Managing Director, Tottenham & Co

A few weeks ago, Eilers & Krejcik published a report that investigated the impact of online gambling on the land-based industry. It found no cannibalisation of land-based gaming revenues. Brian Wyman, a partner at The Innovation Group, took the report to task on LinkedIn, arguing that the E&K methodology was flawed and stopping just short of accusing the authors of cherry-picking the data; he pointed to an earlier report published by The Innovation Group that came to the opposite conclusion.

And then the inevitable happened. LinkedIn is a social-media outlet, and many people piled on with opinions as to whether Mr. Wyman's criticism had any merit. Most commentators from the igaming industry made irrelevant remarks, such as how online gambling would allow land-based operators to take market share; others wrote about the jobs created by the opening of online gambling in a jurisdiction, which would offset the loss of land-based jobs. Clearly, they did not understand the hypothesis being tested. Debate is a lost art, and social media has a great deal to answer for.

My own view is that the conflicting conclusions were probably meaningless. How so? Methodologies differ and can give different results. It is also extremely difficult to look at economic impacts in isolation: many things happen in the economies of states and countries that affect residents' spending patterns, and trying to isolate one particular cause is extremely difficult. Add to that the different timeframes of the studies, and I would have been very surprised had they come to the same conclusion.

It is easy to misunderstand what statistics might be telling us, and it is even easier to misuse them. The following shows how changing a statistical sampling methodology without rigorous testing, comparison, and analysis can lead to significantly different outcomes.

Historically, rates of participation in gambling in Great Britain were measured quarterly, using telephone surveys. An annual survey of rates of problem gambling in England was commissioned by the National Health Service for England and carried out by NatCen, using face-to-face interviews as part of a wider array of questions about health, activities, etc. This methodology was considered the “gold standard” by epidemiologists.

The other countries in Great Britain (i.e., Wales and Scotland) were added in 2010, using separate surveys with different timing and questions, making it difficult to align all the responses to get a good picture of what was going on. However, one thing was clear: rates of problem gambling had not really changed much over two decades or more, and had possibly declined slightly.

The Gambling Commission of Great Britain (GC) understandably wanted greater flexibility (it had no control over the health survey) and a more cost-efficient way of delivering the survey; face-to-face and phone surveys are expensive. After a consultation, it embarked on its own pilot survey, but the methodology was changed to an online questionnaire with a paper-response option.

Statistical sampling is not easy. Who you ask, what you ask, when you ask, how the survey is delivered, and how you ask it can all have a large impact on who responds, how many respond, and what they say.

The pilot survey was carried out in 2020 and achieved a response rate of 21%, significantly lower than that of the annual NHS survey. The pilot found that 63% of the public had gambled in the previous 12 months, compared to 54% in the 2018 survey, and that the prevalence of problem, moderate-risk, and low-risk gambling was three times higher than in the 2018 survey.

The GC worked to improve its survey method to better represent the household makeup of Great Britain and actual gambling behaviour. The new, full Gambling Survey for Great Britain (GSGB) was launched in the middle of 2023 and achieved a response rate of only 17% (3,774 responses). It found significantly higher rates of moderate-risk and problem gambling than the 2022 pilot survey.

How can two surveys produce such differing results? Clearly, they cannot both be right. It really comes down to methodology. It is possible that the NHS surveys understated problem gambling and the associated harms. But it is more than likely that a voluntary, targeted online survey asking only about gambling behaviours will draw responses disproportionately from those with an active interest in gambling, whether their view is positive or negative. Those who gamble occasionally or not at all may simply choose not to respond, hence the low response rate and the higher gambling-participation rate.
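To get a feel for how far self-selection alone can move the headline numbers, here is a minimal, hypothetical sketch in Python. The response rates in it are invented (the real response propensities of gamblers and non-gamblers are unknown); the point is simply that a modest gap between the two groups' willingness to respond is enough to turn a "true" participation rate of 54% into an observed rate of about 63%, at an overall response rate of about 21%.

```python
# Back-of-the-envelope illustration of self-selection (nonresponse) bias.
# All response rates below are invented; they are chosen only so that the
# arithmetic lands near the figures quoted above (roughly a 21% overall
# response rate and 63% observed participation, against an assumed "true"
# participation rate of 54%).

population = 100_000            # hypothetical adult population
true_participation = 0.54       # assume the earlier survey's 54% is "true"

gamblers = int(population * true_participation)
non_gamblers = population - gamblers

# Assumption: people with an active interest in gambling are more likely
# to return a voluntary questionnaire that is solely about gambling.
response_rate_gamblers = 0.245
response_rate_non_gamblers = 0.17

responding_gamblers = gamblers * response_rate_gamblers
responding_non_gamblers = non_gamblers * response_rate_non_gamblers
respondents = responding_gamblers + responding_non_gamblers

overall_response_rate = respondents / population
observed_participation = responding_gamblers / respondents

print(f"Overall response rate:  {overall_response_rate:.0%}")   # ~21%
print(f"Observed participation: {observed_participation:.0%}")  # ~63%, vs 54% assumed true
```

None of this proves the GSGB estimates are wrong; it only shows that the gap between the two surveys is well within the range that differential nonresponse could produce.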

Most experts who study why people think about taking, or attempt to take, their own lives will tell you that it is not possible to put it down to a single cause. Nevertheless, the survey contained questions about suicide and suicidal thoughts. This is an example of framing. In a survey solely about gambling, questions about whether a participant’s gambling has caused them to think about, or attempt, taking their own life will answer only that question, and not whether other things, such as depression, relationship problems, or disordered eating, were also factors.

The GC asked Patrick Sturgis, a professor of quantitative social science in the Department of Methodology at the London School of Economics, to carry out an assessment of the GSGB. The resulting report was published in February 2024.

The GC became so concerned about the misuse of statistics in the “debate” (and I use that term very loosely) around the re-regulation of gambling in the UK that its CEO publicly stated that the GC would call out entities that misquoted statistics to their own ends. The Betting and Gaming Council received a public rebuke, and Gambling with Lives, a charity concerned with suicide “caused” by gambling, also received a letter.

It is strange, then, that the GC responded to the assessment in the way that it did. Under the headline “Independent assessment endorses Gambling Survey for Great Britain”, the GC trumpeted that, according to Professor Sturgis’s assessment, the GSGB “has been exemplary in all respects”. This is a bit like those posters for West End shows that display “Amazing” (The Times) in big bold letters, when the actual quote was, “It is amazing that this show was ever produced!”

What Professor Sturgis actually wrote was, “My assessment of the development of the Gambling Survey of Great Britain (GSGB) is that it has been exemplary in all respects.” It is the “development of the GSGB” (my emphasis) that was exemplary, not the survey itself.

This has been pointed out to the GC, but it has refused to back down. The GC’s CEO, Andrew Rhodes, publicly refused to confirm or deny that the statistics, which will be used to support policy, are reliable, yet last week, at a public meeting with stakeholders, he said he does stand behind them. The GC plans to publish the results of the new survey anyway, in July.

I can see the headlines now. “Problem Gambling Rates Rocket to Record Levels”. The industry is in for a kicking, even if the GC provides some kind of health warning about the veracity of the findings.

What the GC did not publish or comment on was this observation from Professor Sturgis in his assessment, “Until there is a better understanding of the errors affecting the new survey’s estimates of the prevalence of gambling and gambling harm, policy-makers must treat them with due caution, being mindful of the fact there is a non-negligible risk that they substantially overstate the true level of gambling and gambling harm in the population.” But why let a few errors get in the way of a “successful” survey if it meets the needs of the GC?

Nor did the GC mention the seven recommendations the professor made to improve the survey.

The GC has made it much harder to row back from its position. If the NHS annual survey continues to show a problem gambling rate that is less than one fifth of the rate shown by the GC survey, how will the GC respond? Will it say that its survey and methodology are better, and the NHS has been wrong all these years?

There seems to be a bit of a habit forming here. The Chair of the Advisory Board for Safer Gambling (a body that provides independent advice to the GC on gambling harms), Dr. van der Gaag, has said, and I paraphrase, that whether a report relies on accurate data is beside the point, so long as it meets the needs of people and families harmed by gambling.

So evidence-based policy making is a good thing, as long as those advising the policy makers get to cherry-pick the evidence. A case of “Do as I say, not as I do”.