Tottenham Report: Algorithms and integrity

September 14, 2021 2:00 AM
  • Andrew Tottenham — Managing Director, Tottenham & Co

Last week, I had the opportunity to participate in an online panel discussion, “AI: Algorithms and Integrity”, organised by the European Association for the Study of Gambling (EASG) and ably moderated by Arjan van 't Veer, Chair of EASG and Secretary General of the European Lotteries Association.


Given the increasing use of artificial intelligence, which, whether we know it or not, affects our daily lives, the EASG organised this panel to look at issues of integrity and how to gain acceptance of AI. AI is now being used in the gambling sphere for facial recognition and to spot disordered gambling behaviour, in the hope of minimising the harms from gambling.

Obviously, online gambling is a data-rich environment, and AI’s ability to identify patterns in that data and associate them with potentially problematic behaviour is far beyond what could be achieved through human observation alone.

My fellow panellists for the discussion on AI included Simo Dragicevic, Managing Director of Playtech Protect and Head of Playtech AI, and Birgitte Sand, former Director of the Danish Gambling Authority and Director of Mindway AI.  

Simo gave a presentation based on a paper he co-authored, “Accountability in AI: From Principles to Industry Specific Accreditation,” though it has yet to be published. The presentation provided an overview of the need for accountability in AI and how this might be achieved. Afterwards, Birgitte and I were asked to comment. We concurred with Simo’s view that, given a few well-publicised scandals involving AI, there was a real risk that public acceptance of AI would diminish unless a process was established to minimise these shortcomings.

To policy makers, politicians, regulators, and the general public, AI is essentially a black box. They know what the developer of the AI says it does, but they don’t know what actually goes on inside the box. Whatever it is must be taken on trust.  

As I have mentioned before in these articles, AI has limitations, such as biases. For example, with machine-learning AI, biases can get “built in” when the data set used to train the AI has some bias in it.   
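
To make that mechanism concrete, here is a minimal, purely illustrative Python sketch. The groups, stakes, and intervention records are invented; the point is only that a system which learns from historically skewed decisions will reproduce that skew as if it were a fact about the players.

```python
from collections import defaultdict

# Invented training records of identical play: (group, stake, staff_intervened)
history = [
    ("group_a", 100, False), ("group_a", 100, False), ("group_a", 100, True),
    ("group_b", 100, True),  ("group_b", 100, True),  ("group_b", 100, False),
]

outcomes = defaultdict(list)
for group, stake, intervened in history:
    outcomes[group].append(intervened)

for group, flags in outcomes.items():
    rate = sum(flags) / len(flags)
    print(f"{group}: learned intervention rate {rate:.2f} for identical £100 play")

# The "model" here simply memorises past decisions; because staff intervened
# more often against group_b for the same behaviour, the system carries that
# bias forward into every future decision about group_b players.
```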

Facial recognition is very good at identifying white and, increasingly, Asian faces, but much less accurate with black faces. Why? For one, the software was originally designed by white males. For another, a lot of investment has now been put behind Chinese developers, and they are making enormous strides in improving the technology.

Quite a few reports indicate that lending algorithms used by banks unfairly assess the ability of women and of black and Asian people to afford debt. Given the same circumstances, women and minorities are likely to have to clear a higher bar.

A study published in 2019 found that software commonly used in U.S. hospitals was less likely to refer black people than “equally sick” white people to programmes that would improve care for patients with complex needs.  

A series of well-known cases is currently the subject of lawsuits in which black professional (American) football players were held to higher standards (a practice called “race norming”) when proving impairment from concussive injuries incurred whilst training or playing, which in turn affected the size of their financial compensation. The list goes on and on.

If this continues, political support for and public confidence in AI will collapse. Inevitably, there will be a call for regulation. The European Commission has already proposed regulations for the development, sale, and use of AI systems. Other governments are following suit.  

Unless the AI industry gets ahead of this, the opposition will build up a head of steam and the sector will find itself in the same position as genetically modified foods (GMF). Due to a lack of understanding, GMF are currently banned in the EU, even those that can help children suffering from Vitamin A deficiency; Golden Rice, for example, can significantly reduce the risk of blindness and mortality in children. To date, the only country that has approved its planting is the Philippines. And we ban GMF despite the fact that we have been genetically modifying plants and animals for centuries, using selective breeding.  

Clearly, the scale of the risks with AI depends on the type of AI and how it is being used. In the gambling sector, as mentioned, we are using it to identify players who have been barred from entering a gambling establishment and to identify harmful behaviour. AI is also used to promote, on an individual basis, products that a customer may find enjoyable, but could the AI exert undue influence? How this fits into the debate on encouraging risky behaviour will need to be discussed.

Simo and his co-authors’ proposal is to determine the level of risk a particular AI poses and self-regulate accordingly. The proposal includes a system of stakeholder involvement, transparency, external review, and accreditation, along with internal and external auditing to ensure that the AI does what it says on the tin, without maleficence, bias, or error. Part of this includes human oversight and judgement.

Birgitte took the position that the regulator and the gambling industry should have a dialogue about how AI is being used, how it can be developed, and what the potential could be. Regulators need a greater understanding of how AI can be deployed and what the impacts could be.  

I wholeheartedly agree with this approach. However, as things stand in many countries, regulators take an adversarial position. They see their role as beating the industry into a compliant corner, using large fines, the threat of fines, and the removal of licences to do so. I do not believe this is healthy for the industry or the regulator. It leads to an “us versus them” culture, and best practices are not shared.

A much better approach would be one that the medical profession is arguing for: no-fault provisions. When things go wrong, and they inevitably will, operators can put their hands up, explain to the regulator what went wrong and why, and what they are doing to put it right, without risk of penalty. Obviously, for egregious or repetitive breaches, the regulator can fall back on punishments.  

I have a few other concerns. We need to understand our own limitations, as well as AI’s. AI can do a great deal, more than humans ever can, but it is a non-sentient algorithm, a purely systematic process. Only by understanding its and our limitations can we deploy it successfully.  

AI may well be able to identify someone whose gambling behaviour is disordered and poses a risk of harm, but it may pull others into that net who do not belong there. By the same token, it may, and probably will, miss some who should have been identified.
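
Here is a small, hypothetical sketch of that trade-off, using invented risk scores and outcomes: whatever threshold a model uses to flag players, moving it trades missed cases against wrongly flagged ones.

```python
# Invented (risk_score, actually_experiencing_harm) pairs, for illustration only.
players = [
    (0.95, True), (0.80, True), (0.62, False), (0.55, True),
    (0.40, False), (0.35, False), (0.20, True), (0.10, False),
]

for threshold in (0.3, 0.5, 0.7):
    flagged = [(score, harmed) for score, harmed in players if score >= threshold]
    wrongly_flagged = [p for p in flagged if not p[1]]
    missed = [(score, harmed) for score, harmed in players if score < threshold and harmed]
    print(f"threshold {threshold}: {len(flagged)} flagged, "
          f"{len(wrongly_flagged)} not actually at risk, "
          f"{len(missed)} at-risk players missed")

# Lowering the threshold catches more of those at risk but pulls in people who
# do not belong there; raising it does the opposite. No setting removes both errors.
```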

For example, we hear a great deal about the person who was allowed to lose £50,000 or more without any intervention by the operator. Thankfully, there are very few of those examples. But what we do not hear very much about are those who lose small amounts — all they have for rent or food. AI is unlikely to pick these people up. But for them, the consequences can be just as devastating as for the person who lost £50,000, if not more so.
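
To illustrate, here is a hypothetical sketch with invented figures and an invented £10,000 flagging rule: a system keyed to the absolute size of losses misses the player whose small loss was everything they had.

```python
ABSOLUTE_LOSS_FLAG = 10_000  # hypothetical rule: flag monthly losses above £10,000

# Invented figures, for illustration only.
players = [
    {"id": "player_1", "monthly_loss": 50_000, "disposable_income": 500_000},
    {"id": "player_2", "monthly_loss": 300, "disposable_income": 250},
]

for p in players:
    flagged = p["monthly_loss"] > ABSOLUTE_LOSS_FLAG
    beyond_means = p["monthly_loss"] > p["disposable_income"]
    print(f"{p['id']}: flagged={flagged}, losing beyond their means={beyond_means}")

# player_2 never trips the flag yet has lost more than they can afford;
# player_1 does trip it, even though the loss is comfortably affordable.
```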

Any human intervention is likely to be biased, so the decision that is made is likely to be biased too. We all have biases; we need to recognise and own up to them.  

Audits are far from perfect; we should not rely on them alone. Many investors believe that they have been badly let down by companies’ auditors.

Simo and his colleagues have got it about right as it applies to AI in gambling, but perhaps the scope should be expanded to include the points I make above. Additionally, this is a plea for the industry and regulator to enter into sensible discussions about the future of AI within this sector.