Tottenham Report: AI, behind the hype

Monday, April 15, 2024 11:00 PM

There has been a lot of hype recently about AI. Suddenly, many, though not all, companies have been promoting the idea that their products contain or are driven by AI. What is AI and what can it do? 

In 1950, British mathematician Alan Turing proposed a test to determine if machines could think. He later reframed the question as whether a machine could exhibit intelligent behaviour. 

The test was fairly simple. A participant entered into a text-based conversation with two hidden entities, knowing that one of them was a machine but not which one. Through conversation alone, the participant had to determine which was human and which was not. 

Obviously, the test needed to be run quite a few times to see if the machine was really able to “exhibit intelligent behaviour.” But would the machine really be intelligent? Which begs the question, what is intelligence? 

Even with ChatGPT (see below), humans can correctly pinpoint the machine more than 60 percent of the time. 

IBM developed a supercomputer called Deep Blue that was programmed to play chess. It was so good at chess that in 1997, it beat world chess champion Garry Kasparov in a six-game match: two wins for Deep Blue, one for Kasparov, and three draws.  

The machine had been taught strategies and to “think through” up to 20 moves ahead, which can involve hundreds of millions of different combinations. What it was not able to do was modify its playing strategy during a game. This had to be done by IBM programmers between games. Kasparov claims IBM cheated by intervening during the games and causing him to lose, a claim the Deep Blue team deny. 

Since then, with the development of neural networks (systems that learn by adjusting internal weights based on previous outcomes) and super-fast computers, there is scarcely a game at which computers cannot beat the best humans.  
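The weight-adjusting idea behind neural networks can be sketched in a few lines. This is a minimal, illustrative example (a single artificial neuron, sometimes called a perceptron), not the architecture of any particular system: after each guess, the weights are nudged up or down according to the previous outcome, until the machine learns the logical OR function from examples rather than rules.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge weights after each outcome -- the 'weighting based on
    previous outcomes' described above, in its simplest form."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred        # the previous outcome drives the update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach the logical OR function purely from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Nothing here was programmed with the rules of OR; the behaviour emerges from repeated correction, which is the essential difference from Deep Blue's hand-tuned strategies.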

The most recent development turned loose on the unsuspecting public is machine learning. These systems use algorithms and statistical models to find, analyse, and draw conclusions from patterns in extremely large data sets. Humans are good at finding patterns, but when data sets are this enormous, with so many variables, we cannot sift through them and draw conclusions in a timely fashion. 
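A small sketch of what that pattern-finding means in practice. The data here are synthetic and purely illustrative: one variable secretly drives another, buried under noise across 100,000 records that no human could eyeball, yet a basic statistical model (Pearson correlation) recovers the hidden link in a fraction of a second.

```python
import random

def pearson(xs, ys):
    """Pearson correlation: a simple statistical model for spotting
    a linear pattern linking two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic 'large data set': a weak real effect drowned in noise.
random.seed(42)
xs = [random.gauss(0, 1) for _ in range(100_000)]
ys = [0.3 * x + random.gauss(0, 1) for x in xs]

r = pearson(xs, ys)   # the machine recovers the hidden pattern
```

Real machine-learning systems apply far more sophisticated models than this, but the principle is the same: statistics at a scale and speed humans cannot match.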

Large language models like ChatGPT are good at specific tasks, such as writing code, emails, and papers on certain subjects. However, although they are improving, they do have their shortcomings. For example, some lawyers, to their detriment, have tried to use them for court filings, only to find that some of the citations presented were fabricated, referring to cases that do not exist. 

The data sets that “intelligent” computers use to learn are extremely important. Turning computers loose on the World Wide Web creates a challenge: what to do about false information? How does the computer know where to look and what information it should take account of and what it should discard? 

Today, the internet is awash with false information. It is all around us and easy to access. Some people believe conspiracy theories because they reinforce what those people have already chosen to believe, a.k.a. confirmation bias. Hence the persistence of even the most ridiculous conspiracy theories. Even when you point out the considerable flaws in the “theory”, most believers will not change their view. Similarly, neural networks that repeatedly encounter the same false information can themselves redistribute that false information. 

A 14th-century philosopher, William of Ockham, posited that an explanation that uses the smallest set of elements is likely to be the right one. This principle is now called “Ockham’s razor” or more commonly “Occam’s razor”, and is usually stated as “the simplest explanation is probably the correct one”. 

It is not a simple thing for algorithmic machines to determine whether the information they encounter is true or not. Different techniques have been developed, including using the number of degrees of separation between the subject and the object of a statement. The more degrees of separation, the more likely the premise is to be false. Using this method, computers can correctly determine false information, but not 100 percent of the time. The best today predict with about 77 percent accuracy; the answer is probabilistic. 
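The degrees-of-separation idea can be illustrated with a toy knowledge graph. This is a deliberately simplified sketch, not the published method: the graph below is hypothetical data, and a breadth-first search counts the hops between the subject and object of a claim, with fewer hops suggesting a more plausible statement.

```python
from collections import deque

# A toy knowledge graph: nodes are concepts, edges are known relations.
# The contents are hypothetical, for illustration only.
GRAPH = {
    "Alan Turing": ["mathematician", "United Kingdom"],
    "mathematician": ["Alan Turing", "science"],
    "United Kingdom": ["Alan Turing", "Europe"],
    "science": ["mathematician"],
    "Europe": ["United Kingdom"],
}

def degrees_of_separation(start, goal):
    """Breadth-first search for the shortest path between two concepts.
    Fewer hops suggests the claimed link is more plausible; returns
    None if no connection exists at all."""
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in GRAPH.get(node, []):
            if neighbour == goal:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None
```

So “Alan Turing is a mathematician” sits one hop away, while “Alan Turing is linked to Europe” needs two, and a claim with no path at all would be the most suspect. As the article notes, the answer is probabilistic, never a guarantee.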

But do these systems display intelligence? Obviously, it depends on your definition of intelligence. At its simplest level, psychologists define intelligence as the ability to recognise problems and to learn and apply learnings to solve problems. Using that definition, computers are intelligent. 

I would argue that even in the simplest definition of intelligence, you need to add the ability to reason, along with the awareness that you do not know enough about something and need to research it further before being able to draw sensible conclusions.  

Are computers able to reason? Do they “know” what they do not know? Not yet, I think. Both reasoning and “knowing”, I believe, require consciousness, which these systems do not yet possess. 

In conclusion, we should not blindly trust the output of neural networks or machine learning, just as we should not simply trust all that we read, especially on the internet. Human intervention is still required. A healthy dose of scepticism and the application of common sense will help us get to the right conclusion, but sadly not all of the time.