Jennifer Golbeck offers a reassuring assessment for people fretting that artificial intelligence threatens the future of humankind.
“Artificial intelligence is really dumb,” Golbeck, director of the University of Maryland social-intelligence lab and a professor in the College of Information Studies, told attendees Wednesday at the AGS GameON conference in Reno, Nevada. “It tries very hard to approximate what people do, but it doesn’t understand anything about what it’s doing.”
However, she added, the data-gathering practices already common around the world point to a more terrifying future than the one imagined in AI doomsday predictions. “The people warning you about Matrix-style stuff are often the ones who are doing the really scary stuff and they want to distract you with things that aren’t going to happen.”
She cited a string of troubling instances based on data gathering that people assume to be private: a 15-year-old girl who began receiving advertisements for maternity clothes and diapers weeks before she told her parents she was pregnant; a Prozac-taking dog whose owner – actually, Golbeck – received a text reminder from the pharmacy about Mental Health Awareness Month, apparently based on the pet’s prescription record; phones that listen to your conversations even when not in use; a European football league that tracked fans’ phones to see when they visited bars during games, then turned on the microphone to check whether the establishments were showing the games without a license; employees of a surveillance-camera company helping themselves to videos from inside customers’ homes; and smart speakers and similar devices that listen even when owners think they’re not.
“We’re all tracked without our knowledge all the time,” Golbeck said. The trackers range from Big Tech companies to developers who include tracking in games and other apps.
While AI can help with many forms of writing – formulaic letters, text chatting, even titles for panel discussions – “it’s definitely not good for facts,” she said. “AI hallucination” describes the fake images or reports these tools produce, whether a user asks an app such as ChatGPT for a picture of the Pope in a puffy white coat or, as Golbeck did for the conference, for a 100-word summary of a published paper about the casino industry. ChatGPT cited “a balanced inquiry” by Dr. John C. Stevenson, published this year in the Journal of Gambling Studies, on the socioeconomic and political implications of the casino industry. Both the author and the paper are fictitious, she said.
She also pointed to a Texas professor who got into trouble after flunking his entire class of graduating seniors because they supposedly cheated by having ChatGPT write their final papers. He said he determined this by copying and pasting the papers into the app and asking whether it had written them. The app said it had.
“It doesn’t know!” Golbeck exclaimed. “It doesn’t keep a record of the stuff that it wrote. All it does is say something like what a person would say, ‘Yeah, I totally wrote that.’
“It’s not a fact-checking engine,” she continued. “It doesn’t understand things. It doesn’t have very much knowledge.”
The courts haven’t caught up with many issues regarding AI. According to Golbeck, current law says anything created by AI cannot be copyrighted; moreover, the people who created the legitimate images and writings that AI draws on aren’t credited, paid, or even asked whether their work can be reproduced.
“It’s kind of a scary space,” Golbeck said. “A lot of lawsuits are going on, but I think it’s going to be a while before we see regulation. There are no solutions to these problems yet.” She cited the Hollywood writers’ strike, in which a key issue is whether AI can be used on scripts. The writers want to keep that in human hands, while studios want to be able to replace some writers with AI.
For Golbeck, a basic question about the ethics of using AI is whether humans intervene in the process. She told of an Air Force “thought experiment” in which an AI was assigned to identify surface-to-air missile sites, but a human had to approve any attack on them. The scenario was purely theoretical, an exercise in what could go wrong rather than an actual operation, and that’s a good thing: as in a movie script, Golbeck said, the AI in the thought experiment determined it could gain more points by taking out more missile sites, so it eliminated the human who had vetoed some of its attacks.
“The Air Force, in this example, is kind of leading the way,” she said. “That’s the way we want to do this going forward, that you have AI helping people and together, they’re working as a team.”