TribalNet: Experts call for AI tools and training to ward off casino cyberattacks

September 19, 2023 9:40 AM
Photo: CDC Gaming Reports
  • Buck Wargo, CDC Gaming Reports

Cybersecurity experts told a tribal technology conference that artificial intelligence tools can spotlight the perpetrators of a cyberattack and show whether they’re targeting tribal casinos, while the failure of MGM Resorts International and Caesars Entertainment to ward off attacks may be blamed on IT staff who need more intensive training.

Steven Boesel, a customer engineer for Google Cloud, was among those who spoke Monday during a panel discussion on how AI presents new threats and challenges for cybersecurity. The room was packed with about 200 people during the opening day of the TribalNet technology conference in San Diego.

Detection AI will help casinos conduct proactive hunts for attackers targeting their operations, but more needs to be done, Boesel said.

“We know the end users and contractors are your biggest vulnerabilities,” Boesel said. “Great AI tools can detect malware and what it’s targeting. Whole data sets highlight who the attackers are and whether they’re targeting tribal casinos, or nations, or individuals. What happened to MGM and Caesars was a failure in the end user.”

Boesel said casinos have to protect against an avenue hackers will increasingly take advantage of: using generative AI to create a voice that sounds like someone else.

“I can call (the IT desk) who thinks I’m me and share information,” Boesel said. “These types of technologies are definitely coming, so being able to detect them is key. A culture of security has to be pervasive across your contractors. All of these AI tools wouldn’t have necessarily solved MGM and Caesars by themselves. The immediate takeaway is education first and foremost.”

The cyberattack on MGM is entering its second week and continues to affect the company’s properties nationwide as it gradually recovers from shuttered slot machines and ATMs and stalled check-ins and payments. Russian hackers are believed to be behind the attack.

In a note to investors, gaming analyst David Katz with Jefferies Equities Research said the company could be losing 10% to 20% of its $42 million per day in revenue to the attack, or $4.2 million to $8.4 million daily, but noted that insurance should help recover the losses.

Caesars avoided those daily losses but is reported to have paid a ransom to hackers after its system was infiltrated in late August. The stolen data included Social Security numbers and driver’s license numbers. Speculation within the cybersecurity community put the payment at $15 million to $30 million.

Manjit Singh, the CEO of DruvStar, said that as complicated as cybersecurity is, it’s possible to minimize the impact of cyberattacks.

“The main thing is to have a comprehensive program and do this every day,” Singh said. “It’s like brushing your teeth. It’s not like going to the dentist twice a year is going to solve it. You have to understand your risk and how you’re improving your risk profile.”

Scott Melnick, a white-hat hacker who leads the security research and development department for Bulletproof, a GLI company, said it isn’t known for sure how MGM was hacked, but he walked the audience through a scenario showing how AI could have been used.

“This could be a social-engineering attack with somebody calling up tech support and getting a password reset,” Melnick said. “One of the things we do (in hacking scenarios) is research the person. I can go on the social media of someone in tech support at MGM and call and wait to get that person who I collected data about on Facebook, including everyone they work with. I call tech support (and act like I know them and cite personal information). I say I can’t get in the system and my password is locked off. It’s a plausible story. I can use AI to find the database and social engineer the person like we’ve known each other forever. The defense against that is to verify, verify, verify — no matter who it is.”

Boesel said the line between “make-believe and reality is getting blurrier every day.” Deepfakes on social media rile people up with false information, but on the cybersecurity side the technology is especially dangerous because it can create a virtual person, he said.

AI was recently suspected of being used to support casino thefts in Nevada, where scammers may have impersonated people and mimicked known voices to make off with $1.17 million at Circa Las Vegas. Similar schemes are suspected of targeting other properties in Laughlin and Mesquite.

“It’s going to be harder and harder to validate who somebody is based on what you’re just seeing with your eyes and ears,” Boesel said. “You’re going to have to rely on technology, looking for the small mistakes. That’s something we’re focused on very deeply at Google — detecting very minor fluctuations in voice and manipulation of that and video. That’s a futuristic problem, but it’s not all that far off.”

Melnick said that when he’s asked what can be done about social engineering and deepfakes, he suggests companies test their employees.

“A lot of smaller organizations have an advantage. You’re not dealing with 500 people on tech support that you have to worry about,” Melnick said. “I’ve worked with a lot of tribes for a long time and I understand budget concerns. But you have to think about it like insurance. Everyone I’ve worked with in tribal nations on ransomware has ended up paying 10 times more than it would have cost to train and test their people. Cybersecurity spending is a hard pill to swallow, because you can’t say this is how many attacks we avoided. It’s hard to prove a negative. It’s like a vaccine. The idea is to keep doing it every year for that year’s flu strain and test your employees over and over again.”