
AI misinformation attacks are inevitable, warns US expert

Tom Burton, Government editor

Businesses and public agencies need to rehearse their responses to large-scale AI-powered misinformation and cyberattacks so they can quickly bounce back, a US security expert has warned.

“Historically, we have been hesitant to talk about planning for the inevitable,” said Kris Lovejoy, the global security and resiliency leader for Kyndryl.

Kris Lovejoy, global security and resiliency leader for Kyndryl, says firms and agencies need to better respond to cyber breaches. Flavio Brancaleone

“Somehow, we’ve thought that we could protect ourselves by spending enough on protection, that we could prevent the inevitable from occurring,” the Washington DC-based expert said.

Speaking ahead of The Australian Financial Review’s Government Services Summit on Tuesday, Ms Lovejoy said artificial intelligence, especially generative AI that can emulate voices and images, had made all organisations much more susceptible to attack.

“The one thing I’ve learnt in my career in security is that people trust. If phishing tests tell you anything, it is that people will trust anything, they will double-click on everything.


“It’s thinking, all right, there is going to be a massive-scale social engineering scheme that is perpetrated on a category of citizens within Australia. It’s going to happen, so how do I react to that?

“It’s not just about security, it’s also about resilience. It’s about bouncing back once that happens.”

Preparation is key

Ms Lovejoy said Ukraine was a good example of misinformation campaigns being countered by closely monitoring social media and rapidly responding with the truth.

“It’s not just about the proactive prevention. It’s also about the reaction in today’s world, and that is the one lesson I harp on, day in and day out, it is you have to prepare to respond.”

This meant running tabletop exercises on scenarios such as an election candidate who is the victim of disinformation, or an election system that is being subverted.


“You’ve got to be cognisant that AI is going to be used in this way.

“There has to be some mechanism by which, at the local and federal level, you’re running scenarios and running playbooks.

“It’s making people understand how sophisticated these kinds of social engineering attacks can be. Elderly people in particular are a significant population being targeted.”

Ms Lovejoy said her previous work with EY found 60 per cent of the world’s firms had introduced new technology during COVID-19 to enable working from home and new ways of working with clients.

“Of that, 60 per cent either abbreviated or altogether skipped over the implementation of security control in and around that technology.”

She said attackers had shifted their attention to large supply chain platforms.


“They are attacking the technology providers themselves and looking for the ways in which they could integrate themselves into the technologies that are being built and delivered to consumers of that technology.

“We don’t realise that those technologies now have backdoors or inherent weaknesses that were implemented and deployed by the attacker so that they would be able to get access into these programs.

“That’s why I think there’s a lot of concern today about the supply chain and the resilience of the supply chain, particularly when it comes to technology vendors.

“There is a sense that a lot of the technology was built and introduced into organisations who hadn’t checked to see whether or not what they were buying had the appropriate integrity and security controls.”

Ms Lovejoy said while many of the enterprise platforms had solid security, some of the apps that connected into the platforms were not secure, especially internet-of-things devices such as beacons and cameras.

“If you think about platforms like SAP or Salesforce, those are platforms where third parties come in, and they build technologies that allow you to access the data,” she said.


“The platform itself might be secure, but the technologies that are being used to integrate into those platforms, those modules, may not be secure.

“We thought it would go away. It’s not gone away.”

Ms Lovejoy’s biggest concern is file-sharing sites, many of which were spun up during the pandemic to enable collaboration.

“[Many of] those data repositories were spun up without appropriate authentication requirements in and around those sites.”

This included back-up storage and archive data. “Those sites host a lot of personally identifiable information and sometimes not only personally identifiable information, but sometimes people will collect files with names and passwords in clear text and save them on those sites.

“I think that if you look at the file sharing programs in particular, that’s one area that gives me a lot of pause.”

Tom Burton has held senior editorial and publishing roles with The Mandarin, The Sydney Morning Herald and as Canberra bureau chief for The Australian Financial Review. He has won three Walkley awards. Connect with Tom on Twitter. Email Tom at tom.burton@afr.com
