Artificial intelligence is a “double-edged sword” for cybersecurity

Employees at his company, technology firm Kyndryl, received fake videos of CEO Martin Schroeter designed to trick them into handing their login credentials to fraudsters.

Mr. Villeneuve also saw a friend who runs a small engineering firm get targeted: the friend’s wife received a voicemail mimicking his voice, meant to convince her that he was in trouble and that she needed to pay a deposit quickly.

“It hit me hard because he is a good friend of mine,” recalls Mr. Villeneuve, head of cybersecurity at Kyndryl Canada.

The attacks were enabled by software based on artificial intelligence (AI), which has become more affordable, accessible and sophisticated in recent years.

Despite the threats to cybersecurity, Mr. Villeneuve, like much of the tech industry, is careful not to view AI as an unmitigated evil.

In the fight against cyberattackers, he believes that AI can help as much as it can harm.

“It’s a double-edged sword,” explains Mr. Villeneuve.

As AI improves, experts say, there will always be a more efficient or innovative way to try to get past a company’s defenses, but those defenses also get a boost from the same technology.

“At the end of the day, AI is much better for defenders than attackers,” said Peter Smetny, regional vice president of engineering at cybersecurity company Fortinet Canada.

His reasoning is based on the number of attacks some companies face and the resources needed to deal with or repel them.

A 2023 EY Canada study of 60 Canadian organizations found that four in five had experienced at least 25 cybersecurity incidents in the past year. Indigo, London Drugs and Giant Tiger have all been victims of high-profile incidents.

While not all cyberattacks succeed, Mr. Smetny noted that many companies see thousands of attempts to penetrate their systems every day.

AI makes it possible to manage these attempts more efficiently.

“You may only have four or five people on your team, and there are only so many alerts they can handle manually, but the AI tells them which ones should be prioritized,” said Mr. Smetny.

Without AI, an analyst would have to manually check the Internet Protocol (IP) address linked to each attack, a unique identifier assigned to every device connected to the internet that can help trace an attack’s origins.

The analyst would also check whether the person behind the address is already known to the company and how widespread the attack is.

Using AI, an analyst can now query the software in plain language to quickly compile and present everything about an attacker and their IP address, including where they may have entered a system and the actions they carried out.

“It saves a lot of time and gets you moving in the right direction, so you can focus on the important things,” Mr. Smetny said.
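To make the workflow Mr. Smetny describes concrete, here is a minimal sketch, in Python, of what AI-assisted alert triage might look like. It is an illustration built on assumed conventions; every name, weight and data structure below is hypothetical rather than any vendor’s actual product.

from dataclasses import dataclass, field

@dataclass
class Alert:
    source_ip: str
    severity: int                 # 1 (low) to 10 (critical), an assumed scale
    known_attacker: bool = False  # IP already tied to past incidents
    systems_touched: list[str] = field(default_factory=list)

def triage_score(alert: Alert) -> int:
    # Rank alerts the way the article describes: raw severity, weighted up
    # if the attacker is already known and by how far they have spread.
    score = alert.severity
    if alert.known_attacker:
        score += 5
    return score + 2 * len(alert.systems_touched)

# A four- or five-person team reviews the queue highest score first,
# instead of manually checking each IP address one by one.
queue = [
    Alert("203.0.113.7", severity=4),
    Alert("198.51.100.2", severity=6, known_attacker=True,
          systems_touched=["mail", "vpn"]),
    Alert("192.0.2.9", severity=3),
]
for alert in sorted(queue, key=triage_score, reverse=True):
    print(triage_score(alert), alert.source_ip)

In a real product, the score would come from a trained model rather than fixed weights; the point is simply that prioritization happens before a human ever looks at the queue.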

On equal terms

But attackers have the same tools in their arsenal.

Dustin Heywood, chief architect of IBM’s X-Force threat intelligence team, points out that anyone with malicious intent can turn to AI to gather data from multiple breaches and build a profile of a target.

For example, if data indicates that a person frequently buys children’s products at Toys “R” Us or Walmart, it could tell a hacker that that person recently had a baby.

Sometimes attackers resort to a practice known as “pig butchering” to fill in missing information.

“A chatbot can start talking to someone and build a relationship using techniques like generative AI. It comes across as friendly and trustworthy, and then it starts extracting information,” said Mr. Heywood.

When attackers obtain financial information, a social insurance number or enough personal details to gain access to an account, the data can be used to make a fake credit card application or sold to other criminals.

The potential harm increases further when there is enough material to create a deepfake, that is, a clip showing someone doing or saying something they never did. Mr. Villeneuve’s example, in which his friend’s voice appeared to leave a message for his wife, illustrates the tactic well.

For smaller targets, AI does most of the work, allowing attackers to focus on higher-value victims.

“A chatbot can talk to 20 people at a time,” noted Mr. Heywood.

He has also heard of people using augmented reality glasses that instantly pull up information about a person, including personal data sold on the “dark web,” the moment the wearer looks at them, and of others who work to “jailbreak” AI chatbots to extract the personal information users have entered into them.

The evolution of attacks has convinced him that AI is a “game changer.”

“In the 1990s, it was teenagers, kids and students breaking into websites. Then, more recently, we moved to ransomware, which involved encrypting companies’ computers. Today the focus is on impersonating people, a major activity that is powered by AI,” Mr. Heywood explained.

The Canadian Anti-Fraud Centre said the country had 15,941 fraud victims in the first half of the year, and those incidents resulted in the loss of $284 million. The previous year, there were 41,988 victims and $569 million lost.

Messrs. Heywood, Smetny and Villeneuve believe that the fight against attackers is not in vain and that companies are taking it seriously.
