Report Predicts Rise of AI-Based Cyberattacks in Next 5 Years

Artificial intelligence (AI)-based cyberattacks on organizations could start to ramp up over the next five years, according to a recently published report.

AI-based attacks may already be in use by nation-state attackers, who have the resources to develop them. Such techniques, though, may gradually be adopted by criminal groups as well. The report, "The Security Threat of AI-Enabled Cyberattacks" (PDF download), analyzes how such threats could develop over the next five years.

The report was commissioned by the Finnish Transport and Communications Agency Traficom and written by experts at security solutions company WithSecure. Helsinki, Finland-based WithSecure, formerly known as F-Secure Business, provides consulting services and AI-based security services to organizations and service providers.

Current AI-Based Attacks
Publicly known research on AI-based attacks currently comes from just a few academic institutions. The report cited work by the Offensive AI Lab at Ben-Gurion University in Israel, as well as the Shellphish group at the University of California, Santa Barbara.

At present, AI techniques are mostly being used for the social engineering aspects of attacks. AI can be used to impersonate someone's voice, for instance. Moreover, "tools already exist for spear phishing generation and target selection," the report indicated.

Current AI-based tools, used mostly in the early stages of attacks, include "CAPTCHA breakers (e.g. from GSA), password guessers (e.g. Cewl), vulnerability finders (e.g. Mechanical Phish from ShellPhish), phishing generators (e.g. SNAP_R), and deepfake generators (e.g. DeepFaceLab)."

In the next two to five years, attackers could use AI for open source information gathering on targets, or to evade attack detection techniques. Even AI used for defensive purposes could be exploited by attackers if repurposed into attack tools, the report suggested:

Vulnerability discovery also has legitimate applications for bug fixing, and there is a notable effort to develop AI-based solutions for this purpose. This may lead to an increased availability of data usable for this purpose, as well as legitimate vulnerability discovery tools that could be repurposed for malicious usage.

No 'Autonomous Malware' as Yet
The report was skeptical that AI-based "autonomous malware" attacks will emerge even in the long term (five years and beyond). Current AI training techniques aren't capable of such feats, and attackers would likely face technical obstacles, including detection, if they tried.

"Due to these challenges, it is unlikely that we will witness self-planned attack campaigns or intelligent self-propagating worms driven by AI any time soon," the report stated.

However, the report did forecast broader use of AI-based attacks in the near term as AI knowledge advances. Advanced techniques will likely trickle down from nation-state attackers to less sophisticated groups.

Here's how the report put it:

In the near future, fast-paced AI advances will enhance and create a larger range of attack techniques through automation, stealth, social engineering or information gathering. Therefore, we predict that AI-enabled attacks will become more widespread among less skilled attackers in the next five years.

The adoption of AI-based attacks also could accelerate if defenses against conventional cyberattacks improve, which would motivate attackers to turn to AI-based techniques, the report suggested.

Further commentary on the future of AI and security can be found in the 30-page report, which is available as a free download.

About the Author

Kurt Mackie is senior news producer for 1105 Media's Converge360 group.
