
AI as a cyber threat tool: a new era of attacks

In recent months, there has been a significant increase in incidents in which attackers are using AI to carry out highly sophisticated cyberattacks. As many as 78% of CISOs today admit that AI-powered cyber threats have had a major impact on their organization.

Artificial intelligence has become an integral part of the digital world over the past few years. Businesses and organizations use it to automate processes, increase efficiency, and analyze large volumes of data. At the same time, however, the misuse of AI is becoming a significant cybersecurity threat. In a Darktrace survey, experts identified the shortage of skilled specialists and limited knowledge or use of AI-based countermeasures as the biggest obstacles to defending against AI-driven attacks.

Cyber attackers have quickly figured out how to make AI their most dangerous ally in a variety of ways. One of the most common is generating phishing campaigns. According to VIPRE Security Group, up to 40% of all corporate phishing emails last year were generated by AI. Thanks to language models such as ChatGPT, attackers are able to create grammatically correct, localized and personalized emails that often mimic corporate communications or official announcements.

These messages are designed to persuade the recipient to click on a malicious link or hand over sensitive information. As a study published on arXiv (2021) reports, generative models increase the effectiveness of phishing attacks and lower the bar on the language proficiency an attacker needs. The most commonly used techniques include spear phishing, email spoofing, email manipulation, and phone phishing (vishing).
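Defenses against such messages often start with simple heuristics. As an illustration, a minimal lookalike-domain check using only Python's standard library can flag sender domains that closely resemble, but do not exactly match, a trusted domain; the allow-list and similarity threshold below are hypothetical, not part of any product described in this article:

```python
import difflib

# Hypothetical allow-list of domains the organization trusts.
TRUSTED_DOMAINS = ["gamo.sk", "microsoft.com"]

def lookalike_score(domain, trusted=TRUSTED_DOMAINS):
    """Return the closest trusted domain and its similarity ratio (0..1)."""
    best = max(trusted, key=lambda t: difflib.SequenceMatcher(None, domain, t).ratio())
    return best, difflib.SequenceMatcher(None, domain, best).ratio()

def is_suspicious(domain, trusted=TRUSTED_DOMAINS, threshold=0.8):
    """Flag domains that look like, but are not, a trusted domain."""
    _, score = lookalike_score(domain, trusted)
    return domain not in trusted and score >= threshold

print(is_suspicious("rnicrosoft.com"))  # 'rn' visually mimics 'm' -> True
print(is_suspicious("microsoft.com"))   # exact match is trusted -> False
```

Real mail gateways combine many such signals (sender reputation, link rewriting, content analysis); this sketch only shows the lookalike idea in isolation.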

A new generation of cyber threats

An even more worrying trend is the use of deepfake technologies. Artificial intelligence now makes it possible to generate videos and voice recordings in real time that can mimic real people with a high degree of accuracy.

Misuse of these technologies produces fabricated statements attributed to executives, politicians, or celebrities, which are then used for blackmail, for manipulating public opinion, or for gaining access to systems through voice phishing.

In the past year, as many as 61% of organizations reported an increase in incidents involving deepfake technology, and in three-quarters of cases the attacker impersonated the CEO or another member of senior management. Experts expect this type of fraud to grow by more than half in 2025.

AI-driven malware

AI is also increasingly being applied to the spread of malware. AI-driven adaptive malware can learn from its environment, change its behavior in real time, and evade traditional detection mechanisms. Some malware variants use behavioral patterns to remain undetected, or activate only under well-defined conditions. According to a study published in SpringerLink (2023), such approaches reduce the success rate of antivirus tools and increase the dwell time of an attack in the network. Over the last year, the number of AI-generated malware incidents has grown as much as 1.25-fold.

Automated vulnerability scanning is another important aspect of AI exploitation. AI models trained on historical data can predict vulnerabilities in systems with high accuracy and suggest ways to exploit them. According to Fortinet’s report, active scanning in cyberspace has reached unprecedented levels, with a 16.7% year-over-year increase and an average rate of 36,000 scans per second. This massive data collection enables accurate mapping of exposed services such as SIP and RDP, as well as OT/IoT protocols such as Modbus TCP. With this information, AI models can operate in real time and independently manage the exploitation of zero-day vulnerabilities, increasing the success rate of attacks and shortening defenders’ reaction time.
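On the defender's side, this kind of mass scanning leaves a recognizable footprint. A minimal sketch, assuming a simple connection log of (source IP, destination port) pairs and an invented threshold, flags sources that probe an unusually large number of distinct ports, which is the classic signature of a port scan:

```python
from collections import defaultdict

def detect_scanners(events, port_threshold=50):
    """Flag source IPs that touch an unusually large number of distinct ports."""
    ports_by_src = defaultdict(set)
    for src, port in events:
        ports_by_src[src].add(port)
    return [src for src, ports in ports_by_src.items() if len(ports) >= port_threshold]

# Hypothetical log: one source sweeps ports 20-119, another makes normal requests.
events = [("203.0.113.7", p) for p in range(20, 120)]
events += [("198.51.100.4", 443), ("198.51.100.4", 443), ("198.51.100.4", 80)]

print(detect_scanners(events))  # -> ['203.0.113.7']
```

Production tools add time windows, rate limits, and allow-lists on top of this idea, but the distinct-port count remains a common core signal.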

How to protect yourself from attacks

Rapid technological developments and escalating threats are forcing organizations to respond by implementing advanced security solutions in a timely manner. One such measure is deploying their own AI tools for anomaly detection and attack prediction. Organizations also need to raise employees' cyber literacy and adopt Zero Trust Architecture principles.
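As a toy illustration of the anomaly-detection idea (a plain statistical baseline rather than a trained AI model; the event counts and threshold are invented for the example), one can flag time windows whose failed-login counts deviate sharply from the norm:

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` std deviations above the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # perfectly uniform data has no outliers
    return [i for i, c in enumerate(counts) if (c - mean) / stdev > threshold]

# Hypothetical hourly counts of failed logins; hour 5 contains a burst.
hourly_failed_logins = [12, 9, 11, 10, 13, 240, 12, 8]
print(flag_anomalies(hourly_failed_logins))  # -> [5]
```

Real anomaly-detection products model many features at once and adapt the baseline over time; the z-score shown here is only the simplest version of the same principle.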

Many organizations, however, still rely on traditional strategies: as many as 41% of enterprises continue to use endpoint detection and response (EDR) tools as their primary defense against AI attacks, even though more than half of respondents to the Ponemon Institute survey admitted that these solutions are ineffective against new types of threats. Despite these limitations, nearly a third of organizations plan to further increase investment in EDR tools. As threats grow more complex, the need to regularly audit AI systems and ensure the transparency of algorithms used in the corporate environment is also increasingly emphasized.

The exploitation of AI in cyberattacks is not a topic for the future; it is a reality already shaping digital threats today. Businesses that fail to prepare for these challenges risk not only losing trust but also damage to their operations and reputation. Early identification of risks and an adaptive security strategy are therefore becoming a necessity.

Published: 24. June 2025

Diana Filadelfi

Sales

GAMO a.s.

This article is part of magazine no.
