AI’s Cyber Threat: Balancing Security & Innovation

AI tools lower the barrier to entry for cybercrime by enabling less experienced attackers to launch attacks they wouldn’t otherwise have the skills or knowledge to carry out. For instance, individuals who lack programming skills can now simply ask AI tools like ChatGPT to write bots that automate the process of breaching servers. While these attacks may not be novel, they still increase the volume of threats companies need to defend against, draining the resources of already underfunded security teams.

Striking the right balance between innovation and security 

As AI tools become more embedded in business operations, the stakes grow even higher. For instance, KPMG’s recent survey of financial leaders revealed that 84 percent plan to increase their investments in generative AI (GenAI).

While finance, and presumably other industries, accelerate the adoption of AI tools, the World Economic Forum reports that nearly 47 percent of surveyed organisations cite adversarial advances powered by GenAI as their primary concern, as the technology enables more sophisticated and scalable attacks. Moreover, the same report states that only 37 percent of organisations have processes in place to assess the security of AI tools before deployment.

Meanwhile, the EU’s AI Act, which aims to regulate high-risk AI systems, is being phased in over several years, with full implementation not expected until 2027. However, there is a growing debate in Europe about how to balance regulation with fostering innovation. During the Paris AI summit, French President Emmanuel Macron remarked that Europe might reduce regulatory burdens to allow AI to flourish in the region.

This presents a potential challenge: while Europe wrestles with concerns about over-regulation, its wait-and-see approach might cause it to miss the boat as AI technology evolves at incredible speed. By the time the AI Act is fully in place, we could be facing an entirely new wave of AI-powered cyberattacks, many beyond the scope of current regulations.

So, what does this mean for cybersecurity if AI is governed by a light-touch regulatory framework? While innovation is essential, the absence of security-focused regulation means AI tools are already in the hands of cybercriminals who can weaponise them with minimal oversight.

At the moment, the capacity of AI systems to automate and optimise cyberattacks already extends far beyond phishing. AI-powered tools can be used to exploit vulnerabilities in critical infrastructure, launch larger Distributed Denial of Service (DDoS) attacks, or even manipulate financial markets. In 2023, the US Department of Homeland Security warned that AI-powered systems could soon be capable of launching autonomous cyberattacks that are difficult to counter with conventional defence mechanisms. Such threats present a security nightmare that policymakers can’t afford to ignore.

If AI systems evolve to the point where they can autonomously compromise digital infrastructure, we could see an escalation in both the frequency and severity of cyberattacks, potentially crippling global systems.

Cybersecurity must evolve – now

Whether AI ends up robustly regulated or not, businesses should do more than the bare minimum for cybersecurity. First, it’s essential to invest in AI-driven security tools as additions to existing defences, not replacements for them. While AI and machine learning can be incredibly useful for detecting and preventing attacks in real time, they can also make incorrect decisions, so they should enhance cybersecurity efforts rather than supplant traditional tools. By analysing patterns in network traffic, AI can surface anomalies that may signal a breach, and as cyberattacks become more automated, it can help security teams identify threats faster and more efficiently, letting them do more with the same resources.
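To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn’s Isolation Forest. The flow features, traffic distributions, and contamination rate are illustrative assumptions, not a production design.

```python
# Minimal sketch: flagging anomalous network flows with an Isolation Forest.
# Feature names, distributions, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical flow features: [bytes_sent, packets, duration_s, distinct_ports]
normal_traffic = rng.normal(
    loc=[5_000, 40, 2.0, 3], scale=[1_500, 10, 0.5, 1], size=(1_000, 4)
)

# Train on traffic assumed to be mostly benign; contamination is a tunable guess.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A burst that looks like data exfiltration combined with a port sweep.
suspicious = np.array([[500_000, 4_000, 0.3, 180]])
print(model.predict(suspicious))  # -1 means the flow is flagged as anomalous
```

In practice, a model like this would be one signal among many, feeding alerts into existing monitoring for human triage rather than acting on its own.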

Another step is to incorporate AI threat modelling into security protocols, leveraging AI to predict and prevent attacks. Security teams need to think like attackers, simulating how their systems might be breached and proactively patching those vulnerabilities before they can be exploited, as sketched below.
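One simple way to approximate that attacker’s-eye view is to model assets and plausible attacker moves as a graph and rank the cheapest paths to critical data. The sketch below uses networkx on a toy asset graph; the nodes, edges, and effort weights are invented for illustration.

```python
# Minimal sketch of attack-path threat modelling on a toy asset graph.
# Nodes, edges, and "effort" weights are illustrative assumptions.
import networkx as nx

g = nx.DiGraph()
# Edges represent plausible attacker moves; weight approximates attacker effort.
g.add_weighted_edges_from([
    ("internet", "web_server", 1.0),       # exposed service
    ("internet", "employee_laptop", 0.5),  # phishing is cheap for the attacker
    ("employee_laptop", "internal_api", 1.0),
    ("web_server", "internal_api", 2.0),
    ("internal_api", "customer_db", 1.5),
])

# Enumerate attacker paths to the crown-jewel asset, cheapest first.
paths = sorted(
    nx.all_simple_paths(g, "internet", "customer_db"),
    key=lambda p: nx.path_weight(g, p, weight="weight"),
)
for path in paths:
    print(nx.path_weight(g, path, weight="weight"), " -> ".join(path))
```

Ranking paths this way shows where hardening a single hop (here, the phished-laptop-to-API edge) removes the cheapest route to the data entirely.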

Finally, companies must invest in continuous training for their security teams. As AI-driven attacks evolve, it’s not enough to simply rely on firewalls and antivirus software. Security professionals need to be prepared to deal with more sophisticated, AI-powered threats. This includes staying ahead of trends, understanding how AI tools are being used against them, and developing strategies that go beyond traditional defences.

Undoubtedly, AI has the potential to revolutionise cybersecurity and every other industry, but it also introduces a new wave of risks. While policymakers may be caught up in the AI race, cybersecurity professionals must act now. AI can be an ally in the fight against cybercrime and in enabling business operations, but it can also become an adversary if left unchecked. As we race toward a future shaped by AI, securing our systems against its darker side should be a top priority.

Aras Nazarovas, a Cybernews security researcher, investigates cyber threats, malicious campaigns, and hardware security. His work has exposed major vulnerabilities affecting NASA, Google Play, and PayPal, helping businesses and consumers mitigate risks.