Artificial Intelligence (AI) & Cyber Defence

Author: Paul Harris

If you’re on LinkedIn, you won’t have failed to notice a bit of buzz around ChatGPT and other such AI technology. And when I say a bit of buzz, I mean every other post being about it. Whether it’s predicting the beginning of the end for writers or the start of a new dawn for content creation, everyone seems to have an opinion on how AI technology could disrupt, transform or destroy everything from specific job roles to entire industries.

As with any new, potentially ‘disruptive’ technology, the possibilities are both exciting and frightening. But for most, optimism wins: AI is a tool that can make our lives easier and more efficient. No longer do you have to sit down and spend hours writing pieces such as this one; give AI a subject and it will write it for you. Need a custom image? AI can create it. Want to draft a supplier security questionnaire based on specific requirements, such as ISO 27001? ChatGPT can provide you with an excellent starting point. (Just be careful what you input.)

Minimal effort, maximum result. That’s the hope anyway.

AI is already starting to have an impact on cyber defence, and organisations, as well as nations, are now investing significant resources in the technology. The hope is that it will one day provide maximum protection with minimal effort, freeing up time to focus on other areas, safe in the knowledge that this new tech has their back. Many believe that future is getting closer and closer, and that AI-based security solutions could allow us to detect threats more effectively and drastically reduce incident response times.

So, is AI the answer to our security problems?


Whilst the future of AI seems extremely promising in terms of cyber defence, it is not without its drawbacks. Here, we outline just a few of those concerns:

Datasets and data manipulation

AI relies on large datasets and existing human-built models. It can therefore only learn from, and detect, known patterns, attack routes and signals. But what happens if the dataset is incomplete, biased or poisoned? What if an attack doesn’t use a known route, or uses stolen credentials to gain access to a network, masking malicious activity under the guise of a legitimate user? In these cases, AI technology is unlikely to be effective, because it isn’t looking for that behaviour, allowing attackers to bypass initial defensive technology without detection.
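
To make the poisoning risk concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the synthetic “network event” data, the feature values and the 30% poisoning rate; the scikit-learn classifier simply stands in for a real detection product.

```python
# A toy sketch of dataset poisoning, using entirely synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented "network events": benign traffic clusters near 0, attacks near 1.5.
X = np.vstack([rng.normal(0.0, 1.0, (1000, 5)),
               rng.normal(1.5, 1.0, (1000, 5))])
y = np.array([0] * 1000 + [1] * 1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def detection_rate(train_labels):
    """Share of real attacks the model catches when trained on these labels."""
    model = RandomForestClassifier(random_state=0).fit(X_train, train_labels)
    return (model.predict(X_test)[y_test == 1] == 1).mean()

print("clean training labels:   ", detection_rate(y_train))

# Poisoning: relabel the 30% of attacks that most resemble benign traffic,
# nudging the learned boundary so borderline attacks slip through unseen.
attacks = np.where(y_train == 1)[0]
stealthiest = attacks[np.argsort(X_train[attacks].sum(axis=1))]
poisoned = y_train.copy()
poisoned[stealthiest[: int(0.3 * len(attacks))]] = 0
print("poisoned training labels:", detection_rate(poisoned))
```

Running it shows the detection rate dropping once the training labels are tampered with, even though the model, the test data and the attacks themselves are unchanged.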

AI isn’t just helping defenders

One of the main concerns raised about AI and cyber security is that attackers can also use the technology to their advantage. It can help them craft convincing phishing campaigns; create, adapt and improve malware; and develop new attack routes that could bypass traditional defences and even evade other AI-based detection.

Explainability

As AI develops and models become more complex, understanding why an action was taken, or not taken, may become more difficult, especially when you rely on AI technology developed by third parties. For example, why was one threat flagged as malicious and another as safe?

Understanding the reasons behind any AI decision is important from a defensive standpoint. Without that understanding, you’re trusting that the technology is making accurate decisions and that no flaws are present in the underlying model, which can be a dangerous position to be in.
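
As a purely illustrative sketch, one way to recover some of that reasoning is to probe a black-box model with a technique such as permutation importance. The feature names and data below are invented for the example; the point is only that shuffling a feature the model genuinely relies on hurts its accuracy, while shuffling an irrelevant one does not.

```python
# A toy sketch: asking a black-box model which inputs actually drove
# its decisions, via permutation importance. Feature names are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["bytes_out", "failed_logins", "hour_of_day", "port_entropy"]

X = rng.normal(size=(500, 4))
# Hidden ground truth: only failed_logins and port_entropy matter here.
y = ((X[:, 1] + X[:, 3]) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# High scores flag the features the model truly depends on.
for name, score in zip(features, result.importances_mean):
    print(f"{name:>15}: {score:.3f}")
```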

Still susceptible to ‘simple’ attacks

Whilst the future of the technology is promising, it is still developing. In fact, much AI and machine learning technology remains susceptible to simple attacks that cause it to misidentify inputs and produce potentially dangerous outcomes.

One of the most talked-about examples came a few years ago, when a research study showed how a driverless car’s AI and machine learning programs could be confused into reading a stop sign as a ‘Speed Limit 45’ sign with just a few strategically placed stickers. (You can read about it here.)
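
The sticker attack is an example of an adversarial perturbation. The sketch below shows the underlying idea using the fast gradient sign method (FGSM) against a toy logistic classifier; the model, weights and inputs are all invented stand-ins, not a real vision system. A deliberately aimed nudge to the input, computed from the loss gradient, flips a confident prediction.

```python
# A toy sketch of an adversarial perturbation (FGSM) against a stand-in
# logistic classifier; not a real vision model, all values invented.
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=100)  # weights of a toy "trained" classifier
x = 0.3 * w               # an input the model confidently calls class 1

def predict(v):
    """Probability the model assigns to class 1 (the 'stop sign')."""
    return 1 / (1 + np.exp(-w @ v))

# FGSM: step each input component in the sign of the loss gradient.
# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
y_true, eps = 1.0, 0.5
grad_x = (predict(x) - y_true) * w
x_adv = x + eps * np.sign(grad_x)

print(f"original prediction:  {predict(x):.3f}")    # near 1.0
print(f"perturbed prediction: {predict(x_adv):.3f}")  # near 0.0
```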

How do organisations face these security challenges?


AI technology is only likely to improve over time and will bring a host of benefits when it comes to cybersecurity. However, like all tech solutions, it will never be a silver bullet for all our security issues. Attackers, as always, will continue to adapt their approaches, looking for ways to take advantage of new technology developments.

Using technology solutions in isolation is therefore unwise; they should be complemented with more manual security processes and testing approaches such as penetration testing and red teaming. By adopting a combined approach, you get the best of both worlds. Whilst automated tools detect and prevent known attacks, more in-depth, manual testing can provide assurance that your defensive technologies are robust, help uncover potential vulnerabilities and demonstrate what an attacker could achieve if they were able to circumvent your security measures.

The battle between attackers and defenders will never be truly over. Organisations will need to keep pace with these ever-evolving threats, and by adopting a combined approach to security they put themselves in the best possible position to do so.

Looking for more than just a test provider?

Get in touch with our team and find out how our tailored services can provide you with the cybersecurity confidence you need.