
Is AI "helping" or "hurting" Cybersecurity?


AI is revolutionizing the cybersecurity industry. There is not a single cybersecurity vendor that is not actively adding AI capabilities to its products. AI holds much promise for cyber defenses and for resolving the acute staff and skill shortage that has plagued the cybersecurity industry for decades. The following graph shows the forecasted growth (in billions of dollars) of the "AI in cybersecurity" market:

Graph: forecasted growth (in billions of dollars) of the "AI in cybersecurity" market

However, the "good guys" are not the only ones in a frenzy to figure out how to use AI; so are the bad guys. AI is not magic; AI is a tool. There are all sorts of statistics out there, but even from personal experience, our feeling here at AllTrue.ai is that it makes us 10x more productive. And just as it makes us more productive, it makes attackers more productive too. The question, therefore, is whether the introduction of AI is making us more secure as an industry, less secure, or neither (if defense gets 10x better but offense also gets 10x better, it may end up a wash).

The Benefits to Defenders:

There is absolutely no doubt that AI is helping cybersecurity organizations. Tools are getting smarter, automation of processes is accelerating work and reducing human error, the staff shortage is being addressed by letting AI do things that people had to do until now, and co-pilots are helping with skill, knowledge, and experience gaps. As an example, a Microsoft study claims that Copilot for Security helps security analysts regardless of their expertise level, reporting a 44% increase in accuracy and a 26% increase in speed.

Ten areas where AI is helping defenders:

  1. Cost reduction - AI makes operations cheaper through automation, and cybersecurity is a war of attrition: freed-up budget can be invested in better defenses.
     

  2. Reducing human error - automation and machine decision-making reduce the need for manual intervention, which in turn reduces human error.
     

  3. Discovering unknown threats - AI helps map and prevent unknown attacks, including exploitation of vulnerabilities that have yet to be identified and patched.
     

  4. Data volumes - AI is the only practical way to deal with the constant, exponential growth of data volumes.
     

  5. Phishing detection and prevention - ML models trained on large volumes of known-good and known-bad messages make detection more accurate than static rules.
     

  6. Vulnerability management - AI helps analyze the thousands of new vulnerabilities that are constantly discovered, and improves UEBA-based solutions that can flag exploitation before a vulnerability is officially reported and patched.
     

  7. Threat hunting - Traditional security defenses that rely on attack signatures and IOCs are being replaced with AI-based threat analytics.
     

  8. Improved incident response - AI assists in automating incident response processes, allowing for faster and more efficient mitigation of cyber threats. AI algorithms analyze and prioritize alerts, investigate security incidents, and suggest appropriate response actions to security teams.
     

  9. Malware detection and endpoint protection - AI-based detection generalizes to new malware variants better than traditional signature-based techniques.
     

  10. Reduction in false positives and alert fatigue - smarter triage lets analysts focus on the alerts that matter.
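To make item 5 above concrete, here is a toy sketch of the kind of signals a phishing detector weighs. This is a hand-rolled heuristic for illustration only, not a production model: the keyword patterns and weights are our own invented examples, whereas a real ML system would learn such weights from thousands of labeled messages.

```python
# Toy illustration of phishing-signal scoring. The patterns and weights
# below are invented for this sketch; a trained ML model would learn
# equivalent features and weights from labeled email data.
import re

SIGNALS = {
    r"\burgent(ly)?\b": 0.3,               # pressure language
    r"\bverify your account\b": 0.4,       # credential-harvesting phrasing
    r"https?://\d+\.\d+\.\d+\.\d+": 0.5,   # links to raw IP addresses
    r"\bpassword\b": 0.2,                  # credential keywords
}

def phishing_score(text: str) -> float:
    """Sum the weights of all matched signals, capped at 1.0."""
    text = text.lower()
    score = sum(w for pattern, w in SIGNALS.items() if re.search(pattern, text))
    return min(score, 1.0)

suspicious = "URGENT: verify your account at http://192.168.1.5/login"
benign = "Meeting notes attached, see you Thursday."
print(phishing_score(suspicious))  # 1.0 (three signals fire, capped)
print(phishing_score(benign))      # 0.0 (no signals fire)
```

The win from ML over this kind of hand-tuning is exactly what item 5 describes: the model discovers and weighs far more signals than a human rule-writer can, and keeps adapting as attackers change their wording.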

The Benefits to Attackers:

There is also no doubt that AI is already increasing the volume and effectiveness of cyber attacks. For example, OpenAI tracked usage of its LLMs and detected various state-affiliated adversaries using its APIs. Other in-the-wild examples include:

  • Attackers used AI to bypass Bitfinex’s biometric authentication system, which required users to verify their identity with their face and voice. They injected fake video streams into the verification process, fooling the system into thinking that they were the legitimate users. The hackers also used deepfake technology to create realistic facial images that matched the voice and behavior of the victims. They stole US$150 million worth of various digital assets.
     

  • A cyber espionage campaign dubbed Operation Diànxùn was uncovered by researchers from McAfee. The campaign used AI to create phishing emails that targeted telecommunications companies around the world. The emails used natural language generation to craft convincing messages that appeared to come from legitimate sources, such as job recruiters or industry experts. The emails contained malicious attachments or links that delivered malware to the victims’ devices.
     

  • A cryptocurrency platform was targeted by a voice-spoofing attack that used AI to impersonate the CFO’s voice and tricked an employee into transferring US$250,000 to a fraudulent account.

 

Attackers are enhancing existing tactics and methods as well as finding new attack patterns. For example, voice fraud becomes much easier: AI is being heavily used to synthesize a voice identical to an individual's and to use it on unsuspecting family members. A three-second voice sample is enough to train a model to sound like you, so even a short voicemail greeting is enough to replicate your voice. Phishing campaigns are likewise getting easier to generate, more sophisticated, and harder to detect.

Ten ways AI is helping increase threats:

  1. Fraud - according to Interpol's financial fraud assessment, the most prevalent global trends are investment fraud, advance payment fraud, romance fraud, and business email compromise. AI makes these easier to scale, and makes deepfakes far easier to produce and use.
     

  2. AI improves the ability of threat actors to meddle in elections and other political processes.
     

  3. AI provides a capability uplift in reconnaissance. AI's ability to summarize data at pace enables threat actors to identify high-value assets and targets for examination and exfiltration.
     

  4. AI provides capability uplift in social engineering. GenAI is already being used to enable convincing interactions with victims.
     

  5. AI provides a lower barrier to entry for cyber criminals. Therefore, there will likely be a growth in the number of groups creating less-sophisticated (yet effective) attacks. Social engineering and ransomware attacks are constantly on the rise.
     

  6. Threat actors can analyze exfiltrated data faster and more effectively, and can use it to train their own AI models.
     

  7. AI increases the efficiency and effectiveness of coding by threat actors and assists with malware and exploit development.
     

  8. The time between the release of a security update for a newly identified vulnerability and threat actors exploiting still-unpatched software is already shrinking, and AI accelerates this trend.
     

  9. AI makes it easier for code to bypass CAPTCHAs and other human-recognition systems.
     

  10. Sophisticated state actors with access to robust resources and data are becoming even more capable with AI tooling.

So - better or worse?

We do not have a crystal ball here at AllTrue.ai, so we cannot predict whether AI will improve the state of cybersecurity or make it worse. However, our opinion is that in the long term AI is a force for good and will improve the general safety and security of the world. We think that as cybersecurity tools, processes, and defense staff start benefiting from AI, the improvements will outweigh the elevated threats. But we believe this will take time. Therefore, we believe the balance will look something like this (over time):

Graph: do attackers succeed more when both sides use AI? (over time)

It takes time to design, build, and deploy products, and it takes less time to refine and modify an attack using AI tooling. Therefore, we think that in the short run the "attackers win," but in the long run the good guys gain the upper hand because of AI. Because we don't have a crystal ball, we don't know the units of either axis - this is just our qualitative opinion.

 

We also believe this graph is appropriate given the need to secure AI systems themselves. Enterprise security organizations sometimes play catch-up with business units that adopt new technologies fast, and this is certainly the case with AI. Organizations are already running many AI systems, yet few companies have a mature and robust AI security program. This contributes to the graph above: at first, AI systems adopted without a full set of security controls elevate the organization's risk profile and attack surface, but as cybersecurity organizations deploy AI security tools, that added risk is addressed and defenders again gain more benefit from AI than attackers do.

So, yeah, we're optimistic!
