TL;DR: AI's role in cybersecurity is complex, offering both significant advantages and serious risks. It enhances threat detection, incident response, and predictive analytics, making it a valuable asset for security teams. However, AI also empowers cybercriminals with advanced tools for social engineering, automated attacks, and sophisticated malware. Internal threats, like data leakage and rogue virtual agents, further complicate the landscape. To protect against these challenges, organizations must establish strong governance policies, train employees on AI risks, and implement AI-driven security solutions. Balancing these measures ensures AI remains an asset rather than a liability in the fight against cyber threats.
"I asked my AI to write a firewall, and it came back with a haiku. I'm not sure how effective that will be against ransomware." - Some unfunny IT guy
Artificial Intelligence (AI) has revolutionized countless industries, but its impact on cybersecurity is particularly complex. While AI offers immense potential to bolster defenses, it's also being weaponized by cybercriminals. That raises the big question: is AI cybersecurity's best friend or its worst enemy? I can confidently say the people who ignore AI within cybersecurity are the ones at the greatest risk. To find an answer, let's look at some external and internal threats, ways to protect against them, and then finish with some of the ways AI assists cybersecurity teams.
External Threats
AI enables hyper-realistic communications (email, SMS, calls, video chats, etc.) with customized intent relevant to the recipient, as well as deepfakes impersonating executives to authorize fraudulent transactions. These are becoming far more believable to the average user.
AI-powered malware can evade detection, self-mutate, and spread rapidly. Typical malware detectors watch for specific signatures or outbound communication to known malicious IPs. AI-powered variants can lie dormant, silently monitor and map the network, and then decide when to spread for maximum impact.
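To make that concrete, here's a minimal sketch of why hash-based signature matching breaks down against self-mutating code (the payloads and signature database are made up for illustration):

```python
import hashlib

# A toy "signature database": hashes of previously seen malicious payloads.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"original_malware_payload").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Classic detection: flag a payload only if its hash is already known."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# The original sample is caught...
print(signature_match(b"original_malware_payload"))  # True

# ...but mutating a single byte produces a new hash, so the same
# behavior slips past the signature database entirely.
print(signature_match(b"original_malware_payloae"))  # False
```

This is exactly why behavior-based detection (covered under endpoint protection below) matters so much.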
AI can analyze vast amounts of data to identify vulnerabilities and tailor attacks to specific individuals or organizations. It can pull the latest vulnerability data, perform reconnaissance on an organization to gather the necessary information, and automate attacks with continuous escalation attempts far faster than a human could, and the attacker doesn't need to be experienced to use it.
Internal Threats
Employees can easily enter confidential information into productivity tools like ChatGPT. That input may be used to further train the underlying large language model, where someone else could eventually see the information.
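As a rough illustration of one mitigation, here's a sketch of screening outbound prompts for sensitive patterns before they ever reach an external LLM. The patterns and the screen_prompt helper are hypothetical stand-ins for a real DLP engine:

```python
import re

# Hypothetical patterns for a first-pass leak check; a real deployment
# would use a proper DLP engine and your own data classification labels.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this CONFIDENTIAL report: customer SSN 123-45-6789..."
hits = screen_prompt(prompt)
if hits:
    print(f"Blocked: prompt contains {hits}")  # stopped before leaving the network
else:
    print("Prompt allowed")
```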
Companies implementing virtual agents and automated chatbots are liable for the information presented to end users. That output could include confidential or personally identifiable information, leading to fines or loss of intellectual property, or even binding commitments like agreeing to sell a car for $1.
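One common mitigation is reviewing the agent's draft reply before it ever reaches the customer. Here's a minimal sketch, with made-up patterns and a hypothetical review_reply helper, not a production guardrail:

```python
import re

# Hypothetical guardrail: scan the agent's draft reply for policy hits
# and fall back to a safe response instead of sending it.
COMMITMENT_WORDS = re.compile(r"\b(guarantee|legally binding|deal|we will sell)\b", re.I)
PRICE_ANOMALY = re.compile(r"\$\s?0*\.?0*1\b")  # e.g. the infamous $1 car

SAFE_FALLBACK = ("I can't confirm pricing or commitments here. "
                 "Let me connect you with a representative.")

def review_reply(draft: str) -> str:
    """Return the draft reply, or a safe fallback if it trips a policy rule."""
    if COMMITMENT_WORDS.search(draft) or PRICE_ANOMALY.search(draft):
        return SAFE_FALLBACK
    return draft

print(review_reply("Sure, that's a deal - the car is yours for $1!"))
```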
That's a lot, right? You're probably asking yourself, how do we defend against such attacks? The external threats are obviously malicious, but the internal threats involve both malicious and non-malicious actors. The problem with using AI in the business is that humans and machines can unknowingly share confidential data, whether due to ignorance of how AI models are trained (among users and coders alike) or due to social engineering.
To protect your organization, consider these strategies:
I like the way IBM describes this at a high level. You can break the areas down into the data that powers the AI, the backend model itself (the engine), and the users or machines that use it. Some things blend across all of these as well, like the infrastructure where everything is stored and run, and the policies that surround everything (governance).
Governance: Establish policies around AI tools: which ones can be used (ChatGPT, Google Gemini, Anthropic Claude, etc.), how they will be monitored, and what kinds of inputs or questions can be asked. This establishes guidelines and accountability, and it should be continuously audited to minimize vulnerabilities.
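A policy only helps if it can be enforced. Here's an illustrative sketch of encoding an AI tool allowlist as data, so proxies, DLP tools, and audit jobs can all act on one source of truth; the domains, fields, and check_request helper are all assumptions for the example:

```python
# Illustrative AI usage policy expressed as data rather than a PDF,
# so enforcement points can actually read and apply it.
AI_TOOL_POLICY = {
    "chatgpt.com":       {"allowed": True,  "log_prompts": True,  "allowed_data": "public"},
    "gemini.google.com": {"allowed": True,  "log_prompts": True,  "allowed_data": "public"},
    "claude.ai":         {"allowed": False, "log_prompts": False, "allowed_data": None},
}

def check_request(domain: str, data_label: str) -> bool:
    """Allow a request only if the tool is approved and the data label permits it."""
    rule = AI_TOOL_POLICY.get(domain)
    return bool(rule and rule["allowed"] and data_label == rule["allowed_data"])

print(check_request("chatgpt.com", "public"))        # True
print(check_request("chatgpt.com", "confidential"))  # False
print(check_request("claude.ai", "public"))          # False - not on the allowlist
```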
Training: Educate your workforce about the risks of deepfakes, social engineering attacks, and the AI governance policies. It's no secret that most breaches (more than 80%, and growing) happen through social engineering rather than by externally hacking the network.
Robust cybersecurity infrastructure:
Advanced Networking - Firewalls and Intrusion Detection/Prevention Systems sit at the front of the network and are used to mitigate and block attacks before they enter. If you are managing this internally, make sure you keep up to date on patches, signatures, and log monitoring. If you are co-managing with a managed service provider, do your due diligence to ensure they actually know what they are doing; I would be hesitant if they only update on "patch Tuesday" and monitor less than 24x7. This protects access to the data, model, and usage, ensuring it stays private to only those who need it.
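As a tiny illustration of the log monitoring piece, here's a sketch that flags repeated firewall blocks from a single source; the log format, threshold, and IPs are made up:

```python
from collections import Counter

# Hypothetical parsed firewall log entries: (source_ip, action).
log_entries = [
    ("203.0.113.7", "BLOCK"), ("203.0.113.7", "BLOCK"),
    ("203.0.113.7", "BLOCK"), ("198.51.100.2", "ALLOW"),
    ("203.0.113.7", "BLOCK"), ("203.0.113.7", "BLOCK"),
]

BLOCK_THRESHOLD = 5  # repeated blocks from one source suggest a scan or brute force

blocks = Counter(ip for ip, action in log_entries if action == "BLOCK")
for ip, count in blocks.items():
    if count >= BLOCK_THRESHOLD:
        print(f"ALERT: {ip} blocked {count} times - review for scanning activity")
```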
Endpoint protection - Watch for malware and zero-day attacks on every endpoint: computers, servers, etc. Anti-malware tools detect threats based on signatures downloaded from a database of previously recognized attacks; zero-day attacks have yet to be seen, hence the name "zero-day." Endpoint Detection and Response (EDR) software establishes what typical machine behavior looks like and notifies on anomalies. This is how you catch the new stuff! This protects the servers hosting the model, the data stores behind it, and the endpoints where users access the AI tools.
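To show the baseline-and-anomaly idea in miniature, here's a sketch using a simple z-score over made-up connection counts; real EDR products build far richer behavioral models:

```python
from statistics import mean, stdev

# Hypothetical per-hour outbound connection counts for one endpoint,
# standing in for the behavioral baseline an EDR agent builds over time.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: int, threshold: float = 3.0) -> bool:
    """Flag behavior more than `threshold` standard deviations from the baseline."""
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(14))   # False - normal activity
print(is_anomalous(120))  # True - the kind of spike worth an analyst's time
```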
Data Classification - Besides controlling who can access your data, you also need to determine what level of data they can access and what can be shared. The data classification process identifies where data is located, establishes labels based on how much impact each type would have on the business if shared, and then controls who needs access to which data and what can be shared externally. This is also important when leveraging AI tools like chatbots or virtual agents, to control what data they share with the users who access them.
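Here's a minimal sketch of rule-based labeling, with illustrative patterns and label names; real classification programs combine automated discovery with business-driven labels:

```python
import re

# Illustrative classification rules; real programs tie these labels to
# handling requirements (encryption, sharing restrictions, retention).
RULES = [
    ("restricted",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),            # SSN-like
    ("confidential", re.compile(r"\b(salary|acquisition|roadmap)\b", re.I)),
    ("internal",     re.compile(r"\b(meeting notes|org chart)\b", re.I)),
]

def classify(document_text: str) -> str:
    """Return the most restrictive label whose pattern appears in the text."""
    for label, pattern in RULES:  # ordered most to least restrictive
        if pattern.search(document_text):
            return label
    return "public"

print(classify("Employee SSN: 123-45-6789"))  # restricted
print(classify("Q3 roadmap draft"))           # confidential
print(classify("Lunch menu for Friday"))      # public
```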
AI Tool Monitoring - Do you have any idea what tools your employees are using? How can you tell? AI monitoring tools can detect websites and applications like ChatGPT and Google Gemini and report which users are accessing them. If you provide access to a tool under the corporate governance policy, you can also report on what users type in as well as the results they receive. This obviously gives you the most telemetry and protection.
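As a rough sketch of the reporting side, here's how you might tally AI tool access from parsed proxy logs; the log records and domain list are hypothetical:

```python
from collections import defaultdict

# Hypothetical proxy log records: (user, domain visited).
proxy_log = [
    ("alice", "chatgpt.com"),
    ("bob",   "gemini.google.com"),
    ("alice", "chatgpt.com"),
    ("carol", "intranet.example.com"),
]

AI_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai"}

usage = defaultdict(int)
for user, domain in proxy_log:
    if domain in AI_DOMAINS:
        usage[(user, domain)] += 1

for (user, domain), count in sorted(usage.items()):
    print(f"{user} accessed {domain} {count} time(s)")
```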
AI-powered security solutions: Implement AI-driven tools to enhance threat detection and response capabilities. Fight automation with automation, and assist the cybersecurity team with AI-driven playbooks and issue tracking. We will look more into this shortly.
How does AI Assist?
AI can analyze network traffic and endpoints for anomalies, identifying potential threats before they escalate. By notifying only on anomalies, it minimizes alert fatigue, where security teams receive so many notifications that the important ones get lost; the alerts are limited to what's potentially important. AI can also be taught to detect AI-based signatures and attacks.
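For a concrete (if simplified) example, here's an anomaly detector over made-up network flow features using scikit-learn's IsolationForest; the feature choices and contamination rate are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up network flow features: [bytes_out, connections_per_min].
# Mostly routine traffic, plus one exfiltration-shaped outlier.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5_000, 20], scale=[500, 3], size=(200, 2))
flows = np.vstack([normal, [[250_000, 400]]])  # the outlier

model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = model.predict(flows)  # -1 = anomaly, 1 = normal

# Only the anomalies surface as alerts; the rest never hit the analyst's queue.
print(f"{(labels == -1).sum()} alert(s) out of {len(flows)} flows")
```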
AI can automate routine tasks, allowing security teams to focus on critical issues and respond faster. It can also assist Security Operations teams with threat identification, tracking, categorization, logging, and suggested next steps. This greatly accelerates a team's ability to respond and decreases the time it takes to train new employees when they have AI-assisted recommendations at their fingertips in real time.
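Here's a toy sketch of the suggested-next-steps idea: mapping an alert's category to a playbook and a priority. The categories, steps, and triage helper are invented for illustration:

```python
# Illustrative mapping from alert category to a suggested playbook -
# the kind of next-step guidance that speeds up junior analysts.
PLAYBOOKS = {
    "phishing":   ["Quarantine email", "Reset user credentials", "Search for similar messages"],
    "malware":    ["Isolate endpoint", "Capture memory image", "Run full AV/EDR scan"],
    "data_exfil": ["Block destination IP", "Review DLP logs", "Notify legal/compliance"],
}

def triage(alert: dict) -> dict:
    """Attach a category's playbook and a priority to a raw alert."""
    steps = PLAYBOOKS.get(alert["category"], ["Escalate to senior analyst"])
    priority = "high" if alert.get("asset_critical") else "medium"
    return {**alert, "next_steps": steps, "priority": priority}

ticket = triage({"id": 1042, "category": "malware", "asset_critical": True})
print(ticket["priority"], "-", ticket["next_steps"][0])  # high - Isolate endpoint
```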
By analyzing historical data, AI can predict potential attacks, enabling proactive measures. This is done through consolidated threat intelligence: aggregating the data into a central repository and letting the AI find correlations and predict what threat behaviors will look like, even for ones not seen yet. AI can also simplify the explanation of threats for non-security audiences so the everyday person can understand them, and generate the necessary communications for compliance forms, communication plans, and executive reviews.
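As a deliberately naive sketch of trend-based prediction, here's code that flags techniques whose observed frequency keeps rising across made-up historical incidents; real threat intelligence platforms correlate far more signals than this:

```python
from collections import Counter

# Made-up historical incident records: (week, technique observed).
history = [
    (1, "credential_stuffing"), (1, "phishing"),
    (2, "credential_stuffing"), (2, "credential_stuffing"),
    (3, "credential_stuffing"), (3, "phishing"), (3, "credential_stuffing"),
]

# Naive trend check: techniques whose frequency grows week over week are
# flagged as likely to appear again, prompting proactive hardening.
by_week = Counter((week, tech) for week, tech in history)
techniques = {tech for _, tech in history}

for tech in sorted(techniques):
    counts = [by_week[(week, tech)] for week in (1, 2, 3)]
    if counts == sorted(counts) and counts[-1] > counts[0]:
        print(f"Rising trend: {tech} {counts} - prioritize defenses here")
```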
As you can see, the integration of AI into cybersecurity is a double-edged sword, presenting both significant opportunities and formidable challenges. While AI empowers security teams with advanced tools for threat detection, incident response, and threat intelligence, it also equips cybercriminals with sophisticated methods to launch more targeted and elusive attacks. The distinction between friend and foe is therefore not clear-cut, but rather depends on how AI is managed and leveraged within an organization. By implementing robust governance policies, providing continuous training, and utilizing AI-driven security solutions, businesses can harness the power of AI to enhance their defenses while mitigating the risks associated with its misuse. The key lies in balancing innovation with vigilance, ensuring that AI serves as a powerful ally in the ongoing battle against cyber threats. So I leave it to you, is AI more good or bad in Cybersecurity?