Artificial intelligence (AI) has been transforming the cybersecurity landscape for over a decade, with machine learning (ML) speeding the detection of threats and identifying anomalous user and entity behaviors. However, recent developments in large language models (LLMs), such as OpenAI’s GPT-3, have brought AI to the forefront of the cybersecurity community. These models use documented cybersecurity information to learn how to respond to prompts on the topic. LLMs can also explain complex security issues in easy-to-understand language, bringing the non-expert into the world of cybersecurity.
While LLMs are not a silver bullet for cybersecurity, they can help detect and mitigate cyberattacks quickly and at scale. Unfortunately, as with every advancement in the cybersecurity world, bad actors are also using LLMs to increase the breadth and speed of their attacks, with some early success.
One of the significant challenges in leveraging AI for cybersecurity is building trust. Trust is everything in security, and for years, vendors have played “fast and loose” with “AI/ML,” often overstating their capabilities to drive increased interest in their offerings. This practice has left many cybersecurity decision-makers skeptical of any technology touting AI/ML capabilities. Accuracy and explainability are two further significant challenges. The data used to train AI/ML models drives their output; if the training data does not represent the real world, the model will develop a bias that skews its ability to deliver the expected results. Some data, such as threat intelligence, benign and malicious file characteristics, and indicators of compromise (IOCs), applies to everyone. However, user and entity behavior data applies only to the specific user or entity.
Another significant challenge is data security. Defining and controlling which training data can be shared and which must stay within the organization is essential. In the wrong hands, this data could help bad actors craft attacks that subvert AI/ML’s ability to identify their files, applications, and behaviors as nefarious. As a result, governments and commercial entities need to build regulations, standards, and best practices to thwart these new threats.
For example, Extended Detection and Response (XDR) products enable non-expert users to deliver outcomes once thought achievable only by senior security personnel. Non-experts can complete comprehensive investigations and responses without writing complex queries or developing scripts. As a result, the current talent gap between the supply of and demand for security professionals can begin to narrow.
Recent AI developments will speed up automation, making detection and response faster and more effective. However, while automating data collection, normalization, detection, and correlation is possible, complex bespoke attacks still require the involvement of professional security experts. In addition, attackers frequently exploit human vectors, as seen in high-profile attacks such as SolarWinds and Colonial Pipeline. While it is impossible to eliminate the possibility of a user inadvertently becoming part of a cyberattack, continual technology advancement, coupled with the availability of MDR/MSSP services, makes it possible to steadily reduce the likelihood that a user’s actions, whether intentional or accidental, lead to a widespread breach.
Regarding progress indicators for AI in cybersecurity, security posture versus security budget is the ultimate test: does AI deliver better outcomes, cheaper and faster, than the alternative? Enterprise security teams can express AI’s impact through changes in actual performance metrics, such as mean time to detect (MTTD) and mean time to respond (MTTR). MSSPs have the best opportunity to articulate the impact of AI on their bottom line, positively or negatively. Since they deliver services to drive revenue, they should see tangible financial implications after adopting AI-driven cybersecurity solutions.

There are no magic bullets in the cybersecurity world. Security vendors that promote any technology as 100% effective, or that claim the ability to prevent and detect all breaches, should be derided by the community, as they display their misunderstanding of cybersecurity for all to see. That said, recent developments in LLMs and other AI technologies can improve the speed and ease with which threats are detected and mitigated. The cybersecurity community must have trust, accuracy, and accountability to embrace AI’s full potential. Additionally, there will always be complex attacks that require human involvement, and progress indicators should focus on metrics such as security posture versus security budget and SOC automation. By addressing these challenges and tracking progress, AI can help us maintain a more secure digital world.
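As a concrete illustration of the MTTD/MTTR framing above, the short Python sketch below computes both metrics as averages of timestamp deltas across a handful of incidents. The incident records and field names are hypothetical assumptions made for the example, not the schema of any particular SIEM or XDR product; the point is simply that both metrics are straightforward enough to measure consistently before and after adopting an AI-driven workflow.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names are illustrative only.
# Each incident tracks when malicious activity began, when it was
# detected, and when it was contained/resolved.
incidents = [
    {"started": "2023-03-01T02:15", "detected": "2023-03-01T06:45", "resolved": "2023-03-01T11:30"},
    {"started": "2023-03-04T14:00", "detected": "2023-03-04T14:20", "resolved": "2023-03-04T16:05"},
    {"started": "2023-03-09T22:10", "detected": "2023-03-10T01:40", "resolved": "2023-03-10T09:00"},
]

def hours_between(earlier: str, later: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    delta = datetime.fromisoformat(later) - datetime.fromisoformat(earlier)
    return delta.total_seconds() / 3600

# MTTD: average time from the start of an incident to its detection.
mttd = mean(hours_between(i["started"], i["detected"]) for i in incidents)

# MTTR: average time from detection to resolution.
mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")
```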