AI is going to transform IT Security Operations. A new stage in the evolution of AI-driven security operations is here because we can finally put to use the massive amounts of data we collect. But current implementations carry challenges, and they will change how we deliver IT security services to the world. This blog article details my predictions on how AI may develop to bolster our abilities.
The following diagram illustrates some IT security areas where AI can be used as it expands into multiple disciplines.
The Beginning of the Journey
1.1 We are Here
The current state of AI looks like a melting pot of potential and realized benefits. A lot of the work is still in the experimentation stage, and this includes defining what the term AI really means. How does it relate to ML? And why is AI exploding now? We have yet to discover all the right questions, and the answers are still questionable.
1.2 Sandbox
I created a stage called the Sandbox. We have to prove that AI helps us reliably. Remember the adage of “trust but verify”. It still applies, especially since AI can sound very authoritative even when it is making incorrect statements. We have all heard of the recent errors: incorrect arrests and legitimate operating system files being tagged as malware. This is the time to discover, test, and validate the right use cases.
1.3 Operationalization
This is where IT security teams start developing the foundational SOPs (standard operating procedures) to on-board, monitor, and leverage the data sources provided through the AI interfaces. At this stage, we will develop the “seeds of trust” with some amazing examples and some lessons learned from our initial mistakes.
1.4 Performance
Perhaps I am cheating here and expanding the scope from envisioning things to adding something from my wish list. It is going to be very important to start building the mechanisms to measure the performance, the effectiveness, and the quality of the work performed by your AI. Based on my experience, you should start implementing basic measurement mechanisms early. In the long term, it will help a lot.
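To make “basic measurement mechanisms” concrete, here is a minimal sketch of what I have in mind. All the names here are invented for illustration: the idea is simply to tally analyst verdicts on AI-flagged alerts and derive precision and recall from them.

```python
from dataclasses import dataclass

@dataclass
class AIMetrics:
    """Rolling tally of analyst verdicts on AI-generated alerts."""
    true_positives: int = 0   # AI flagged it, analyst confirmed it
    false_positives: int = 0  # AI flagged it, analyst dismissed it
    missed: int = 0           # real incident the AI failed to flag

    def record(self, ai_flagged: bool, analyst_confirmed: bool) -> None:
        if ai_flagged and analyst_confirmed:
            self.true_positives += 1
        elif ai_flagged:
            self.false_positives += 1
        elif analyst_confirmed:
            self.missed += 1

    @property
    def precision(self) -> float:
        flagged = self.true_positives + self.false_positives
        return self.true_positives / flagged if flagged else 0.0

    @property
    def recall(self) -> float:
        real = self.true_positives + self.missed
        return self.true_positives / real if real else 0.0
```

Even a crude tally like this, reviewed monthly, tells you whether your trust in the AI is earned or assumed.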
1.5 Trusting AI-Driven Data
AI systems are only as good as the data they are trained on. Poor-quality or biased data can lead to inaccurate predictions and false positives, which can undermine the effectiveness of threat hunting efforts. IT security teams have years of experience leveraging balanced scoring mechanisms when receiving advice (think threat intel) to determine whether an alert has enough confidence/reliability to be acted on. Is a file hash indicative of an infiltration? Is this the dropper, or an expected executable on a server? Do other IOCs confirm or cause you to question the validity of an alert? And then come the challenges (and the cost) of identifying, subscribing to, and storing these data in your data lakes (oceans). Vector databases are the right approach, but they are sufficiently different from our traditional structured data sources that we need to rethink how we consume and access this data. And don’t forget about enforcing your data retention policies.
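The balanced-scoring idea above can be sketched in a few lines. Everything here is hypothetical: the source names, weights, and threshold are placeholders to illustrate how corroborating IOCs from more trusted sources raise the confidence of an alert, not a real intel-scoring standard.

```python
# Hypothetical per-source trust weights; tune these to your own intel feeds.
SOURCE_WEIGHTS = {"internal_sandbox": 0.9, "commercial_feed": 0.7, "open_source": 0.4}

def alert_confidence(iocs: list[dict]) -> float:
    """Combine per-IOC scores into one alert confidence in [0, 1].

    Each IOC is a dict like {"source": "open_source", "score": 0.5},
    where score is the feed's own confidence in that indicator.
    """
    if not iocs:
        return 0.0
    weighted = [SOURCE_WEIGHTS.get(i["source"], 0.2) * i["score"] for i in iocs]
    return sum(weighted) / len(weighted)

def should_act(iocs: list[dict], threshold: float = 0.5) -> bool:
    """Act only when the combined confidence clears a (tunable) bar."""
    return alert_confidence(iocs) >= threshold
```

The same discipline we apply to human-curated threat intel should apply to AI-derived indicators before anyone acts on them.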
The Cross Leverage Stage
This is where we start really integrating AI into our daily security operations work. At this stage, we regularly use AI, measure its performance, and trust it to provide the expected outcomes. Integrating AI into our SOPs (standard operating procedures) is not an overnight thing. It requires discussion, training, and testing (AI will be great for tabletop exercises).
This is where the focused leverage of AI in our different IT security operations disciplines begins. I have initially split the journey into two key focus areas:
- Operations
- Threat Hunting
Position your strategy to leverage AI by mapping it to reactive (Operations) and proactive (Threat Hunting) work.
Operations
Let’s look at the Operations perspective.
I see the journey to complete AI integration with IT Security Operations encountering two stops.
2.1 Suggestive
This is where we are still learning to put the guard rails in place that let us trust AI. We will use AI to highlight operational discrepancies, using more than our traditional IT security data sources (logs, security tool alerts). I hope we will embrace additional data sources like business events, media news, and weather predictions. Yes, I am talking about ingesting operational security data into IT. This will allow IT security to leverage powerful tools to obtain near-real-time transcriptions of videos and voice.
2.2 Trusted Response
We will then move to the next level, Trusted Response. This could get very interesting. Imagine removing the challenge of every IT security team using different security tools, and normalizing to the point where an AI system could be trusted to take action to mitigate, reduce the impact, or give the IT security operations team more time to determine the final remediation strategy. Imagine MITRE providing not only remediation playbooks on “paper” but also something that could be imported into your AI server to automate that response. Scary, but with the right guard rails, we could start trusting AI.
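To sketch what such an importable playbook could look like: the format below is purely hypothetical (MITRE does not publish machine-readable remediation playbooks like this today), but it shows the guard rail I consider essential, namely that destructive steps pause for human approval.

```python
import json

# Hypothetical playbook format -- invented for illustration only.
PLAYBOOK_JSON = """
{
  "name": "isolate-suspected-dropper",
  "steps": [
    {"action": "quarantine_file", "requires_approval": false},
    {"action": "isolate_host",    "requires_approval": true}
  ]
}
"""

def run_playbook(playbook: dict, approver) -> list[str]:
    """Execute steps in order, pausing for human sign-off on risky ones.

    `approver` is a callable taking the action name and returning True/False;
    in practice it would page an on-call analyst.
    """
    executed = []
    for step in playbook["steps"]:
        if step["requires_approval"] and not approver(step["action"]):
            executed.append(f"skipped:{step['action']}")
            continue
        # A real system would call your SOAR or EDR API here.
        executed.append(f"ran:{step['action']}")
    return executed
```

The guard rail is the `requires_approval` flag: automation handles the cheap, reversible steps, while a human stays in the loop for anything that could take a host offline.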
Threat Hunting
In this area of IT security, AI could help the analyst find those anomalous behaviors. Not just IT indicators, but also combining business transactions, events, and physical actions. The need to develop shared data lakes is going to become critical. We already have ML capabilities in a wide range of existing tools, but they have tended to become siloed within particular groups. AI could integrate them to create a new, more powerful threat hunting capability.
3.1 Reactive Threat Hunting Support
I see this phase as when we grow augmented threat hunting. The incident response team, the IT security data scientist, and the analyst will use AI to confirm, validate, and learn to identify threats to an organization. If you have used any of those AI chat bots, you will know it takes time to learn to ask the right way to get the right answer. And even then, we need to validate the results. It is only through using the tools and developing a consistent mental muscle memory that we will understand how to use AI the right way.
3.2 Proactive Highlighting
Similar to the growth of private LLMs and GPTs, IT security teams will build databases with all their data sources and apply reinforcement learning. Developing these multi-dimensional data lakes will also allow us to ask the question, “What strange things are occurring in my business?”. Notice I referred to the business, not the network, the servers, or the users. This is because I foresee a major shift in the expectations of what Threat Hunting means at the business level. This could become a battlefield as different groups inside an organization vie to prove who has the best discipline to be trusted with protecting the business, not just the business’ “crown jewels”. Is it the business strategist, marketing, audit, operations? I think we need to acknowledge that each group has a different focus and set of capabilities, and let all the groups alert on their focus areas, but (this is really important) they should share data and their findings.
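As a minimal illustration of asking “what strange things are occurring in my business?”, here is a simple z-score check that treats IT and business metrics identically. This is only a sketch with invented metric names; real anomaly detection over a multi-dimensional data lake would be far richer.

```python
import statistics

def strange_things(history: dict[str, list[float]],
                   today: dict[str, float],
                   z_threshold: float = 3.0) -> list[str]:
    """Flag metrics whose value today deviates strongly from their history.

    `history` maps a metric name to its past daily values -- IT and business
    signals alike: failed logins, refund volume, after-hours badge-ins...
    """
    flagged = []
    for metric, past in history.items():
        if len(past) < 2:
            continue  # not enough history to judge
        mean, stdev = statistics.mean(past), statistics.stdev(past)
        if stdev and abs(today[metric] - mean) / stdev >= z_threshold:
            flagged.append(metric)
    return flagged
```

The point is not the statistics, which are deliberately naive here, but that the same question runs across every data source the groups agree to share.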
Compliance
The area of compliance will inevitably come up.
4.1 Proving AI
Because of the uncertainty around AI, the initial wave will be validating what data is “pooled” into the data lake(s) and how that data is leveraged inside IT Security Operations, so that it can be audited. This is not a new thing for IT security, but we will need to embrace the controls and measures required to support an audit in a new area.
4.2 Audit Support
This is where things get interesting: we use AI to prove that we are protecting PII, company secrets, and company resources. We will use AI to generate reports showing our response timelines, removing the burden of producing root cause analysis reports by hand.
4.3 Audit Approved
AI will become a prerequisite for certain audits. I remember a situation where we were required to document the IT security tools we used to prove that we were protecting the data and the business processes. It was impactful because, for the first time, IT security was seen as a business value, as opposed to just procedural insurance. I believe that, just like in the early days when we needed to confirm that we were running antivirus on the endpoint, we will be required to show how AI is being used to protect everything.
Reporting
The final area is reporting. External communications have always been critical to the success of IT security teams. Whenever I have been brought in to fix a security program or team, lack of communication has been a consistent gap. For AI, reporting is going to be critical to set people’s expectations on what AI can achieve for the organization, and to justify the shift of investment, or additional investment (AI needs data and processing power).
As with everything, it’s a journey not an overnight change.
5.1 Security Program Reporting
Initially, reporting will focus on the activities of the IT security team. Performance, and the leverage of AI to improve response, will be critical in proving and validating the use of AI.
5.2 Executive
Other parts of the organization are going to also be moving more and more into the world of AI augmentation. As I stressed above, it is going to be very important to ensure that data is shared, not siloed. Once all the minimum data pieces are in place, AI can be used to generate business level reporting, showing how the company is responding to business and IT security threats.
5.3 CAiM — A Future Role
I think we might see a new executive role, something like the Chief AI Manager, whose scope is managing the effectiveness, efficiency, and use of AI across the whole organization. This is not something that can be hidden within a department, especially given how widely the data is spread.
Timetable
Now you might be asking how long it is going to take to travel to all these destinations. To be honest, I don’t know. I think this is going to be a three- or four-year journey before we see signs that these paths are taking root. That estimate is based on watching how other emerging technologies have evolved.
Tips to Improve the Leverage of AI in IT Security Operations
1. Invest in Data Quality: Prioritize the collection and use of high-quality, unbiased data. This will improve the accuracy of AI predictions and reduce the likelihood of false positives.
2. Enhance Transparency: Work towards making AI systems more transparent and understandable. This can be achieved through techniques like explainable AI (XAI), which can help security teams better understand and trust AI outputs.
3. Gradual Integration: Instead of a complete overhaul, gradually integrate AI into existing operations. This can help ease staff into the transition and reduce resistance.
4. Address Legal and Ethical Issues: Develop clear policies and guidelines to address the legal and ethical issues associated with AI use. This includes ensuring privacy protections and establishing safeguards against misuse.
5. Balance AI and Human Oversight: While AI can greatly enhance threat hunting, it should not replace human oversight. Maintain a balance between AI and human involvement to ensure that decisions are not solely reliant on AI.