One of the most stressful jobs in the cybersecurity sector has to be that of the security analyst. Tasked with monitoring the alerts generated by tooling in the Security Operations Centre (SOC), an individual analyst may have to triage, investigate and escalate more than 100 alerts during a 12-hour shift, as well as communicate with customers, write lengthy reports and threat models, and produce threat advisories.
It’s an unsustainable way of working that sees the analyst lurch between rapid decision-making and deep problem-solving, resulting in burnout. Every time an alert is generated, the analyst has to react, triggering a stress response. It’s like cold-starting an engine a hundred times a day, and it inevitably leads to the analyst becoming desensitised. Coping mechanisms kick in as the brain seeks ways to protect itself, and without sufficient positive feedback the analyst becomes demoralised and demotivated, so that even when a genuine alert does come in, they struggle to make the right call and respond appropriately.
A guilty secret
Alert fatigue is a recognised problem, but it remains a guilty secret. Analysts believe they should be able to cope and feel unable to ask for help, except anonymously on online cybersecurity forums. The job is not the one they signed up for, which was to hunt threats and refine and hone their skills, and even moving jobs will not resolve the issue, because they know the role will be much the same under a different employer.
A recent report by SoSafe found that 68% of cybersecurity professionals across Europe are facing burnout, potentially leaving businesses at risk of attack, with the UK topping the rankings for its high stress levels. What’s more, these burnout levels are desensitising analysts, leading to an increase in risk. It’s an issue attackers are proving quick to exploit, with the same report revealing that a combination of factors, such as the high-pressure environment (33%), long hours (29%), excessive workloads (28%) and constant firefighting (25%), is laying security departments open to attack.
SOC teams work ticket by ticket, and while they struggle with alerts, the sophistication of attacks increases, so some threats never trigger an alert at all. Techniques such as Living off the Land (LotL) attacks, which use the operating system’s own resources and leave little real trace, are notoriously hard to detect. In addition, generative AI is being used to write malware faster, while malware itself is increasingly commoditised. All of this is rendering classic detection useless: the analyst cannot see the entire picture across tooling and alerts, and so misses the opportunity to uncover whether an attack is progressing.
Why the problem is getting more urgent
Solving these problems and de-stressing the role is becoming critical, because many of these skilled cybersecurity professionals are voting with their feet and leaving the profession. According to a SOAX report, 64% of cybersecurity staff planned to leave their jobs this year, deepening the skills gap that already exists (almost 350,000 workers are needed to plug the gap in the UK, according to the 2024 ISC2 Cybersecurity Workforce Study).
The sector is well known for turning to technology to automate processes and boost efficiency, but this is now contributing to burnout. Having multiple solutions typically leads to what’s termed ‘swivel chair operations’, whereby the analyst has to turn from one system to another to obtain information. In fact, the tech stack has become so bloated that it’s not uncommon to find solutions from 10-15 vendors and 60-70 tools in use in the enterprise. We need to think smart about how we can leverage existing technology and utilise emerging technology to assist the analyst.
Using AI to deal with alerts
If a detected sequence of events is correlated with threat intelligence and fed through AI, the model can determine what we are seeing and how far the attack has progressed, and present the analyst with suggested next steps to progress the investigation. Instead of serving up every single alert to the analyst, AI qualifies the threat by asking simple questions on top of a sequence of detections: describe what is happening, and prescribe what should be done about it.
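As a rough illustration, that qualification step might look something like the minimal Python sketch below. The Detection fields, the threat_intel mapping and the ask_model stub are assumptions made for the example, not any particular product’s API; ask_model stands in for whichever LLM endpoint a SOC actually uses.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        timestamp: str
        source: str     # e.g. "EDR", "firewall", "identity provider"
        technique: str  # MITRE ATT&CK technique ID, e.g. "T1059"
        entity: str     # the host or user the detection relates to

    def enrich(detections, threat_intel):
        # Correlate raw detections with threat intelligence; threat_intel
        # maps technique IDs to known campaign context (assumed structure).
        return [(d, threat_intel.get(d.technique, "no known campaign match"))
                for d in detections]

    def build_triage_prompt(enriched):
        # The two simple questions on top of the detection sequence:
        # describe what is happening, prescribe what to do about it.
        lines = [f"{d.timestamp} {d.source} {d.technique} on {d.entity} ({ctx})"
                 for d, ctx in enriched]
        return ("Given this sequence of correlated detections:\n"
                + "\n".join(lines)
                + "\n1. Describe what is happening and how far the attack has progressed."
                + "\n2. Prescribe the next investigative steps for the analyst.")

    def ask_model(prompt: str) -> str:
        return "(model response)"  # placeholder: swap in your LLM client here

    chain = [Detection("09:02", "identity provider", "T1110", "user: jsmith"),
             Detection("09:14", "EDR", "T1059", "host: FIN-LAPTOP-07")]
    intel = {"T1059": "PowerShell abuse seen in recent LotL campaigns"}
    print(ask_model(build_triage_prompt(enrich(chain, intel))))

The point of the structure is that the analyst receives one described and prescribed incident rather than two raw alerts.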
Essentially, AI presents a chain of events to the analyst only at the point when it becomes statistically unusual and warrants human eyeballs. In this way, AI augments the analyst, significantly reducing stress levels and improving mental well-being by freeing them up to spend time on the things they find meaningful rather than qualifying alerts.
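One simple way to make ‘statistically unusual’ concrete is to score how surprising a chain of detections is against a baseline of what the environment normally produces, and only route it to a human above a threshold. The baseline counts, technique IDs and threshold below are invented for illustration; a production system would learn them from historical telemetry.

    import math
    from collections import Counter

    # Assumed baseline: how often each two-step technique transition was
    # observed in this environment over a recent window.
    BASELINE = Counter({
        ("T1110", "T1078"): 400,  # failed logins then valid-account use: common
        ("T1059", "T1021"): 3,    # scripting then lateral movement: rare
    })
    TOTAL = sum(BASELINE.values())

    def surprise(chain):
        # Self-information (in bits) of a chain of technique IDs; unseen
        # transitions get a smoothed count of 1, so rarer chains score higher.
        bits = 0.0
        for pair in zip(chain, chain[1:]):
            p = (BASELINE.get(pair, 0) + 1) / (TOTAL + 1)
            bits += -math.log2(p)
        return bits

    ESCALATE_AT = 12.0  # tuning knob: bits of surprise before a human looks

    chain = ["T1059", "T1021", "T1048"]  # script, lateral move, exfiltration
    if surprise(chain) >= ESCALATE_AT:
        print("statistically unusual: route this chain to an analyst")
    else:
        print("within normal noise: keep correlating silently")

Everything below the threshold keeps accumulating silently; the analyst’s attention is spent only on the chains the environment has rarely or never produced before.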
Looking to the future, the expectation is that AI will become an essential part of the SOC, and it could even help resolve some of the skills shortages. This is because it could effectively level up novice analysts by supplementing their skillsets, enabling them to work alongside more experienced analysts straight away. Such practices will allow both human and machine to do what they do best, and will hopefully consign alert fatigue to the history books.
Christian Have
Christian Have is CTO at Logpoint, a European cybersecurity company specialising in threat detection, incident response, and compliance solutions for mid-market organizations and MSSPs. With a background in network security, Christian has held roles as a hospital Security Specialist and Head of Network Security for the Danish National Police.