Security Information and Event Management (SIEM) services are a key part of any modern business. Computer systems are talkative things, with various apps and devices exchanging thousands of pieces of information every day, and most of them storing logs detailing their activities. Most of this information is routine and doesn’t need any oversight, but when something goes wrong, such as a security breach in an organisation, log files can be crucial to understanding an incident and deciding how to respond. Historically, SIEM solutions have been services which collect, organise and manage all the log files produced by an organisation’s computer systems in a single place. Next-Gen SIEM services add analytics, distilling the information from logs into a few simple indicators which can be presented to users to help them get an overview of a system. Using Next-Gen SIEM, a company can enhance the visibility of its whole network, enabling faster detection of incidents and comprehensive security compliance audits.

Since SIEM systems sort through and analyse huge quantities of data, it is no surprise that they are being rapidly transformed by the recent growth of big data analytics and cloud computing. These technologies enable Next-Gen SIEM software to perform ever more complex tasks, including more sophisticated incident detection and even automated response. This latter development is beginning to blur the line between SIEM and other types of cybersecurity software, such as unified threat management (UTM) packages, which focus explicitly on containing threats rather than on analytics.

Cyber Citadel’s founder, Jonathan Sharrock, and Alastair Miller, Principal Cyber Security Consultant at Spark New Zealand, sat down with Simon Howe, sales manager at Next-Gen SIEM provider LogRhythm, to discuss how SIEM is evolving and what we can expect from this technology in the future.

Robust Security Information and Event Management (SIEM) ensures centralised cyber security. (Copyright: Cyber Citadel).

How SIEM is evolving and what we can expect from this technology in the future  

Jonathan Sharrock: I see a lot of people these days who have the opinion that SIEM is evolving into something else, with UTM and other features coming into play. How do you think it has evolved and where is it going?

Simon Howe: That is certainly something we are talking about. LogRhythm has been involved in SIEM for about 18 years, and it’s a very different technology from what it used to be. Two things have evolved. First, the threat landscape itself has changed, and therefore how we respond and what we need to do to address it has changed, including the capability of the platforms. I think the fundamental difference now is the strength and sophistication of analytics. In the early days, SIEM was all about a single pane of glass: bringing information from systems together and giving users a single point of reference. It wasn’t about using advanced analytics to anything like the same extent. What has changed is how important a role analytics now play. It’s one thing to have log management and retention, but by applying machine analytics we can start to think about entirely new features, like real-time monitoring and alerting.

Alastair Miller: From implementations I’m doing at the moment, I’ve seen that people do want to find things that they don’t yet have visibility of. But the most important feature now is actioning: pinpointing where an event is and what it is.

SH: We’ve certainly seen conversations with customers shift, from caring only about visibility and seeing what is going on, towards automation and orchestration. They want to know: now that we’ve identified a compromise or incident, what can we do, using the technology itself, to drive through a solution? From an industry point of view, a key focus has been how to give the user less to do by building more capability into the platform itself. A good example is that in our platform an incident might trigger a high-level alert indicating malware or ransomware behaviour. Rather than being just an alert for analysis, we can now also queue up a response, an automated countermeasure. Of course, you can run governance around that automated response and implement approval levels for oversight. But the aim is to reduce both the time to detect and the time to respond to an incident.
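To make the workflow Simon describes concrete, the sketch below shows, in very rough terms, how an alert might be mapped to a queued countermeasure with an approval gate for governance. It is an editorial illustration only: the class names, playbook entries and severity threshold are invented and do not reflect LogRhythm’s actual platform or API.

```python
# Hypothetical alert-to-response pipeline: classify an alert, queue the mapped
# countermeasure, and hold high-impact actions for human approval (governance).
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Alert:
    host: str
    user: str
    classification: str  # e.g. "ransomware_behaviour"
    severity: int        # 0-100 risk score

# Countermeasures keyed by alert classification (illustrative actions only).
PLAYBOOK: Dict[str, Callable[[Alert], str]] = {
    "ransomware_behaviour": lambda a: f"quarantine host {a.host}",
    "suspicious_login":     lambda a: f"add {a.user} to watchlist",
    "unwanted_service":     lambda a: f"stop service on {a.host}",
}

def respond(alert: Alert, approval_threshold: int = 80) -> str:
    """Queue the mapped countermeasure; require sign-off above the threshold."""
    action = PLAYBOOK.get(alert.classification)
    if action is None:
        return f"no automated response; escalate {alert.classification} to an analyst"
    if alert.severity >= approval_threshold:
        return f"PENDING APPROVAL: {action(alert)}"  # governance gate
    return f"EXECUTED: {action(alert)}"

print(respond(Alert("ws-042", "jdoe", "ransomware_behaviour", severity=92)))
# -> PENDING APPROVAL: quarantine host ws-042
```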

JS: Are you talking about even the level of automatically quarantining a user infected with malware?

SH: Correct, that’s a good example of where we are heading. The response can be as simple as putting a user on a watchlist or stopping an unwanted service, or it can be a more elaborate response as you’re suggesting.

AM: That’s certainly a good user story if you can sell it. Great, I can see stuff in real time, but then what can I do about it if I don’t have the staff trained? Do I need a security operations centre (SOC), and can I afford one?

SH: That is exactly the use case we’re focusing on. Now we’ve got a technology that can tell us horrible things are happening, but that’s only part of the problem: it still leaves you with the question of what you’re going to do about it. Our technological focus is on how to automate and orchestrate that response, taking as much work as we can off the analyst. This is manifested in the technology itself, with smart responses and sequences of responses to events. But we’re also building in playbooks, for example, giving analysts a clearer view of what has to be done, including triage and remediation steps. We’re assuming a company has limited resources to deal with a challenge, so we try to get as much effort as possible out of the technology. A lot of development focus is on the response window, automation and orchestration.

JS: Another thing I wanted to talk about is behaviour analysis, and a development I saw where software analyses a user in comparison to their peers. I’ve seen people in the industry going in heavily for AI and machine learning here, but it still seems like you get a problem with lots of false positives.

SH: In general, in the current threat landscape we are seeing a lot more behavioural or user-based threats. We’re seeing fewer sequential, repeated scenario-based attacks, so we need to develop a different type of analytics. Traditional SIEM is really all about scenarios: if we see log x and we see log y, we can assume z is happening. And that’s great: the more quickly, effectively and completely we can conduct scenario-based analytics, the better our detection capability. But we’re increasingly seeing more insider, behaviour-based and zero-day attacks, and you can’t build a scenario around those because they haven’t happened previously, so a very different type of analytics is required. That’s where behavioural analytics come into play, and where some buzzwords like AI and machine learning start to become important. The focus now is on behavioural analytics which are able to surface threats not based on known scenarios but on behavioural anomalies. A lot of this is about comparing a user with their peers, understanding whether their behaviour is different today from other days, or unusual compared to their peers, possibly across multiple organisations.
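The “log x plus log y implies z” style of detection Simon mentions can be pictured as a simple correlation rule over a time window. The sketch below is a toy example: the event names, the five-minute window and the exfiltration scenario are all invented rather than taken from any real rule set.

```python
# Toy scenario-based correlation: if two specific events occur on the same host
# within a short window, raise an alert for the inferred scenario ("z").
from collections import defaultdict

WINDOW_SECONDS = 300  # correlate events seen within five minutes of each other

def correlate(events):
    """events: time-ordered iterable of (timestamp, host, event_name) tuples."""
    recent = defaultdict(dict)  # host -> {event_name: last timestamp seen}
    alerts = []
    for ts, host, name in events:
        recent[host][name] = ts
        seen = recent[host]
        # Scenario: an authentication failure followed by a large outbound
        # transfer on the same host is flagged as possible exfiltration.
        if ("auth_failure" in seen and "large_outbound_transfer" in seen
                and ts - seen["auth_failure"] <= WINDOW_SECONDS):
            alerts.append((host, ts, "possible_exfiltration"))
    return alerts

logs = [(0, "db-01", "auth_failure"), (120, "db-01", "large_outbound_transfer")]
print(correlate(logs))  # [('db-01', 120, 'possible_exfiltration')]
```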

A lot of this is driven by the fact that we can now put a lot more computing power into it. For example, we send metadata from our SIEM solutions to a cloud instance, where we are able to apply a lot more compute. This means we can look at a lot more data over a longer time, and that’s where you reduce false positives: looking at large datasets gives you more accuracy. A simple fact of IT at the moment is that we have more capacity, we can throw a lot more compute at the problem, and that’s why we’re seeing more advances. That’s allowing us to work more towards these behavioural analytics. There’s now less involvement required from the user, and that is where AI and machine learning are starting to be applied. The technology itself is asking questions and understanding data, so there’s much less need for customers to configure, build and understand the whole system themselves. There is a degree of supervised learning, where the analytics can be informed by, for example, asking a user to identify whether a particular behavioural anomaly is dangerous or not. That can help tweak the analytics and improve accuracy, but the real focus is on unsupervised learning. The idea is to require less effort, less configuration, and less user touch.
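The peer-comparison idea can be reduced to a very small statistical test for illustration: flag a user whose metric for the day sits far outside their peer group’s distribution. Real behavioural analytics are far richer than this; the metric, threshold and data below are invented for the example.

```python
# Toy peer-group anomaly check: flag a user whose daily metric is more than
# `threshold` standard deviations away from the peer-group mean (a z-score test).
from statistics import mean, stdev

def peer_anomaly(user_value, peer_values, threshold=3.0):
    mu, sigma = mean(peer_values), stdev(peer_values)
    if sigma == 0:
        return user_value != mu
    return abs(user_value - mu) / sigma > threshold

# Example: megabytes downloaded today by one user versus their peers.
peers = [210, 180, 195, 220, 205, 190]
print(peer_anomaly(1500, peers))  # True: far outside the peer baseline
print(peer_anomaly(215, peers))   # False: within the normal range
```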

AM: One of the key questions is business value, and that level of automation and those results provide real business value: you’re essentially getting a virtual person.

JS: So, if you’re thinking about building and deploying a SIEM, I know you need a lot of initial time and effort to clean up logs, but when do you start seeing benefits? Are we talking days, months or years? Is it something you see early in some products and later in others?

SH: Great question! Technologies today are definitely more advanced and easier to implement, so it does take less time to get value out of them. That’s a general reality of technology being smarter. Some of the pain seen in large deployments of legacy solutions would have left some poor analysts crying in a corner. They were difficult and took months to build, but that sort of deployment is fading out. When we talk to customers about deployment, we’re deploying software as an appliance or virtual service in a 3-5 day cycle, with a month being the typical timeframe to get value from the initial use case. But you’re not done there: you then continue to build more capability and use cases to get more value out of the system.

The question we typically put to a customer trying to understand how hard the deployment will be, including how many people to dedicate to it, is first to define what value they are trying to get out of it and to identify key use cases. We then go in and meet the initial requirements, which are often compliance related, tied to particular standards like PCI (Payment Card Industry) or ISO 27000. We help a customer understand how to meet the initial requirements and define the value expected, and then we find we can be much more effective at demonstrating that value.

JS: That leads me on to a question about compliance in general, which I think is quite a big and interesting part of what we do. This is becoming even more important with regulations such as the NDB (Notifiable Data Breaches) scheme in Australia, the Privacy Act in New Zealand and the GDPR (General Data Protection Regulation) in Europe. Are you getting a lot of inquiries in that direction?

SH: Yeah, there’s a huge push in that direction. One or more of those compliance frameworks feature in, if not sit at the heart of, most of the customer requirements we’re dealing with. There are a number of compliance frameworks, sometimes internal but more often external, driving a need for log management, log analytics and visibility, and a requirement to maintain audit and assurance around those frameworks. And the SIEM is the place to do that. So yes, those compliance capabilities and frameworks are being built into our platform, and there are many: NIST, ISO 27000, PCI, GDPR, you name it. These are all frameworks with known controls that you can have a SIEM monitor and report against, potentially saving a lot of time from an administration and reporting point of view, which can really improve the efficiency of auditing.
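One way to picture “known controls you can have a SIEM monitor and report against” is a mapping from controls to the log sources that evidence them, with a simple coverage report. The control labels and log sources below are simplified placeholders, not actual PCI, ISO 27000 or GDPR control text.

```python
# Schematic compliance coverage report: for each (placeholder) control, check
# whether the log sources that evidence it are actually being collected.
CONTROL_MAP = {
    "PCI audit-trail control":       ["firewall", "domain_controller"],
    "ISO 27000 logging control":     ["app_server", "database"],
    "GDPR breach-detection support": ["dlp", "email_gateway"],
}

COLLECTED_SOURCES = {"firewall", "domain_controller", "app_server"}

def coverage_report(control_map, collected):
    for control, sources in control_map.items():
        missing = [s for s in sources if s not in collected]
        status = "OK" if not missing else "MISSING: " + ", ".join(missing)
        print(f"{control:32} {status}")

coverage_report(CONTROL_MAP, COLLECTED_SOURCES)
# -> OK; MISSING: database; MISSING: dlp, email_gateway (one line per control)
```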

JS: Just to wrap up, what do you see happening in the next few years, what direction do you see everything going in?

SH: Well, I think we made a point earlier about the advances that have been made over time. We’ve advanced in terms of the sophistication of the analytics we can apply to the platform, and that continues to evolve. Because we’re also now, from a storage and compute perspective, able to throw more at this problem, we are moving towards even more sophisticated analytics. I think that trajectory is still very much up and to the right of the Gartner Magic Quadrant. As we’re increasingly able to expand and continue using cloud computing, that will give us an incredible capacity to work with data, which will only improve the visibility we get from these platforms, because we’re able to do more with that data. And I think we’re increasingly going to refine the behavioural and heuristic analytics. In that area, where we’re talking about AI and machine learning, I think there’s going to be rapid acceleration, enabling far more sophisticated tools that are much easier for the customer to use.

– Jonathan Sharrock, Cyber Citadel