Our motivation for launching the Security Management 2.0 research project lies in the general dissatisfaction with SIEM implementations, which in some cases have not delivered the expected value. The issues typically result from failure to scale, poor ease of use, excessive effort for care and feeding, or simply failed execution on the customer side. Granted, some of the discontent is clearly navel-gazing: parsing and analyzing log files as part of your daily job is boring, mundane, and error-prone work you’d rather not do. But dissatisfaction with SIEM is largely legitimate, and it has gotten worse as system load has grown and platforms have been subjected to additional security requirements driven by new and creative attack vectors. This all spotlights the fragility and poor architectural choices of some SIEM and Log Management platforms, especially early movers. Given that companies need to collect more data, not less, review and management just get harder. Exponentially harder.

This post will not focus on user complaints; rehashing them doesn’t help solve problems. Instead let’s focus on the changes in SIEM platforms driving users to revisit their platform decisions. There are over 20 SIEM and Log Management vendors in the market, most of which have been at it for 5-10 years. Each vendor has evolved its products (and services) to meet customer requirements, as well as to provide some degree of differentiation against the competition. We have seen new system architectures to maximize performance, increase scalability, leverage hybrid deployments, and broaden collection via CEF and universal collection format support. Usability enhancements include capabilities for data manipulation, addition of contextual data via log/event enrichment, and more powerful tools for management, reporting, and visualization. Data analysis enhancements include expanded support for dozens of data types, enabling monitoring, correlation/alerting, and reporting on change controls; configuration, application, and threat data; content analysis (a poor man’s DLP); and user activity monitoring.
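To make the collection and enrichment improvements a bit more concrete, here is a minimal sketch (in Python, with purely illustrative field names and lookup data, not any vendor’s actual schema) of parsing a CEF-formatted event and tacking asset context onto it before correlation:

```python
def parse_cef(line: str) -> dict:
    """Split a CEF record into its seven header fields plus key=value extensions."""
    parts = line.split("|", 7)
    event = {
        "cef_version": parts[0].split(":", 1)[1],
        "vendor": parts[1],
        "product": parts[2],
        "device_version": parts[3],
        "signature_id": parts[4],
        "name": parts[5],
        "severity": parts[6],
    }
    # Extensions are key=value pairs; this ignores escaped spaces for brevity.
    extension = parts[7] if len(parts) > 7 else ""
    for pair in extension.split():
        key, _, value = pair.partition("=")
        event[key] = value
    return event

# Hypothetical context source -- in practice this would come from an asset
# database or directory service, not a hard-coded dictionary.
ASSET_CONTEXT = {"10.0.0.1": {"owner": "finance", "criticality": "high"}}

def enrich(event: dict) -> dict:
    """Attach contextual data about the source asset to the normalized event."""
    event["asset_context"] = ASSET_CONTEXT.get(event.get("src"), {})
    return event

sample = "CEF:0|ExampleVendor|ExampleIDS|1.0|100|Worm stopped|10|src=10.0.0.1 dst=10.0.0.2 spt=1232"
print(enrich(parse_cef(sample)))
```

Commercial platforms obviously do this at far greater scale and with far richer context, but the basic pattern (normalize, then enrich, then correlate) is the same.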

With literally hundreds of new features to comb through, it’s important to recognize that not all innovation is valuable to you, so keep irrelevant capabilities out of your analysis of the benefits of moving to a new platform. Just because the shiny new object has lots of bells and whistles doesn’t mean they are relevant to your decision. Our research shows the most impactful enhancements have been improvements in scalability, along with reduced storage and management costs. Specific examples include mesh deployment models, where each device provides full logging and SIEM functionality, moving real-time processing closer to the event sources. As we described in Understanding and Selecting SIEM/Log Management, the right architecture can deliver the trifecta of fast analysis, comprehensive collection/normalization/correlation of events, and single-point administration, but this requires a significant overhaul of early SIEM architectures. Every vendor meets the basic collection and management requirements, but only a few platforms do well at modern scale and scope.
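To illustrate the mesh model, here is a rough sketch of the division of labor: each node collects and correlates locally, and only forwards the resulting alerts to a central console. The classes and the trivial brute-force rule are hypothetical, purely to show where the processing happens, not how any particular product implements it.

```python
class CentralConsole:
    """Single point of administration: receives alerts, not raw events."""
    def __init__(self):
        self.alerts = []

    def receive(self, node_id, alert):
        self.alerts.append((node_id, alert))


class MeshNode:
    """Full logging + SIEM functionality deployed close to the event sources."""
    def __init__(self, node_id, console, threshold=5):
        self.node_id = node_id
        self.console = console
        self.threshold = threshold
        self.failed_logins = {}

    def ingest(self, event):
        # Local real-time correlation: flag repeated failed logins per source,
        # and only push the resulting alert upstream.
        if event.get("type") == "failed_login":
            src = event.get("src")
            self.failed_logins[src] = self.failed_logins.get(src, 0) + 1
            if self.failed_logins[src] == self.threshold:
                self.console.receive(self.node_id,
                                     {"rule": "brute_force", "src": src})


console = CentralConsole()
node = MeshNode("branch-office-1", console)
for _ in range(5):
    node.ingest({"type": "failed_login", "src": "192.0.2.10"})
print(console.alerts)  # [('branch-office-1', {'rule': 'brute_force', 'src': '192.0.2.10'})]
```

The point is architectural: raw events stay close to their sources, and the single point of administration only sees what matters.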

These architectural changes to enhance scalability and extend data types are seriously disruptive for vendors; they typically require a proverbial “brain transplant”: an extensive rebuild of the underlying data model and architecture. The cost in time, manpower, and disrupted reliability was too high for some early market leaders, so they opted instead to innovate with sexy new bells and whistles, which were easier and faster to develop and show off, but left them behind the rest of the market on real functionality. This is why we all too often see a web console, some additional data sources (such as identity and database activity data), and a plethora of quasi-useful feature enhancements tacked onto a centralized server with limited scalability: that option costs less and carries less vendor risk. It sounds trite, but it is easy to be distracted from the most important SIEM advancements: those that deliver on the core values of analysis and management at scale.

Speaking of scalability issues: coinciding with the increased acceptance (and adoption) of managed security services, we see many organizations looking at outsourcing their SIEM. Given the increased scaling requirements of today’s security management platforms, making compute and storage more of a service provider’s problem is very attractive to some organizations, and the commoditization of simple network security event analysis makes outsourcing SIEM all the more viable. Moving to a managed SIEM service also allows customers to save face by addressing the shortcomings of their current product without needing to acknowledge a failed investment. In this model the customer defines the reports and security controls, and the service provider deploys and manages the SIEM functions.

Of course there are limitations to some managed SIEM offerings, so it all gets back to what problem you are trying to solve with your SIEM and/or Log Management deployment. To make things even more complicated, we also see hybrid architectures in early use, where a service provider handles the fairly straightforward network (and server) event log analysis, correlation, and reporting, while an in-house security management platform handles higher-level analysis (identity, database, application logs, etc.) and deeper forensics, as sketched below. We’ll discuss these architectures in more depth later in this series.
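As a simplified illustration of that split, the routing below sends routine network and server sources to the managed service and keeps higher-value sources in house. The source-to-destination mapping is hypothetical and would be driven by your own requirements, not by anything a provider prescribes.

```python
# Which event sources go where in a hypothetical hybrid deployment.
MANAGED_SOURCES = {"firewall", "ids", "vpn", "server_syslog"}
IN_HOUSE_SOURCES = {"identity", "database", "application"}

def route(event: dict) -> str:
    """Decide whether an event goes to the managed service or the in-house SIEM."""
    source = event.get("source_type")
    if source in MANAGED_SOURCES:
        return "managed_service"   # provider handles correlation and reporting
    if source in IN_HOUSE_SOURCES:
        return "in_house_siem"     # deeper analysis and forensics stay internal
    return "in_house_siem"         # default: keep unknown sources for internal review

events = [
    {"source_type": "firewall", "msg": "denied connection"},
    {"source_type": "database", "msg": "privileged SELECT on cardholder data"},
]
for e in events:
    print(route(e), "<-", e["source_type"])
```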

But this Security Management 2.0 process must start with the problem(s) you need to solve. Next we’ll talk about how to revisit your security management requirements, ensuring that you take a fresh look at the issues to make the right decision for your organization moving forward.
