Security Management 2.0: Platform Evaluation, Part 1

By Adrian Lane
To understand the importance of picking a platform, as opposed to a product, when discussing Security Management 2.0, let’s draw a quick contrast between what we see when talking to customers of either Log Management or SIEM. Most of the Log Management customers we speak with are relatively happy with their products. They chose a log-centric offering based on limited use cases – typically compliance-driven and requiring only basic log collection and reporting. These products keep day-to-day management overhead low, and if they support the occasional forensic audit, customers are generally happy. Log Management is an important – albeit basic – business tool. Think of it like buying a can opener – it needs to perform a basic function and should always perform as expected. Customers don’t want their can opener to sharpen knives, tell time, or let the cat out – they just want to open cans. It’s not that hard. Log Management benefits from its functional simplicity – and even more from relatively modest expectations.
Contrast that against conversations we have with SIEM customers. They have been at it for 5 years (maybe more), and as a result the scope of their installations is massive – in terms of both infrastructure and investment. They grumble about the massive growth in event collection driven by all these new devices. They need to collect nearly every event type, and often believe they need real-time response. The product had better be fast and provide detailed forensic audits. They depend on the compliance reports for their non-technical audience, along with detailed operational reports for IT. SIEM customers have a daily yin-versus-yang battle between automation and generic results; between efficiency and speed; between easy and useful. It’s like a can opener attached to an entire machine shop, so everything is a lot more complicated. You can open a can, but first you have to fabricate the opener from sheet metal.
We use this analogy because it’s important to understand that there are a lot of moving parts in security management, and setting appropriate expectations is probably more important than any specific technical feature or function. So your evaluation of whether to move to a new platform needs to stay laser-focused on the core requirements to be successful. In fact, the key to the entire decision-making process is understanding your requirements, as we outlined in the last post. We keep harping on this because it’s the single biggest determinant of the success of your project.
When it comes to evaluating your current platform, you need to think about the issue from two perspectives, so we will break this discussion into two posts. First is the formal evaluation of how well your platform addresses your current and foreseeable requirements. This is necessary to quantify both critical features you depend on, as well as to identify significant deficiencies. A side benefit is that you will be much better informed if you do decide to look for a replacement. Second, we will look at some of the evolving use cases and the impact of newer platforms on operations and deployment – both good and bad. Just because another vendor offers more features and performance does not mean it’s worth replacing your SIEM. The grass is not always greener on the other side. The former is critical for the decision process later in this series; the latter is critical for understanding the ramifications of replacement.
The first step in the evaluation process is to use the catalog of requirements you have built to critically assess how well the current SIEM platform meets your needs. This means spelling out each business function, how critical it is, and whether the current platform gets it done. You’ll need to discuss these questions with stakeholders from operations, security, compliance, and any other organizations that participate in the management of SIEM or take advantage of it. You cannot make this decision in a vacuum, and lining up support early in the process will pay dividends later on. Trust us on that one.
Operations will be the best judge of whether the platform is easy to maintain and how straightforward it is to implement new policies. Security will have the best understanding of whether forensic auditing is adequate, and compliance teams are the best source of information on the suitability of reports for audit preparation. Each audience provides a unique perspective on the criticality of each function, and the effectiveness of the current platform.
In some cases, you will find that the incumbent platform flat-out does not fill a requirement – that makes the analysis pretty easy. In other cases the system works perfectly, but is a nightmare in terms of maintenance and care & feeding for any system or rule changes. In most cases you will find that performance is less than ideal, but it’s not clear what that really means, because the system could always be faster when investigating a possible breach. It may turn out the SIEM functions as desired, but simply lacks capacity to keep up with all the events you need to collect, or takes too long to generate actionable reports. Act like a detective, collecting these tidbits of information, no matter how small, to build the story of the existing SIEM platform in your environment. This information will come into play later when you weigh options, and we recommend using a format that makes it easy to compare and contrast issues. We offer the following table as an example of one method of tracking requirements, based on minimum attributes you should consider.
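To make the tracking format concrete, here is a minimal sketch of how such a requirements matrix might be captured; the requirement names, stakeholders, criticality ratings, and notes below are entirely hypothetical examples, not prescriptions:

```python
# Hypothetical requirements-tracking matrix for evaluating the incumbent SIEM.
# Each row: (requirement, stakeholder, criticality 1-5, met by current platform?, notes)
requirements = [
    ("Compliance reporting",      "Compliance", 5, True,  "Reports adequate for audit prep"),
    ("Forensic audit detail",     "Security",   4, True,  "Works, but queries slow at scale"),
    ("Rule/policy changes",       "Operations", 4, False, "Every change needs vendor support"),
    ("Event collection capacity", "Operations", 5, False, "Cannot keep up with device growth"),
]

# Flag the critical gaps: high-criticality requirements the incumbent does not meet.
gaps = [r for r in requirements if r[2] >= 4 and not r[3]]
for name, owner, crit, _, note in gaps:
    print(f"GAP ({owner}, criticality {crit}): {name} -- {note}")
```

The point of the format is not the tooling – a spreadsheet works just as well – but that every requirement carries an owner, a criticality, and a verdict, so issues can be compared side by side later.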
Security, compliance, management, integration, reporting, analysis, performance, scalability, correlation, and forensic analysis are all areas you need to evaluate in terms of your revised requirements. Prioritization of existing and desired features helps streamline the analysis. We reiterate the importance of staying focused on critical items to avoid “shiny object syndrome” driving you to select the pretty new thing, perhaps ignoring a cheap dull old saw that gets the work done.
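One way to keep that prioritization honest is to weight each evaluation area by how much it matters to you, then score the incumbent against it. A minimal sketch of that idea follows; the weights and scores are invented for illustration and would come from your stakeholder discussions:

```python
# Hypothetical weighted scoring across the evaluation areas listed above.
# weight: how critical the area is (1-5); score: how well the incumbent performs (0-10).
areas = {
    "security":          {"weight": 5, "score": 6},
    "compliance":        {"weight": 5, "score": 8},
    "management":        {"weight": 3, "score": 4},
    "integration":       {"weight": 3, "score": 7},
    "reporting":         {"weight": 4, "score": 8},
    "analysis":          {"weight": 4, "score": 5},
    "performance":       {"weight": 4, "score": 3},
    "scalability":       {"weight": 4, "score": 3},
    "correlation":       {"weight": 3, "score": 5},
    "forensic analysis": {"weight": 4, "score": 6},
}

# Overall weighted score for the incumbent platform.
total_weight = sum(a["weight"] for a in areas.values())
weighted = sum(a["weight"] * a["score"] for a in areas.values()) / total_weight
print(f"Incumbent weighted score: {weighted:.1f}/10")

# Rank areas from weakest (low score, high weight breaks ties) to find where it hurts most.
worst = sorted(areas, key=lambda k: (areas[k]["score"], -areas[k]["weight"]))[:3]
print("Weakest areas:", ", ".join(worst))
```

A low score in a heavily weighted area is a strong signal; a low score in a lightly weighted one is exactly the shiny-object trap the text warns about.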
As we mentioned, evaluating your current platform against your updated requirements is only half the process. In our next post we’ll dig into the evolving use cases that are likely not being met by your current solution. Or perhaps met, but poorly. This is where assessing your incumbent gets tricky – new technologies can solve the problems identified as new requirements, but may cause unintended ripples. For example, if an issue with the current platform is limited scalability, you need to verify that the new proposed deployment model will work within your environment before deciding to move.