So far in applying threat intelligence to your security processes, we have focused on the detection/security monitoring and investigation/incident response functions. Now let’s jump backwards in the attack chain and look at how threat intelligence can be used in preventative controls within your environment.

By ‘preventative’ we mean any control that sits inline in the traffic flow and can therefore block attacks. These include:

  1. Network Security Devices: These are typically firewalls (including next-generation models) and intrusion prevention systems. But you can also include devices such as web application firewalls, which operate at a different level of the stack but are inline and can thus block attacks.
  2. Content Security Devices/Services: Web and email filters can also function as preventative controls because they inspect traffic as it passes through and can enforce policies/block attacks.
  3. Endpoint Security Technologies: Endpoint protection is a broad category, which includes traditional endpoint protection (anti-malware) as well as new-fangled advanced endpoint protection technologies such as isolation and advanced heuristics. We described the current state of endpoint security in our Advanced Endpoint Protection paper, so check that out for detail on the technologies.

TI + Preventative Controls

Once again we consider how to apply TI through a process map. So we dust off the very complicated Network Security Operations process map from NSO Quant, simplify it a bit, and add threat intelligence.

[Figure: TI + Preventative Controls process map]

Rule Management

The process starts with managing the rules that underlie the preventative controls. This includes attack signatures and the policies & rules that control attack response. The process trigger will probably be a service request (open this port for that customer, etc.), signature update, policy update, or threat intelligence alert (drop traffic from this set of botnet IPs). We will talk more about threat intel sources a bit later.

  1. Policy Review: Given the huge variety of monitoring and blocking policies available on preventative controls, keeping the rules current is critical. Keep the severe performance hit (and false positive implications) of deploying too many policies in mind as you decide which ones to deploy.
  2. Define/Update/Document Rules: This next step involves defining the depth and breadth of the security policies, including the actions (block, alert, log, etc.) to take if an attack is detected – whether via rule violation, signature trigger, threat intelligence, or another method. Initial policy deployment should include a QA process to ensure no rules impair critical applications’ ability to communicate, either internally or externally.
  3. Write/Acquire New Rules: Locate the signature, acquire it, and validate the integrity of the signature file(s). These days most signatures are downloaded, so this ensures the download completed properly; a minimal integrity-check sketch follows this list. Then perform an initial evaluation of each signature to determine what type of attack it detects, whether it is relevant in your environment, its general priority for your organization, and any possible workarounds.
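Validating a downloaded signature bundle can be as simple as comparing it against a vendor-published checksum. Here is a minimal Python sketch, assuming (hypothetically) that the vendor publishes a SHA-256 digest alongside each bundle; the URLs are placeholders rather than any specific vendor's API.

```python
import hashlib
import urllib.request

# Hypothetical vendor endpoints -- substitute your vendor's actual URLs.
SIGNATURE_URL = "https://updates.example-vendor.com/sigs/latest.tar.gz"
CHECKSUM_URL = SIGNATURE_URL + ".sha256"

def download(url: str) -> bytes:
    """Fetch a file and return its raw bytes."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

def fetch_and_verify_signatures() -> bytes:
    """Download a signature bundle and confirm it matches the published SHA-256 digest."""
    bundle = download(SIGNATURE_URL)
    expected = download(CHECKSUM_URL).decode().split()[0].strip().lower()
    actual = hashlib.sha256(bundle).hexdigest()
    if actual != expected:
        raise ValueError(f"Signature bundle failed integrity check: {actual} != {expected}")
    return bundle

if __name__ == "__main__":
    sigs = fetch_and_verify_signatures()
    print(f"Verified signature bundle: {len(sigs)} bytes")
```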

Change Management

This phase handles rule additions, changes, updates, and deletions.

  1. Process Change Request: Based on a trigger from the Rule Management process, a change to the preventative control(s) is requested. The change’s priority is based on the nature of the rule update and the risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves multiple stakeholders – ranging from application, network, and system owners to business unit representatives if downtime or changes to application use models are anticipated.
  2. Test and Approve: This step includes development of test criteria, performance of any required testing, analysis of results, and release approval of the signature/rule change once it meets your requirements. This is critical if you are looking to automate rules based on threat intelligence, as we will discuss later in the post. Changes may be implemented in log-only mode to observe their impact before committing to blocking mode in production (critical for threat intelligence-based rules). With an understanding of the impact of the change(s), the request is either approved or denied.
  3. Deploy: Prepare the target devices for deployment, deliver the change, and return them to normal operation. Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure no disruption to production systems.
  4. Audit/Validate: Part of making the change is not only having the Operations team confirm it during the Deploy step, but also having another entity (internal or external, but not part of Ops) audit it to provide separation of duties. This involves validating the change to ensure the policies were properly updated, and matching it against the specific request. This closes the loop and ensures there is a documentation trail for every change. Depending on how automated you want this process to be, this step may not apply.
  5. Monitor Issues/Tune: The final step of the change management process involves a burn-in period, when each rule change is scrutinized for unintended consequences such as unacceptable performance impact, false positives, security exposures, or undesirable application impact. For threat intelligence-based dynamic rules, false positives are the biggest concern. The testing process in the Test and Approve step is intended to minimize these issues, but there are variances between test environments and production networks, so we recommend a probationary period for each new or updated rule, just in case; a minimal sketch of such a probationary rollout follows this list.
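To make the burn-in concrete, here is a minimal sketch of a probationary rollout: a threat intelligence-derived rule starts in alert-only mode and is promoted to blocking only after a burn-in period with an acceptably low false positive rate. The thresholds, field names, and promotion logic are illustrative assumptions, not a prescription.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative thresholds -- tune to your own risk tolerance.
PROBATION_PERIOD = timedelta(days=7)
MAX_FALSE_POSITIVE_RATE = 0.01  # at most 1% of hits confirmed as false positives

@dataclass
class RuleChange:
    rule_id: str
    deployed_at: datetime
    mode: str = "alert"   # start in log/alert-only mode
    hits: int = 0
    false_positives: int = 0

    def record_hit(self, confirmed_false_positive: bool = False) -> None:
        """Track every match during the burn-in period."""
        self.hits += 1
        if confirmed_false_positive:
            self.false_positives += 1

    def ready_to_block(self, now: datetime) -> bool:
        """Promote to blocking only after probation with an acceptable false positive rate."""
        if now - self.deployed_at < PROBATION_PERIOD:
            return False
        if self.hits == 0:
            return True  # nothing matched; enforcing carries little disruption risk
        return (self.false_positives / self.hits) <= MAX_FALSE_POSITIVE_RATE

def review(rule: RuleChange, now: datetime) -> None:
    """Periodic review job: flip probationary rules into blocking mode when they qualify."""
    if rule.mode == "alert" and rule.ready_to_block(now):
        rule.mode = "block"  # in practice, push this change through your firewall/IPS management API
        print(f"{rule.rule_id}: promoted to blocking mode")
```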

Automatic Deployment

The promise of applied threat intelligence is to have rules updated dynamically, based on intelligence gleaned from outside your organization. It adds a bit of credibility to “getting ahead of the threat”. You can never really get ‘ahead’ of the threat, but you can certainly prepare before it hits you. But security professionals need to get comfortable with updating rules based on data.

We joke in conference talks about how security folks hate the idea of Skynet tweaking their defenses. There is still substantial resistance to updating access control rules on firewalls or IPS blocking actions without human intervention. But we expect this resistance to ebb as cloud computing continues to gain traction, including in enterprise environments. The only way to manage an environment at cloud speed and scale is with automation. So automation is already the reality in pretty much every virtualized environment, and it is making inroads into non-virtualized security as well.

So what can you do to get comfortable with automation? Automate things! No, we aren’t being cheeky. You need to start simple – perhaps by implementing blocking rules based on very bad IP reputation scores. Monitor your environment closely to ensure minimal false positives. Tune your rules if necessary, and then move on to another use case.
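As a first automation step, here is a minimal sketch of generating block rules from an IP reputation feed. It assumes a hypothetical CSV feed of ip,score pairs where higher scores are worse; the feed URL, score scale, threshold, and rule syntax are all placeholders you would adapt to your own feed and firewall.

```python
import csv
import io
import urllib.request

# Hypothetical reputation feed: CSV lines of "ip,score", higher score = worse (0-100 assumed).
FEED_URL = "https://ti.example.com/ip-reputation.csv"
BLOCK_THRESHOLD = 95  # start conservative: only the very worst scores become block rules

def fetch_high_risk_ips(feed_url: str = FEED_URL) -> list[str]:
    """Pull the reputation feed and keep only IPs at or above the blocking threshold."""
    with urllib.request.urlopen(feed_url, timeout=30) as resp:
        reader = csv.reader(io.StringIO(resp.read().decode()))
        return [row[0] for row in reader if len(row) == 2 and int(row[1]) >= BLOCK_THRESHOLD]

def build_block_rules(ips: list[str]) -> list[str]:
    """Render vendor-neutral deny rules; adapt the syntax to your firewall's API or CLI."""
    return [f"deny ip from {ip} to any" for ip in sorted(set(ips))]

if __name__ == "__main__":
    rules = build_block_rules(fetch_high_risk_ips())
    print(f"Generated {len(rules)} block rules; review (or log-only test) before pushing to production")
```

Starting with a very high threshold keeps the blast radius small; as the feed proves itself you can lower the bar or let the rules deploy automatically.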

Not all situations where automated response makes sense are related to threat intelligence. In case of a data breach, lockdown, or zero-day attack (either imminent or in progress), you might want to implement (temporary) blocks or workarounds automatically based on predefined policies. For example, if you detect a device or cloud instance acting strangely, you can automatically move it to a quarantine network (or security zone) for investigation, as sketched below. This doesn’t (and shouldn’t) require human intervention, so long as you are comfortable with your trigger criteria.
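For example, in an AWS environment the quarantine step might look something like this sketch, which uses boto3 to swap a suspect instance's security groups for a pre-created, deny-all quarantine group. The group ID and trigger are hypothetical, and the same idea applies to quarantine VLANs or zones on-premise.

```python
import boto3

# Assumes a pre-created, deny-all "quarantine" security group (hypothetical ID below).
QUARANTINE_SG_ID = "sg-0123456789abcdef0"

def quarantine_instance(instance_id: str, region: str = "us-east-1") -> None:
    """Replace an instance's security groups with the quarantine group, isolating it for investigation."""
    ec2 = boto3.client("ec2", region_name=region)
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG_ID])
    print(f"{instance_id} moved to quarantine security group {QUARANTINE_SG_ID}")

# Example trigger: called by your monitoring pipeline when an instance trips a
# predefined anomaly policy (e.g., unexpected outbound connections).
# quarantine_instance("i-0abc123def456789a")
```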

Useful TI

Now let’s consider collecting external data useful for preventing attacks. This includes the following types of threat intelligence:

  • File Reputation: File reputation can be thought of as just a fancy term for traditional AV, but whatever you call it, malware proliferates via file transmission. Polymorphic malware makes signature matching much harder, but not impossible. The ability to block known-bad files is valuable – the closer to the edge of the network, the better.
  • Adversary Networks: Some networks are just no good. They are run by disreputable hosting companies that provide a safe haven for spammers, bot masters, and other online crime factions. There is no reason your networks should communicate with them. You can use a dynamic list of known-bad and/or suspicious addresses to block both ingress and egress traffic. As with any intelligence feed, monitor its effectiveness, because the set of known-bad networks changes every second of every day, and keeping current is critical.
  • Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices, especially on endpoints. The shiny new term for an attack signature is Indicator of Compromise. But whatever you call it, an IoC is a handy machine-readable way to identify registry, configuration, or system file changes that indicate what malicious code does to devices – which is why we call it a malware indicator. This kind of detailed telemetry from endpoints and networks enables you to prevent attacks on the way in, and take advantage of others’ misfortune.
  • Command and Control Patterns: One specialized type of adversary network detection is intelligence on Command and Control (C&C) networks. These feeds track global C&C traffic to pinpoint malware originators, botnet controllers, and other IP addresses and sites to watch for as you monitor your environment.
  • Phishing Sites: Current advanced attacks tend to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically find email the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and their tactics, and give you something to block on your web filters and email security services. (A minimal sketch of routing these feed types to the right controls follows this list.)
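To tie these feed types together, here is a minimal sketch of routing indicators from a simplified, hypothetical JSON feed to the preventative control best positioned to block each type. Real feeds arrive as STIX/TAXII, CSV, or vendor-specific JSON, so treat the format and mapping as assumptions.

```python
import json

# Illustrative indicator records; real feeds use richer formats and many more fields.
SAMPLE_FEED = """
[
  {"type": "sha256", "value": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
  {"type": "ip", "value": "198.51.100.23"},
  {"type": "domain", "value": "login-update.example-phish.net"}
]
"""

# Map each indicator type to the preventative control best placed to block it.
CONTROL_FOR_TYPE = {
    "sha256": "endpoint / email gateway file blocklist",
    "ip": "firewall / IPS ingress and egress rules",
    "domain": "web filter / email security blocklist",
}

def route_indicators(feed_json: str) -> dict[str, list[str]]:
    """Group feed indicators by the control that should enforce them."""
    routed: dict[str, list[str]] = {}
    for indicator in json.loads(feed_json):
        control = CONTROL_FOR_TYPE.get(indicator["type"])
        if control:
            routed.setdefault(control, []).append(indicator["value"])
    return routed

if __name__ == "__main__":
    for control, values in route_indicators(SAMPLE_FEED).items():
        print(f"{control}: {len(values)} indicator(s)")
```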

Ensuring your job security is usually job #1, so iterate and tune processes aggressively. Do add new data sources and use cases, but not too fast. Make sure you don’t automate a bad process, because that just produces false positives and system downtime faster. Slow and steady wins this race.

We will wrap up this series with our next post, on building out your threat intelligence gathering program. With all these use cases for leveraging TI to improve your security program, you need to make sure the intelligence you gather is reliable, actionable, and relevant.
