Thursday, March 31, 2016

Maximizing Value From Your WAF [New Series]

By Adrian Lane

Web Application Firewalls (WAFs) have been in production use for well over a decade, maturing from point solutions primarily blocking SQL injection to mature application security tools. In most mature security product categories, such as anti-virus, there hasn’t been much to talk about, aside from complaining that not much has changed over the last decade. WAFs are different: they have continued to evolve in response to new threats, new deployment models, and a more demanding clientele’s need to solve more complicated problems. From SQL injection to cross-site scripting (XSS), from PCI compliance to DDoS protection, and from cross-site request forgeries (CSRF) to 0-day protection, WAFs have continued to add capabilities to address emerging use cases. But WAF’s greatest evolution has taken place in areas undergoing heavy disruption, notably cloud computing and threat analytics.

WAFs are back at the top of our research agenda, because users continue to struggle with managing WAF platforms as threats continue to evolve. The first challenge has been that attacks targeting the application layer require more than simple analysis of individual HTTP requests – they demand systemic analysis of the entire web application session. Detection of typical modern attack vectors including automated bots, content scraping, fraud, and other types of misuse, all require more information and deeper analysis. Second, as the larger IT industry flails to find security talent to manage WAFs, customers struggle to keep existing devices up and running; they have no choice but to emphasize ease of set-up and management during product selection.

So we are updating our WAF research. This brief series will discuss the continuing need for Web Application Firewall technologies, and address the ongoing struggles of organizations to run WAFs. We will also focus on decreasing the time to value for WAF, by updating our recommendations for standing up a WAF for the first time, discussing what it takes to get a basic set of policies up and running, and covering the new capabilities and challenges customers face.

WAF’s Continued Popularity

The reason WAF emerged in the first place, and still one of the most common reasons customers use it, is that no other product really provides protection at the application layer. Cross-site scripting, request forgeries, SQL injection, and many common attacks which specifically target application stacks tend to go undetected. Intrusion Detection Systems (IDS) and general-purpose network firewalls are poorly suited to protecting the application layer, and remain largely ineffective for that use case. In order to detect application misuse and fraud, a security solution must understand the dialogue between application and end user. WAFs were designed for this need: they understand application protocols so they can identify attacks on applications. For most organizations, WAF is still the only way to get some measure of protection for applications.
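To make the idea of application-layer inspection concrete, here is a deliberately naive sketch of signature-based request filtering — a toy, not any vendor’s actual rule engine, and far simpler than the session-level analysis real WAFs perform. The signature patterns and function names are purely illustrative.

```python
import re

# Naive illustrative signatures. Real WAFs use far more sophisticated
# parsing, normalization, and whole-session analysis.
SIGNATURES = {
    "sqli": re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "xss": re.compile(r"<\s*script\b", re.IGNORECASE),
}

def inspect_request(params: dict) -> list:
    """Return the names of signatures matched by any parameter value."""
    hits = []
    for name, pattern in SIGNATURES.items():
        if any(pattern.search(str(v)) for v in params.values()):
            hits.append(name)
    return sorted(hits)

print(inspect_request({"user": "bob", "q": "1' OR 1=1"}))         # → ['sqli']
print(inspect_request({"comment": "<script>alert(1)</script>"}))  # → ['xss']
print(inspect_request({"user": "alice"}))                         # → []
```

The point is that a network firewall never sees request parameters in this form at all — inspection at this layer is exactly what the WAF category exists for.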

For many years sales of WAFs were driven by compliance, specifically a mandate from the Payment Card Industry’s Data Security Standard (PCI-DSS). This standard gave firms the option to either build security into their applications (very hard), or protect them with a WAF (easier). The validation requirements for WAF deployments are far less rigorous than for secure code development, so most companies opted for WAF. Shocking! You basically plug one in and get a certification. WAF has long been the fastest and most cost-effective way to satisfy Requirement 6 of the PCI-DSS standard, but it turns out there is long-term value as well. Users now realize that leveraging a WAF is both faster and cheaper than fixing bug-ridden legacy applications. The need has morphed from “get compliant fast!” to “secure legacy apps for less!”

WAF Limitations

The value of WAF is muted by difficulties in deployment and ongoing platform management. A tool cannot provide sustainable value if it cannot be effectively deployed and managed. The last thing organizations need is yet another piece of software sitting on a shelf. Or even worse an out-of-date WAF providing a false sense of security.

Our research highlighted the following issues which contribute to insecure WAF implementations, allowing penetration testers and attackers to easily evade WAF and target applications directly.

  • Ineffective Policies: Most firms complain about maintaining WAF policies. Some complaints are about policies falling behind new application features, and policies which fail to keep pace with emerging threats. Equally troubling is a lack of information on which policies are effective, so security professionals are flying blind. Better metrics and analytics are needed to tell users what’s working and how to improve.
  • Breaking Apps: Security policies – the rules that determine what a WAF blocks and what passes through to applications – can and do sometimes block legitimate traffic. Web application developers are incentivized to push new code as often as possible. Code changes and new functionality often violate existing policies, so unless someone updates the whitelist of approved application requests for every application change, a WAF will block legitimate requests. Predictably, this pisses off customers and operational folks alike. Firms trying to “ratchet up” security by tightening policies may also break applications, or generate too many false positives for the SOC to handle, leading to legitimate attacks going ignored and unaddressed in a flood of irrelevant alerts.
  • Skills Gap: As we all know, application security is non-trivial. The skills to understand spoofing, fraud, non-repudiation, denial of service attacks, and application misuse within the context of an application are rarely all found in any one individual. But they are all needed to be an effective WAF administrator. Many firms – especially those in retail – complain that “we are not in the business of security” – they want to outsource WAF management to someone with the necessary skills. Still others find their WAF in purgatory after their administrator is offered more money, leaving behind no-one who understands the policies. But outsourcing is no panacea – even a third-party service provider needs the configuration to be somewhat stable and burned-in before they can accept managerial responsibility. Without in-house talent for configuration you are hiring professional services teams to get up and running, and then scrambling to find budget for this unplanned expense.
  • Cloud Deployments: Your on-premise applications are covered by WAFs and you’re reasonably happy with your team’s technical proficiency, but as your company moves applications into public cloud infrastructure (and they are, whether you know about them or not), you may be unable to migrate your WAF and associated policies to this new and fundamentally different architecture. Your WAF vendor may not provide a suitable substitute for the appliance you run on-premise, or it might not offer the APIs you need to orchestrate WAF deployment in your dynamic cloud environment, where applications and servers come and go much faster than ever before.

Going It Alone

If all those problems sound like a nightmare, don’t worry – you’ll still be able to sleep tonight. The nightmare scenario is actually not using some type of WAF or filter to protect your applications, and instead leaving them wide open to attack. Sure, a few of you have been fortunate enough to build your applications from scratch with security in mind, which is awesome. But those unicorns are rare – the rest of us are relegated to fixing vulnerabilities in existing applications… you know, the ones designed and coded without any input from security.

In most software stacks current and future defects are discovered in the underlying platform, third-party software, and in-house applications. That’s just software. Of course attackers know this, so they probe applications for new vulnerabilities. Legacy platforms will take years, if not decades, to meet modern security requirements. And the cost of these efforts is inevitably many times greater than the original investment. In many cases it is simply not economically feasible to fix the application, so some other approach must be used – which is where WAF offers tremendous value.

Our research shows that WAF failures far more often result from operational failure than from fundamental product flaws. Make no mistake – WAFs are not a silver bullet – but a correctly deployed WAF makes it much harder to successfully attack an application, and for an attacker to avoid detection. The effectiveness of WAFs is directly related to the quality of the people and processes used to maintain them. The most serious problems with WAF are management and operational process issues rather than technology problems. We need a pragmatic process for managing Web Application Firewalls to overcome the issues which plague the technology.

Our next post will offer advice on how to deploy WAFs and look at some recent feature enhancements which address the ease of use and effectiveness issues mentioned above. We will also highlight some Quick Wins for new setups, because any new security technology needs an early win to prove the value of an approach and the effectiveness of a technology.

—Adrian Lane

Wednesday, March 30, 2016

Incite 3/30/2016: Rational People Disagree

By Mike Rothman

It’s definitely a presidential election year here in the US. My Twitter and Facebook feeds are overwhelmed with links about what this politician said and who that one offended. We get to learn how a 70-year old politician got arrested in his 20s and why that matters now. You also get to understand that there are a lot of different perspectives, many of which make absolutely no sense to you. Confirmation bias kicks into high gear, because when you see something you don’t agree with, you instinctively ignore it, or have a million reasons why it’s dead wrong. I know mine does.

Some of my friends frequently share news about their chosen candidates, and even more link to critical information about the enemy. I’m not sure whether they do this to make themselves feel good, to commiserate with people who think just like them, or in an effort to influence folks who don’t. I have to say this can be remarkably irritating because nothing any of these people posts is going to sway my fundamental beliefs.


That got me thinking about one of my rules for dealing with people. I don’t talk about religion or politics. Unless I’m asked. And depending on the person I might not engage even if asked. Simply because nothing I say is going to change someone’s story regarding either of those two third rails of friendship. I will admit to scratching my head at some of the stuff people I know post to social media. I wonder if they really believe that stuff, or they are just trolling everyone.

But at the end of the day, everyone is entitled to their opinion, and it’s not my place to tell them their opinion is idiotic. Even if it is. I try very hard not to judge people based on their stories and beliefs. They have different experiences and priorities than me, and that results in different viewpoints. But not judging gets pretty hard between March and November every 4 years. At least 4 or 5 times a day I click the unfollow link when something particularly offensive (to me) shows up in my feed.

But I don’t hit the button to actually unfollow someone. I use the fact that I was triggered by someone as an opportunity to pause and reflect on why that specific headline, post, link, or opinion bothers me so much. Most of the time it’s just exhaustion. If I see one more thing about a huge fence or bringing manufacturing jobs back to the US, I’m going to scream. I get these are real issues which warrant discussion. But in a world with a 24/7 media cycle, the discussion never ends.

I’m not close-minded, although it may seem that way. I’m certainly open to listening to other candidates’ views, mostly to understand the other side of the discussion and continually refine and confirm my own positions. But I have some fundamental beliefs that will not change. And no, I’m not going to share them here (that third rail again!). I know that rational people can disagree, and that doesn’t mean I don’t respect them, or that I don’t want to work together or hang out and drink beer. It just means I don’t want to talk about religion or politics.


Photo credit: “Laugh-Out-Loud Cats #2204” from Ape Lad

Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Resilient Cloud Network Architectures

Shadow Devices

Building a Vendor IT Risk Management Program

Securing Hadoop

SIEM Kung Fu

Recently Published Papers

Incite 4 U

  1. That depends on your definition of consolidation: Stiennon busts out his trusty spreadsheet of security companies and concludes that the IT security industry is not consolidating. He has numbers. Numbers! That prove there is a steadily increasing number of companies ‘selling’ security. I guess that is one way to look at it. I think it’s a pretty myopic way of assessing an industry, but hey, what do I know? Here’s a fact. Of the 1,400+ companies on Stiennon’s list, how many are actually selling anything (above $10MM in sales, or even $1MM for that matter)? How many will be around in 18 months? 12 months? Does it even matter how many companies are in the space? I say no, because here’s what I hear all the time. Security pros want to deal with fewer vendors. Period. It turns out that Mr. Market is not wrong over the long term. There will be a shake-out in the industry, and it will begin soon. Maybe the total number of companies will continue to increase. Evidently there is an infinite number of crappy, undifferentiated ideas for security companies. The real question is what happens to share of wallet, and I’m confident that buyers will be consolidating their spending. – MR

  2. Which way do I go? When it comes to threat analytics, there are many, many services out there. For firms wishing to mine their own data, there are lots of technologies to parse dissimilar data types, and many platforms to do “big data analytics”. But at Dark Reading Kelly Jackson Higgins points out that Threat Intelligence has a Big Data Problem as firms are getting too much of a good thing. Enterprise security’s problem is not a lack of data, analytics tools, or threat feeds. Nor is it a question of whether an in-house SOC is better than external services. It’s not even about false negatives or even false positives per se – rather the key questions are “Which analyzed data should I pay attention to?” and “How do I really prioritize?” Having 11 threat feeds is at best information overload, and third party ‘criticality’ rankings are often out of touch with the real risks a company faces. It’s classic analysis paralysis, as firms figure out which reports are meaningful, map them to risks, and then figure out what they can address in… let’s call it an “enterprise time frame”: the coming year. Having the analysis is the first step, but making it useful is still in the works. We’ll get back to you on dealing with the problems. – AL

  3. Sometimes it is better to be lucky than good… And that is the story of my career, and most of the folks I know. I really liked this coverage on the RSA Conference blog of the Centrify CEO’s talk on how his company was targeted by financial fraud and it almost worked. You have heard this story a hundred times. Email comes from the CEO to accounting to initiate a funds transfer. The funds are moved, and then it becomes apparent that the request was fraudulent. Thankfully Centrify had a policy that required multiple approvals on substantial wire transfers, otherwise the money would be gone. Kudos for sharing. And this is a good reminder that separation of duties and multiple approvals are good things. – MR

  4. Big data, small adoption: O’Reilly published Spiderbook’s research on the size of The Big Data Market, conducted by data mining public press releases, forums, job postings, and the like. And there is some interesting data in the survey: results suggest that only a small percentage of companies in the world use Hadoop, and many that do struggle due to a shortage of technical talent. Does anyone not lack technical talent today? Spiderbook went on to say that outside financial services adoption is low, but they found that large firms are much more likely to embrace big data than small ones. To which I say “Duh!” – large enterprises with huge amounts of interesting data went looking for something that could provide analytics at scale without costing millions of dollars. Our research shows large enterprises have a better than 50% Hadoop adoption rate, but it is only being used for select new projects. I maintain Spiderbook’s adoption numbers are likely on the low side: organizations are not particularly open about their use of Hadoop given the ‘skunkworks’ nature of many projects, and attendance at industry trade shows for the major commercial Hadoop vendors suggests more companies are actively running it – either that, or too much idiotic VC money is being burned on campfires. Which is our way of saying you probably have Hadoop in your company, and if you’re in IT security, you’ll be dealing with a cluster full of sensitive data sooner rather than later. Get ready. – AL

  5. Is anything good or bad? I generally refute the premise of this recent NetworkWorld article: Is DevOps good or bad for security? The reality is that some aspects of DevOps (just like anything else) are ‘good’ while others are ‘bad.’ Getting poked in the eye is ‘bad’, right? Unless it’s an ophthalmologist removing a cataract. Then it’s good. DevOps clearly puts pressure on security teams. That may be bad because there is a distinct lack of skills. But it’s good because it forces security teams to embrace automation, and insert security into development and deployment processes. That article is basically a lovefest talking about the benefits of DevOps, but if you do it wrong it’s a security train wreck. As with most things, there are no absolutes. And we believe in the long-term value of DevOps (we’re betting the company on it), but we aren’t naive – we know that there are challenges. – MR

  6. BONUS: Generalissimo Franco is still dead! This breaking news just in: Oracle stuns the industry by moving ‘The Cloud’, big data and DevOps – all of it – into a single machine. News at 11. – AL

—Mike Rothman

Tuesday, March 29, 2016

Resilient Cloud Network Architectures: Design Patterns

By Mike Rothman

We introduced resilient cloud networks in this series’ first post. We define them as networks using cloud-specific features to provide both stronger security and higher availability for your applications. This post will dig into two different design patterns, and show how cloud networking enables higher resilience.

Network Segregation by Default

Before we dive into design patterns let’s make sure we are all clear on using network segmentation to improve your security posture, as discussed in our first post. We know segmentation isn’t novel, but it is still difficult in a traditional data center. Infrastructure running different applications gets intermingled, just to efficiently use existing hardware. Even in a totally virtualized data center, segmentation requires significant overhead and management to keep all applications logically isolated – which makes it rare.

What is the downside of not segmenting properly? It offers adversaries a clear path to your most important stuff. They can compromise one application and then move deeper into your environment, accessing resources not associated with the application stack they first compromised. So if they bust one application, there is a high likelihood they’ll end up with free rein over everything in the data center.

The cloud is different. Each server in a cloud environment is associated with a security group, which defines with very fine granularity which other devices it can communicate with, and over what protocols. This effectively enables you to contain an adversary’s ability to move within your environment, even after compromising a server or application. This concept is often called limiting blast radius. So if one part of your cloud environment goes boom, the rest of your infrastructure is unaffected.
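The blast radius idea can be sketched in a few lines. This is a toy model of security-group-style default-deny reachability, not any cloud provider’s actual API or data model; the group names and rule format are made up for illustration.

```python
# Minimal sketch of security-group reachability. Each rule allows
# traffic from a source group to a destination group on one
# protocol/port; anything not explicitly allowed is denied.
ALLOW_RULES = [
    # (source group, destination group, protocol, port)
    ("web-sg", "app-sg", "tcp", 8080),
    ("app-sg", "db-sg", "tcp", 5432),
]

def can_reach(src_group, dst_group, protocol, port):
    """Default deny: traffic passes only if an explicit rule allows it."""
    return (src_group, dst_group, protocol, port) in ALLOW_RULES

# A compromised web server can reach the app tier on its service port...
print(can_reach("web-sg", "app-sg", "tcp", 8080))  # → True
# ...but cannot reach the database directly, containing the blast radius.
print(can_reach("web-sg", "db-sg", "tcp", 5432))   # → False
```

Contrast this with a flat data center network, where the implicit rule set is effectively “everything can reach everything” once an attacker is inside.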

This is a key concept in cloud network architecture, highlighted in the design patterns below.

PaaS Air Gap

To demonstrate a more secure cloud network architecture, consider an Internet-facing application with both web server and application server tiers. Due to the nature of the application, communications between the two layers are through message queues and notifications, so the web servers don’t need to communicate directly with each other. The application server tier connects to the database (a Platform as a Service offering from the cloud provider). The application server tier also communicates with a traditional data center to access other internal corporate data outside the cloud environment.

An application must be architected from the get-go to support this design. You aren’t going to redeploy your 20-year-old legacy general ledger application to this design. But if you are architecting a new application, or can rearchitect existing applications, and want total isolation between environments, this is one way to do it. Let’s describe the design.


Network Security Groups

The key security control typically used in this architecture is a Network Security Group, allowing access to the app servers only from the web servers, and only to the specific port and protocol required. This isolation limits blast radius. To be clear, the NSG is applied individually to each instance – not to subnets. This avoids a flat network, where all instances within a subnet have unrestricted access to all subnet peers.

PaaS Services

In this application you wouldn’t open access from the web server NSG to the app server NSG, because the architecture doesn’t require direct communication between web servers and app servers. Instead the cloud provider offers a message queue platform and notification service which provide asynchronous communication between the web and application tiers. So even if the web servers are compromised, the app servers are not accessible.
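The air-gap flow can be illustrated with an in-process queue. In a real deployment the queue would be a managed cloud service reached over its own API, not a Python object — this sketch (with hypothetical function and message names) only shows why the tiers need no direct network path to each other.

```python
import queue

# Toy model of the air-gap pattern: the web tier and app tier never
# connect directly; they exchange work through a message queue.
work_queue = queue.Queue()

def web_tier_handle(request):
    """Web tier validates and enqueues work; it holds no app-tier address."""
    work_queue.put({"action": "create_order", "payload": request})

def app_tier_drain():
    """App tier pulls messages asynchronously and processes them."""
    processed = []
    while not work_queue.empty():
        processed.append(work_queue.get())
    return processed

web_tier_handle({"item": "widget", "qty": 2})
print(len(app_tier_drain()))  # → 1
```

Because the only shared dependency is the queue service, a compromised web server has nothing to connect to on the app tier — there is no open port to attack.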

Further isolation is offered by a PaaS database, also offered by the cloud service provider. You can restrict requests to the PaaS DB to specific Network Security Groups. This ensures only the right instances can request information from the database service, and all requests are authorized.

Connection to the Data Center

The application may require data from the data center, so the app servers have access to the needed data through a VPN. You route all traffic to the data center through this inspection and control point. Typically it’s better not to route cloud traffic through inspection bottlenecks, but in this design pattern it’s not a big deal, because the traffic needs to pass over a specific egress connection to the data center, so you might as well inspect there as well.

You ensure ingress traffic over that connection can only go to the app server security group. This ensures that an adversary who compromises your network cannot access your whole cloud network by bouncing through your data center.

Advantages of This Design

  • Isolation between Web and App Servers: By putting each auto-scale group in its own Network Security Group, you restrict what it can access, so compromising one tier does not expose the others.

  • No Direct Connection: In this design pattern you can block direct traffic to the application servers from anywhere but the VPN. Intra-application traffic is asynchronous via the message queue and notification service, so isolation is complete.

  • PaaS Service: This architecture uses cloud provider services, with strong built-in security and resilience. Cloud providers understand that security and availability are core to their business.

What’s next for this kind of architecture? To advance this architecture you could deploy mirrors of the application in different zones within a region to limit the blast radius in case one device is compromised, and to provide additional resiliency in case of a zone failure.

Additionally, if you use immutable servers within each auto-scale group, you can update/patch/reconfigure instances automatically by changing the master image and having auto-scaling replace the old instances with new ones. This limits configuration drift and adversary persistence.
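The replace-don’t-patch cycle amounts to a simple rule: any instance not built from the current master image gets terminated and replaced. The sketch below uses a hypothetical data model (instance IDs and image names are invented), not a real auto-scaling API.

```python
# Sketch of immutable-infrastructure replacement: instances are never
# patched in place; when the master image changes, auto-scaling
# replaces any instance built from an older image.
def instances_to_replace(instances, master_image):
    """Return IDs of instances not running the current master image."""
    return [i["id"] for i in instances if i["image"] != master_image]

fleet = [
    {"id": "i-001", "image": "ami-v1"},
    {"id": "i-002", "image": "ami-v2"},
    {"id": "i-003", "image": "ami-v1"},
]

# After promoting ami-v2 as the new master image, the v1 instances are
# terminated and auto-scaling launches fresh v2 replacements.
print(instances_to_replace(fleet, "ami-v2"))  # → ['i-001', 'i-003']
```

Since no instance is ever modified after launch, an attacker who gains a foothold on one loses it as soon as the next image rotation cycles it out.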

Multi-Region Web Site

This architecture was designed to deploy a website in multiple regions, with availability as close to 100% as possible. This design is far more expensive than even running in multiple zones within a single region, because you need to pay for network traffic between regions (compared to free intra-region traffic); but if uptime is essential to your business, this architecture improves resiliency.

Availability Architecture

This is an externally facing application so you run traffic through a cloud WAF to get rid of obvious attack traffic. Inbound sessions can be routed intelligently to either region via the DNS service. So you can route based on server utilization, network traffic, geography, or a variety of other criteria. This is a great example of software-defined security, programming traffic distribution within cloud stacks.

  • Network Security Groups: In this design pattern you implement Network Security Groups to lock down traffic into the app servers. That isn’t shown specifically because it would greatly complicate the diagram. But Network Security Groups for the web and app servers should be part of this architecture.
  • Compute Layer: Application and web servers are in auto-scale groups within each region. The load balancer distributes the sessions between the regions intelligently as well, to ensure the most effective usage of the web site.
  • Database Layer: If this was just a multi-zone deployment you wouldn’t need to worry about database replication, as that capability is built into PaaS databases. But that is not the case here. You are operating in multiple regions, so you need to replicate your databases. It is like having two separate data centers. We cannot tackle the network architecture to support database replication here, because that would also overcomplicate the architecture. We just need to point out another way operating in multiple regions adds complexity.
  • Static Files: This website of course includes a variety of static files, so you need to figure out how to keep the file stores in sync between regions as well. Using Network Security Groups, you can lock access to the storage buckets down to specific groups or instances. That’s a good way to make sure you don’t get malware files loaded up onto your website. Cross-region replication is a service your cloud provider may offer so you don’t need to build it yourself.

Advantages of This Design

This architecture is primarily intended to show how the cloud enables you to easily establish an application in multiple regions, similar to multiple data centers. So what’s unique about the cloud? You can take the entire stack in Region A and copy it to Region B with a few clicks in your cloud console. Of course you’d have some networks to reconfigure, but in most ways the environment is identical. Auto-scale groups work off the same images, so the operational overhead of supporting multiple cloud regions is drastically lower than operating across multiple data centers.

There are also significant availability advantages. In case a region goes down, the DNS service will detect that automatically and route all new incoming sessions to the available region. Existing sessions in the region will be lost, so there is some collateral damage, but that happens any time you lose a data center. Those sessions can be re-established to the available region. When the region recovers DNS will automatically start sending new sessions to it. This is all transparent to the application and users.
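The failover behavior described above boils down to health-check-driven routing. Here is a toy model of what the managed DNS service does for you — the region names and status values are invented for illustration, and a real DNS service performs the health checks itself.

```python
# Sketch of health-check-driven DNS failover across regions.
def route_session(regions, preferred):
    """Send new sessions to the preferred region if healthy,
    otherwise fail over to any healthy region."""
    if regions.get(preferred) == "healthy":
        return preferred
    for region, status in regions.items():
        if status == "healthy":
            return region
    raise RuntimeError("no healthy region available")

regions = {"us-east": "healthy", "us-west": "healthy"}
print(route_session(regions, "us-east"))  # → us-east

regions["us-east"] = "down"               # region failure detected
print(route_session(regions, "us-east"))  # → us-west

regions["us-east"] = "healthy"            # recovery: traffic returns
print(route_session(regions, "us-east"))  # → us-east
```

The application never sees any of this — new sessions simply land in whichever region is healthy, which is exactly the transparency described above.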

You can also provide a central spot for both static files and logs by using cross-region replication. This capability is currently specific to Amazon Web Services, but we expect it to be a critical feature on all cloud infrastructure platforms at some point. As is increasingly the case in the cloud, services which previously required extreme planning and/or additional products to manage, are now built-in platform features.

That’s a good note on which to wrap up this quick series. The cloud provides many capabilities that enable you to deploy applications significantly more securely and reliably than rolling them out in your own datacenter. Of course there will be resistance to the new way of thinking – there always is. But you can combat this resistance by using information like the suggestions in this series (and in our other blog posts and reference architectures) to highlight the obvious advantages of these new technologies.

—Mike Rothman

Securing Hadoop: Security Recommendations for Hadoop [New Paper]

By Adrian Lane

We are pleased to release our updated white paper on big data security: Securing Hadoop: Security Recommendations for Hadoop Environments. Just about everything has changed in the four years since we published the original. Hadoop has solidified its position as the dominant big data platform, by constantly advancing in function and scale. While the ability to customize a Hadoop cluster to suit diverse needs has been its main driver, the security advances make Hadoop viable for enterprises. Whether embedded directly into Hadoop or deployed as add-on modules, services like identity, encryption, log analysis, key management, cluster validation, and fine-grained authorization are all available. Our goal for this research paper is first to introduce these technologies to IT and security teams, and also to help them assemble these technologies into a coherent security strategy.


This research project provides a high-level overview of security challenges for big data environments. From there we discuss security technologies available for the Hadoop ecosystem, and then sketch out a set of recommendations to secure big data clusters. Our recommendations map threats and compliance requirements directly to supporting technologies to facilitate your selection process. We outline how these tactical responses work within the security architectures which firms employ, tailoring their approaches to the tools and technical talent on hand.

Finally, we would like to thank Hortonworks and Vormetric for licensing this research. Without firms who appreciate our work enough to license our content, we could not bring you quality research free! We hope you find this research helpful in understanding big data and its associated security challenges.

You can download a free copy of the white paper from our research library, or grab a copy directly: Securing Hadoop: Security Recommendations for Hadoop Environments (PDF).

—Adrian Lane

Friday, March 25, 2016

Resilient Cloud Network Architectures: Fundamentals

By Mike Rothman

As much as we like to believe we have evolved as a species, people continue to be scared of things they don’t understand. Yes, many organizations have embraced the cloud whole hog and are rushing headlong into the cloud age. But it’s a big world, and millions of others remain paralyzed – not really understanding cloud computing, and taking the general approach that it can’t be secure because, well, it just can’t. Or it’s too new. Or for some other unfounded and incorrect reason. Kind of like when folks insisted that the Earth was the center of the universe.

This blog series builds on our recent Pragmatic Security for Cloud and Hybrid Networks paper, focusing on cloud-native network architectures that provide security and availability in ways you cannot accomplish in a traditional data center. This evolution will take place over the next decade, and organizations will need to support hybrid networks for some time.

But for those ready, willing, and able to step forward into the future today, the cloud is waiting to break the traditional rules of how technology has been developed, deployed, scaled, and managed. We have been aggressive in proselytizing our belief that the move towards the cloud is the single biggest disruption in technology for the next few decades. Yes, even bigger than the move from mainframes to client/server (we’re old – we know). So our Resilient Cloud Network Architectures series will provide the basics of cloud network security, with a few design patterns to illustrate.

We would like to thank Resilient Systems for provisionally agreeing to license the content in this paper. As always, we’ll build the content using our Totally Transparent Research methodology, which means we will post everything to the blog first, and allow you (our readers) to poke holes in it. Once it has been sufficiently prodded, we will publish a paper for your reference.

Defining Resilient

If we bust out the old dictionary to define resilient, we get:

  • able to become strong, healthy, or successful again after something bad happens
  • able to return to an original shape after being pulled, stretched, pressed, bent, etc.

In the context of computing, you want to deploy technology that can not just become strong again, but resist attack in the first place. Recoverability is also key: if something bad happens you want to return service quickly, if it causes an outage at all. For network architecture we always fall back on the cloud computing credo: Design for failure. A resilient network architecture both makes it harder to compromise an application and minimizes downtime in case of an issue.

Key aspects of cloud computing which provide security and availability include:

  • Network Isolation: Using the inherent ability of the cloud to restrict connections (via software firewalls, which are called security groups and described below), you can build a network architecture that fully isolates the different tiers of an application stack. That prevents a compromise in one application (or database) from leaking or attacking information stored in another.
  • Account Isolation: Another important feature of the cloud is the ability to use multiple accounts per application. Each of your different environments (Dev, Test, Production, Logging, etc.) can use different accounts, which provides valuable isolation because you cannot access cloud infrastructure across accounts without explicit authorization.
  • Immutability: An immutable server is one that is never logged into or changed in production. In cloud-native DevOps environments servers are deployed in auto-scale groups based on standard images. This prevents human error and configuration drift from creating exploitation paths. You take a new known-good state, and completely replace older images in production. No more patching and no more logging into servers.
  • Regions: You could build multiple data centers around the world to provide redundancy. But that’s not a cheap option, and rarely feasible. To do the same thing in the cloud, you basically just replicate an entire environment in a different region via an API call or a couple clicks in a cloud console. Regions are available all over the world, with multiple availability zones within each, to further minimize single points of failure. You can load balance between zones and regions, leveraging auto-scaling to keep your infrastructure running the same images in real time. We will explain this design pattern in our next post.
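To make the network isolation idea concrete, the tier-to-tier restrictions above can be modeled as a small default-deny rule table. This is a hypothetical sketch in plain Python, not any provider’s actual security group API – the tier names and ports are invented for illustration:

```python
# Hypothetical model of security-group-style tier isolation.
# Each entry names a source tier, destination tier, and port that
# are explicitly permitted. Anything not listed is denied.
ALLOWED_FLOWS = {
    ("internet", "web", 443),   # users reach the web tier over HTTPS
    ("web", "app", 8080),       # web tier talks to the app tier
    ("app", "db", 5432),        # only the app tier reaches the database
}

def is_allowed(src_tier: str, dst_tier: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (src_tier, dst_tier, port) in ALLOWED_FLOWS

# A compromised web instance cannot reach the database directly:
print(is_allowed("web", "db", 5432))       # False
print(is_allowed("internet", "web", 443))  # True
```

The point of the sketch is the default-deny posture: a breach of the web tier doesn’t automatically grant a path to the data tier, which is exactly the isolation benefit described above.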

The key takeaway is that cloud computing provides architectural options which are either impossible or economically infeasible in a traditional data center, to provide greater protection and availability. In this series we will describe the fundamentals of cloud networking for context, and then dig into design patterns which provide both security and availability – which we define as resilience.

Understanding Cloud Networks

The key difference between a network in your data center and one in the cloud is that cloud customers never access the ‘real’ network or hardware. Cloud computing uses virtual networks to abstract the networks you see and manage from the (invisible) underlying physical resources. When your server gets an IP address, that IP address does not exist on routing hardware – it’s a virtual address on a virtual network. Everything is handled in software.

Cloud networking varies across cloud providers, but differs from traditional networks in visibility, management, and velocity of change. You cannot tap into a cloud provider’s virtual network, so you’ll need to think differently to monitor your networks. Additionally, cloud networks are typically managed via scripts or programs making Application Programming Interface (API) calls, rather than through a graphical console or command line. That enables developers to do pretty much anything, including standing up networks and reconfiguring them – instantly, via code.

Finally, cloud networks change much faster than physical networks because cloud environments change faster, including spinning up and shutting down servers via automation. So traditional workflows to govern network change don’t really map to your cloud network. It can be confusing because cloud networks look like traditional networks, with their own routing tables and firewalls. But looks are deceiving – although familiar constructs have been carried over, there are fundamental differences.

Cloud Network Architectures

In order to choose the right solution to address your requirements, you need to understand the types of cloud network architectures and the different technologies that enable them. There are two basic types of cloud network architectures:

  • Public Cloud Networks: These are entirely Internet-facing. You connect to your instances (servers) via the public Internet with no special routing; every instance has a public IP address.
  • Private Cloud Networks: Also called “virtual private clouds” or VPCs, these look like internal LANs using private IP addresses. You access these networks via some kind of non-public connection – typically a VPN.
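The public/private distinction above is visible right in the addressing. Python’s standard ipaddress module can tell you whether an address is in the RFC 1918 private ranges a VPC typically uses (the specific addresses here are just examples):

```python
import ipaddress

# RFC 1918 addresses, typical of a VPC's internal LAN, report as
# private; a routable Internet address does not.
for addr in ("10.0.1.5", "172.16.8.20", "192.168.1.10", "52.4.1.1"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

Instances in a public cloud network carry routable addresses like the last one; instances in a VPC live in the private ranges and need a VPN or gateway to be reached.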

Cloud networks are enabled and supported by the following technologies:

  • Internet Gateways: A gateway connects your cloud network to the Internet. You don’t normally manage it directly – your cloud provider handles that for you, using its own tools to move packets from ‘your’ internal network to the Internet.
  • Internal Gateways: These devices connect existing datacenters to your private cloud network. You access networks via a VPN provided by the cloud provider or a direct connection, which looks a lot like a traditional point-to-point connection from days gone by.
  • Virtual Private Networks: You can also set up your own overlay network to bridge your private and public cloud networks within your cloud provider. This provides a private segment with access for users, developers, and administrators.

These terms will come into play when we present design patterns in our next post.

Network Security Controls

Your network is different in the cloud, so your network security controls will be different as well. But you can take some comfort from the familiar categories. Cloud network controls fall generally into five buckets:

  1. Perimeter Security: These controls generally provide coarse protection from very common network-based attacks, including Denial of Service. Your cloud provider provides and manages these controls; you have no visibility or control.
  2. Software Firewalls: These firewalls are built into the cloud platform (they are called security groups in AWS) and protect cloud assets such as instances. They offer basic access control via ports/protocols and sources/destinations, and are designed to handle auto-scaling and cloud environments. They combine the best of network and host firewalls, allowing you to deploy policies on individual servers (or even network interfaces) like a host firewall, but manage them like network firewalls. They will be your main tool to provide virtual network isolation, described above.
  3. Access Control Lists: While a software firewall works at a per-instance (or per-object) level, ACLs restrict communications between subnets of your virtual network. Old-school networking folks will be familiar with using ACLs to control access into and out of the subnets in a (virtual) cloud network.
  4. Virtual Appliances: A number of traditional network security tools, including IDS/IPS, WAF, and NGFW, are available as virtual appliances to improve network security, but they require you to route cloud traffic through these devices.
  5. Host Security Agents: These agents are built into immutable server images, and provide visibility and protection to each server/instance in a cloud environment.
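A key property of these controls is that they compose rather than replace each other: a packet must clear the subnet ACL and then the instance’s security group. Here is a hypothetical sketch of that layering in Python – the subnet, ports, and addresses are invented, not any provider’s defaults:

```python
import ipaddress

# Hypothetical subnet ACL: which source networks may reach this subnet.
APP_SUBNET_ACL = [ipaddress.ip_network("10.0.1.0/24")]  # web tier only

# Hypothetical security group on an app instance: allowed ports.
APP_SG_PORTS = {8080}

def packet_allowed(src_ip: str, dst_port: int) -> bool:
    """A packet must clear the subnet ACL *and* the instance's
    security group -- the controls stack, they don't replace each other."""
    src = ipaddress.ip_address(src_ip)
    if not any(src in net for net in APP_SUBNET_ACL):
        return False                     # blocked at the subnet boundary
    return dst_port in APP_SG_PORTS      # then checked per instance

print(packet_allowed("10.0.1.17", 8080))  # True: right subnet, right port
print(packet_allowed("10.0.9.4", 8080))   # False: wrong subnet
print(packet_allowed("10.0.1.17", 22))    # False: security group blocks SSH
```

Real providers evaluate these layers in their network fabric, of course, but the logic is the same: coarse restrictions at the subnet boundary, fine-grained ones per instance.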

The thing about cloud networking is that you don’t need to apply the same controls, or even configurations, to an entire network. You can make architectural and security control decisions per project. You might decide an entirely cloud-based VPC is best for one application, while for another you choose to build an overlay VPN to connect a totally different VPC to your datacenter to support a hybrid environment. You might need to route one application’s traffic through an inspection point to prevent data leakage, while for another you rely exclusively on security groups to provide full isolation between different layers of your cloud stack. The permutations are infinite, and provide flexibility you cannot have in your data center.

These fundamentals should provide the context you’ll need to understand the design patterns we will present in our next post.

—Mike Rothman

Thursday, March 24, 2016

Incite 3/23/2016: The Madness

By Mike Rothman

I’m not sure why I do it, but every year I fill out brackets for the annual NCAA Men’s College basketball tournament. Over all the years I have been doing brackets, I have won once. And it wasn’t a huge pool. It was a small pool in my office, when I used to work in an office, so the winnings probably didn’t even amount to a decent dinner at Fuddrucker’s. I won’t add up all my spending or compare it against my winnings, because I don’t need a PhD in Math to determine that I am way below the waterline.

Like anyone who always questions everything, I should be asking myself why I continue to play. I’m not going to win – I don’t even follow NCAA basketball. I’d have better luck throwing darts at the wall. So clearly it’s not a money-making endeavor.


I guess I could ask the same question about why I sit in front of a Wheel of Fortune slot machine in a casino. Or why I buy PowerBall tickets when the pot goes above $200MM. I understand statistics – I know I’m not going to win slots (over time) or the lottery (ever).

They call the NCAA tournament March Madness – perhaps because most people get mad when their brackets blow up on the second day of the tournament when the team they picked to win it all loses to a 15 seed. Or does that just happen to me? But I wasn’t mad. I laughed because 25% of all brackets had Michigan State winning the tournament. And they were all as busted as mine.

These are rhetorical questions. I play a few NCAA tournament brackets every year because it’s fun. I get to talk smack to college buddies about their idiotic picks. I play the slots because my heart races when I spin the wheel and see if I got 35 points or 1,000. I play the lottery because it gives me a chance to dream. What would I do with $200MM?

I’d do the same thing I’m doing now. I’d write. I’d sit in Starbucks, drink coffee, and people-watch, while pretending to write. I’d speak in front of crowds. I’d explore and travel with my loved ones. I’d still play the brackets, because any excuse to talk smack to my buddies is worth the minimal donation. And I’d still play the lottery. And no, I’m not certifiable. I just know from statistics that I wouldn’t have any less chance to win again just because I won before. Score 1 for Math.


Photo credit: “Now, that is a bracket!” from frankieleon

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Shadow Devices

Building a Vendor IT Risk Management Program

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Incite 4 U

  1. Enough already: Encryption is a safeguard for data. It helps ensure data is used the way its owner intends. We work with a lot of firms – helping them protect data from rogue employees, hackers, malicious government entities, and whoever else may want to misuse their data. We try to avoid touching political topics on this blog, but the current attempt by US Government agencies to paint encryption as a terrorist tool is beyond absurd. They are effectively saying security is a danger, and that has really struck a nerve in the security community. Forget for a minute that the NSA already has all the data that moves on and off your cellphone, and that law enforcement already has the means to access the contents of iPhones without Apple’s assistance. And avoid wallowing in counter-examples where encryption aided freedom, or illustrations of misuse of power to inspire fear in the opposite direction. These arguments devolve into pig-wrestling – only the pig enjoys that sort of thing. As Rich explained in Do We Have a Right To Security?, this is a simple question of whether anyone (companies or individuals) can have security. Currently the US government (at least the executive branch) says ‘No!’ – as does the UK government. – AL

  2. The US blinks… Following up on Adrian’s rant above, the US government decided they may not need Apple to open the San Bernardino iPhone after all. Evidently a third party would be happy to sell the US government either an exploit or another means to get access to the locked phone. Duh. Like we didn’t already know that was possible. As many of us argued, this case was much more about establishing a precedent for the FBI than about accessing that specific phone. Now that it looks like an uphill climb to win that motion, it’s time to save face and do what they should have done in the first place: pay someone to break the phone, if they think it’s that important. We have huge respect for law enforcement and what they do, but we could do with less grandstanding and fewer backdoors. Backdoors are stupid. – MR

  3. Hindsight is 20/20: Into the Beretta files goes the case of Ryan Collins, who was behind the attacks on celebrity iPhones. This is the attacker who stole the pictures. It’s not clear how much he made by selling them, but it was probably not worth the felony violation he will plead to or the associated jail time. They are still looking for the person who actually posted the pictures. But that guy is even dumber – he didn’t make any money, apparently because content wants to be free. All I have to say is: idiots. – MR

  4. Getting chippy: Better than 75% of stores I go into still have tape over the EMV chip card slots on their payment terminals. While it seems merchants are tardy in getting their work done, it’s not always that they are dragging their feet – it may also be the card networks. It appears some merchants who are actively processing EMV cards are getting charged for fraud and chargeback fees because they have yet to complete a certification audit by the card networks. To reverse these charges, one supermarket chain filed suit, and is pushing for quick certification. The suit may halt the “liability shift” entirely, which has gotten the card brands’ attention. This entire game of “Pass The Liability” will continue to entertain us until we stop passing credit card numbers around. – AL

  5. Security faith healers: Adam Shostack posted an interesting piece at Dark Reading about how the concepts in The Gluten Lie apply to security. In a nutshell, the health industry has vilified gluten, and besides the people who have legitimate celiac disease, the data doesn’t seem to support the general position that gluten is bad. Adam makes the analogy that telling people to be secure isn’t going to help. Nor is telling them not to do things (like surf pr0n). And folks should drop the fear-based marketing. Yeah, right. A lot of technology marketing is selling snake oil, and it’s as bad in security as anywhere else. But as long as a tactic works (including vilifying gluten to sell more gluten-free stuff) free market economics say that that tactic will continue to be used. Go figure. – MR

—Mike Rothman

Wednesday, March 23, 2016

Shadow Devices: The Exponentially Expanding Attack Surface [New Series]

By Mike Rothman

One of the challenges of being security professionals for decades is that we actually remember the olden days. You remember, when Internet-connected devices were PCs; then we got fancy and started issuing laptops. That’s what was connected to our networks. If you recall, life was simpler then. But we don’t have much time for nostalgia. We are too busy getting a handle on the explosion of devices connected to our networks, accessing our data.

Here is just a smattering of what we see:

  • Mobile devices: Supporting smartphones and tablets seems like old news, mostly because you can’t remember a time when they weren’t on your network. But despite their short history, their impact on mobile networking and security cannot be overstated. What’s more challenging is how these devices can connect directly to the cellular data network, which gives them a path around your security controls.
  • BYOD: Then someone decided it would be cheaper to have employees use their own devices, and Bring Your Own Device (BYOD) became a thing. You can have employees sign paperwork giving you the ability to control their devices and install software, but in practice they get (justifiably) very cranky when they cannot do something on their personal devices. So balancing the need to protect corporate data against antagonizing employees has been challenging.
  • Other office devices: Printers and scanners have been networked for years. But as more sophisticated imaging devices emerged, we realized their on-board computers and storage were insecure. They became targets, attacker beachheads.
  • Physical security devices: The new generation of physical security devices (cameras, access card readers, etc.) is largely network connected. It’s great that you can grant access to a locked-out employee, from your iPhone on the golf course, but much less fun when attackers grant themselves access.
  • Control systems and manufacturing equipment: The connected revolution has made its way to shop floors and facilities areas as well. Whether it’s a sensor collecting information from factory robots or warehousing systems, these devices are networked too, so they can be attacked. You may have heard of StuxNet targeting centrifuge control systems. Yep, that’s what we’re talking about.
  • Healthcare devices: If you go into any healthcare facility nowadays, monitoring devices and even some treatment devices are managed through network connections. There are jokes to be made about taking over shop floor robots and who cares. But if medical devices are attacked, the ramifications are significantly more severe.
  • Connected home: Whether it’s a thermostat, security system, or home automation platform – the expectation is that you will manage it from wherever you are. That means a network connection and access to the Intertubes. What could possibly go wrong?
  • Cars: Automobiles can now use either your smartphone connection or their own cellular link to connect to the Internet for traffic, music, news, and other services. They can transmit diagnostic information as well. All cool and shiny, but recent stunt hacking has proven a moving automobile can be attacked and controlled remotely. Again, what’s to worry?

There will be billions of devices connected to the Internet over the next few years. They all present attack surface. And you cannot fully know what is exploitable in your environment, because you don’t know about all your devices.

The industry wants to dump all these devices into a generic Internet of Things (IoT) bucket because IoT is the buzzword du jour. The latest Chicken Little poised to bring down the sky. It turns out the sky has already fallen – networks are already too vast to fully protect. The problem is getting worse by the day as pretty much anything with a chip in it gets networked. So instead of a manageable environment, you need to protect Everything Internet.

Anything with a network address can be attacked. Fortunately better fundamental architectures (especially for mobile devices) make it harder to compromise new devices than traditional PCs (whew!), but sophisticated attackers don’t seem to have trouble compromising any device they can reach. And that says nothing of devices whose vendors have paid little or no attention to security to date. Healthcare and control system vendors, we’re looking at you! They have porous defenses, if any, and once an attacker gains presence on the network, they have a bridgehead to work their way to their real targets.

In the Shadows

So what? You don’t even have medical devices or control systems – why would you care? The sad fact is that what you don’t see can hurt you. Your entire security program has been built to protect what you can see with traditional discovery and scanning technologies. The industry has maintained a very limited concept of what you should be looking for – largely because that’s all security scanners could see. The current state of affairs is that you run scans every so often and see new devices emerge. You test them for configuration issues and vulnerabilities, and then you add those issues to the end of an endless list of things you’ll never have time to finish.

Unfortunately visible devices are only a portion of the network-connected devices in your environment. There are hundreds, if not thousands, of other devices you don’t know about on your network. You don’t scan them periodically, and you have no idea about their security posture. Each of them can be attacked, and may provide an adversary a presence in your environment. Your attack surface is much larger than you thought.

These shadow devices are infrequently discussed, and rarely factored into discovery and protection programs. It’s a big Don’t Ask, Don’t Tell approach, which never seems to work out well in the end.

We haven’t yet published anything on IoT devices (or Everything Internet), but it’s time. Not because we currently see many attacks in the wild. But most organizations we talk to are unprepared for when an attack happens, so they will scramble – as usual. We have espoused a visibility, then control approach to security for over a decade. Now it’s time to get a handle on the visibility of all devices on your network, so when you need to, you will know what you have to control. And how to control it.
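To give a flavor of the visibility step, a crude device sweep can be sketched with nothing but the standard library: try a TCP connection to each candidate address and record what answers. Real discovery products are far more sophisticated (passive monitoring, fingerprinting, agentless interrogation), and the subnet below is purely illustrative – only scan networks you own:

```python
import socket
import ipaddress

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # refused, unreachable, or timed out
        return False

def sweep(subnet: str, ports=(22, 80, 443)):
    """Walk every host in a subnet and report which ports answered."""
    found = {}
    for ip in ipaddress.ip_network(subnet).hosts():
        open_ports = [p for p in ports if probe(str(ip), p)]
        if open_ports:
            found[str(ip)] = open_ports
    return found

# Example (on a network you own): sweep("10.0.1.0/28")
```

Even a toy sweep like this tends to turn up devices nobody remembers deploying – which is precisely the shadow device problem.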

So our newest series, entitled “Shining a Light on Shadow Devices,” will focus on the risks these devices present to organizations, and how to find them within your environment. We’d like to thank ForeScout Technologies for tentatively agreeing to license the content and, as always, we’ll write this series using our Totally Transparent Research methodology. This ensures you’ll see all of the research as it comes off the assembly line, and you can keep us honest by letting us know if we’re hitting the mark.

Our next post will dig into how these device classes can be attacked.

—Mike Rothman

Friday, March 18, 2016

Summary: Who pays who?

By Adrian Lane

Adrian here…

Apple buying space on Google’s cloud made news this week. Many people were surprised that Apple relies on others to provide cloud services, but Apple has been leveraging AWS and others for years. Our internal chat was alive with discussion about build vs. buy for different providers of cloud services. Perhaps a hundred or so companies have the scale to make a go at building from scratch at this point, and the odds of success for many of those are small. You need massive scale before the costs make it worth building your own – especially the custom engineering required to get equivalent hardware margins. That leaves a handful of firms who can make a go of this, and it’s still not always clear whether they should. Even Apple buys others’ services, and it usually makes good economic sense.

We did not really talk about RSA conference highlights, but the Rugged DevOps event (slides are up) was the highlight of RSAC week for me. The presentations were all thought-provoking. Concepts which were consistently reinforced included:

  • Constantly test, constantly improve
  • Without data you’re just another person with an opinion
  • Don’t update; dispose and improve
  • Micro-services and Docker containers are the basic building blocks for application development today

Micro-services make sense to me, and I have successfully used that design concept, but I have zero practical experience with Docker. Which is a shocker, because it’s freakin’ everywhere, but I have never yet taken the time to learn it. That stops this week. AWS and Azure both support it, and it’s embedded into Big Data frameworks as well, so it’s everywhere I want to be. I saw two vendor presentations on security concerns around Docker deployment models, and yeah, it scares me a bit. But Docker addresses the basic demand for easy updates, packaging, and accelerated deployment, so it stays. Security will iterate improvements to the model over time, as we usually do. DevOps doesn’t fix everything. That’s not me being a security curmudgeon – it’s me being excited by new technologies that let me get work done faster.

—Adrian Lane

Thursday, March 17, 2016

Building a Vendor IT Risk Management Program: Program Structure

By Mike Rothman

As we started exploring when we began Building a Vendor IT Risk Management Program, modern integrated business processes have dramatically expanded the attack surface of pretty much every organization. You can no longer ignore the risk presented by vendors or other business partners, even without regulatory bodies pushing for formal risk management of vendors and third parties. As security program fanatics we figure it’s time to start documenting such a program.

Defining a Program

First, we have never really defined what we mean by a security program. Our bad. So let’s get that down, and then we can tailor it to vendor IT risk management. The first thing a program needs is to be systematic, which means you don’t do things willy-nilly. You plan the work and then work the plan. The processes involved in the program need to be predictable and repeatable. Well, as predictable as anything in security can be. Here are some other hallmarks of a program:

  • Executive Sponsorship: Our research shows a program has a much higher chance of success if there is an executive (not the CISO) who feels accountable for its success. Inevitably security involves changing processes, and maybe not doing things business or other IT groups want because of excessive risk. Without empowerment to make those decisions and have them stick, most security programs die on the vine. A senior sponsor can break down walls and push through tough decisions, making the difference between success and failure.

  • Funding: Regardless of which aspect of security you are trying to systematize, it costs money. This contributes to another key reason programs fail: lack of resources. We also see a lot of organizations kickstart new programs by just throwing new responsibilities at existing employees, with no additional compensation or backfill for their otherwise overflowing plates. That’s not sustainable, so a key aspect of program establishment is allocating money to the initiative.

  • Governance: Who is responsible for operation of the program? Who makes decisions when it needs to evolve? What is the escalation path when someone doesn’t play nice or meet agreed-upon responsibilities? Without proper definition of responsibilities, and sufficient documentation so revisionist history isn’t a factor, the program won’t be sustainable. These roles need to be defined when the program is being formally established, because it’s much easier to make these decisions and get everyone on board before it goes live. If it does not go well people will run for cover, and if the program is a success everyone will want credit.

  • Operations: This will vary greatly between different kinds of programs, but you need to define how you will achieve your program goals. This is the ‘how’ of the program, and don’t forget about an ongoing feedback and improvement loop so the program continues to evolve.

  • Success criteria: In security this can be a bit slippery, but it’s hard to claim success without everyone agreeing what success means. Spend some time during program establishment to focus on applicable metrics, and be clear about what success looks like. Of course you can change your definition once you get going and learn what is realistic and necessary, but if you fail to establish it up front, you will have a hard time showing value.

  • Integration points: No program stands alone, so there will be integration points with other groups or functions within the organization. Maybe you need data feeds from the security monitoring group, or entitlements from the identity group. Maybe your program defines actions required from other groups. If the ultimate success of your program depends on other teams or functions within the organization (and it does, because security doesn’t stand alone), then making sure everyone is crystal clear about integration points and responsibilities from the beginning is critical.

The V(IT)RM Program

To tailor the generic structure above to vendor IT risk management you need to go through the list, make some decisions, and get everyone on board. Sounds easy, right? Not so much, but doing this kind of work now will save you from buying Tums by the case as your program goes operational.

We are not going to tell you exactly what governance and accountability need to look like for your program, because that is heavily dependent on your culture and organization. Just make sure someone is accountable, and operational responsibilities are defined. In some cases this kind of program resides within a business unit managing vendor relationships, other times it’s within a central risk management group, or it could be somewhere else. You need to figure out what will work in your environment.

One thing to pay close attention to, particularly for risk management, is contracts. You enter business agreements with vendors every day, so make sure the contract language reflects your program objectives. If you want to scan vendor environments for vulnerabilities, that needs to be in your contracts. If you want them to do an extensive self-survey or provide a data center tour, that needs to be there. If your contracts don’t include this kind of language, look at adding an addendum or forcing a contract overhaul at some point. That’s a decision for the business people managing your vendor relationships.

  • Defining Vendor Risk: The first key requirement of a vendor risk management program is actually defining categories in which to group your vendors. We will dig into this in our next post, but these categories define the basis for your operation of the entire program. You will need to categorize both vendors and the risks they present so you know what actions to take, depending on the importance of the vendor and the type of risk.

  • Operations: How will you evaluate the risk posed by each vendor? Where will you get the information and how will you analyze it? Do you reward organizations for top-tier security? What happens when a vendor is a flaming pile of IT security failure? Will you just talk to them and inform them of the issues? Will you lock them out of your systems? It will be controversial if you take a vendor off-line, so you need to have had all these discussions with all your stakeholders before any action takes place. Which is why we constantly beat the drum for documentation and consensus when establishing a program.

  • Success Criteria/Metrics: There is of course only one metric that is truly important: whether a breach resulted from a vendor connection. OK, maybe that’s a bit overstated, but that is what the Board of Directors will focus on. Success likely means no breaches due to vendor exposure. Operationally you can set metrics around the number of vendors assessed (100% may not be practical if you have thousands of vendors), or perhaps how many vendors fall into each category and which direction those trends are heading. There is only so much you can do to impact the security posture of your vendors, but you can certainly take action to protect yourself if a vendor is deemed to pose an unacceptable risk.

  • Tuning: In a V(IT)RM program, the most critical inputs are the categories of importance and risk. So when tuning the program over time, you want to know how many of your vendors were breached and whether any of those breaches resulted in loss to you. If there was a breach, did you identify the risk ahead of time – basically having a good idea that vendor would have an issue? Or was it a surprise? The objectives of tuning are to eliminate surprises and wasted effort.
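To make the category discussion above concrete, here is a minimal sketch of how importance and risk categories might drive program actions. This is purely illustrative – the tier names, actions, and the idea of encoding them as a lookup table are our assumptions, not a prescription.

```python
# Hypothetical sketch: mapping a vendor's importance tier and assessed
# risk tier to a documented program action. All tier names and actions
# here are invented for illustration.

RISK_ACTIONS = {
    # (importance, risk) -> action
    ("critical", "high"):   "restrict access pending remediation",
    ("critical", "medium"): "require remediation plan within 30 days",
    ("critical", "low"):    "annual reassessment",
    ("standard", "high"):   "notify vendor and reassess in 90 days",
    ("standard", "medium"): "annual reassessment",
    ("standard", "low"):    "reassess on contract renewal",
}

def vendor_action(importance: str, risk: str) -> str:
    """Look up the documented action for a vendor's category pair."""
    # Anything outside the documented pairs gets escalated to a human.
    return RISK_ACTIONS.get((importance, risk), "escalate for manual review")

print(vendor_action("critical", "high"))  # restrict access pending remediation
print(vendor_action("unknown", "high"))   # escalate for manual review
```

The point of writing the mapping down (even as a table like this) is the documentation and consensus theme above: when a vendor gets locked out, nobody should be surprised by which rule fired.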

Of course many aspects of the program, if not all, change over time as technology improves and requirements evolve. That is to be expected, and part of the program has to be a specific set of activities focused around gathering feedback and tuning the program as described above. We also believe strongly that programs need to be documented (yes, written down), so if (or should we say when) something goes south you have documentation that someone else understood the potential issues. Even if you write it in pencil, write it and make sure all of the stakeholders understand what they agreed to.

Our next post will dig into the risk and importance categories, and how to gather that kind of information without totally relying on vendor self-reporting.

—Mike Rothman

Wednesday, March 16, 2016

Firestarter: The Rugged vs. SecDevOps Smackdown

By Rich

After a short review of the RSA Security Conference, Rich, Mike, and Adrian debate the value of using labels like “Rugged DevOps” or “SecDevOps”. Rich sees them as different, Mike wonders if we really need them, and Adrian has been tracking their reception on the developer side of the house. Okay, it’s pathetic as smackdowns go, but you wouldn’t have read this far if we didn’t give it an interesting title.

Watch or listen:


Tuesday, March 15, 2016

Building a Vendor IT Risk Management Program: Understanding Vendor IT Risk

By Mike Rothman

Outsourcing is nothing new. Industries have been embracing service providers for years, for functions they either couldn’t or didn’t want to perform. This necessarily involved integrating business systems and providing these third-party vendors with access to corporate networks and computer systems. The risk was generally deemed manageable and rationalized by the business need for those integrated processes. Until it wasn’t.

The post-mortem on a recent very high-profile data breach indicated the adversary got into the retailer’s network, not through their own systems, but instead through a trusted connection with a third-party vendor. Basically the attacker owned a small service provider, and used that connection to gain a foothold within the real target’s environment. The path of least resistance into your environment may no longer be through your front door. It might be through a back door (or window) you left open for a trading partner.

Business will continue to take place, and you will need to provide access to third parties. Saying ‘no’ is not an option. But you can no longer just ignore the risks vendors present. They dramatically expand your attack surface, which now includes the environments of all the third parties with access to your systems. Ugh.

This could be thousands of different vendors. No, we aren’t forgetting that most of you don’t have the skills or resources to stay on top of your own technology infrastructure – not to mention critical data moving to cloud resources. Now you also need to worry about all those other organizations you can neither control nor effectively influence. Horrifying.

This is when you expect Tom Cruise to show up, because this sounds like the plot to the latest Mission: Impossible sequel. But unfortunately this is your lot in life. Yet there is hope, because threat intelligence services can now evaluate the IT risk posed by your trading partners, without needing access to their networks.

In our new Building a Vendor Risk Management Program series we will go into why you can no longer ignore vendor risk, and how these services can actually pinpoint malicious activity on your vendors’ networks. But just having that information is (no surprise) not enough. To efficiently and effectively manage vendor risk you need a systematic program to evaluate dangers to your organization and objectively mitigate them.

We would like to thank our friends at BitSight Technologies, who have agreed to potentially license the content in this series upon completion. As always, we will write the series using our Totally Transparent Research methodology in a totally objective and balanced way.


You know something has been a problem for a while when regulators establish guidance to address it. Back in 2013 the regulators overseeing financial institutions in the US seemed to get religion about the need to assess and monitor vendor risk, and IT risk was a subset of the guidance they produced. Of course, as with most regulation, enforcement has been spotty, and the guidance didn’t really offer a prescriptive description of what a ‘program’ consists of. It’s not like the 12 (relatively) detailed requirements you get with the PCI-DSS.

In general, the guidance covers some pretty straightforward concepts. First you should actually write down your risk management program, and then perform proper due diligence in selecting a third party. I guess you figure out what ‘proper’ means when the assessor shows up and lets you know that your approach was improper. Next you need to monitor vendors on an ongoing basis, and have contingency plans in case one screws up and you need to get out of the deal. Finally you need program oversight and documentation, so you can know your program is operational and effective. Not brain surgery, but also not very specific.

The most detail we have found comes from the OCC (Office of the Comptroller of the Currency), which recommends an assessment of each vendor’s security program in its Risk Management Guidance.

Information Security

Assess the third party’s information security program. Determine whether the third party has sufficient experience in identifying, assessing, and mitigating known and emerging threats and vulnerabilities. When technology is necessary to support service delivery, assess the third party’s infrastructure and application security programs, including the software development life cycle and results of vulnerability and penetration tests. Evaluate the third party’s ability to implement effective and sustainable corrective actions to address deficiencies discovered during testing.

No problem, right? Especially for those of you with hundreds (or even thousands) of vendors within the scope of assessment.

We’ll add our standard disclaimer here, that compliance doesn’t make you secure. It cannot make your vendors secure either. But it does give you a reason to allocate some funding to assessing your vendors and making sure you understand how they affect your attack surface and exploitability.

The Need for a Third-Party Risk Program

Our long-time readers won’t be surprised that we prescribe a program to address a security need. Managing vendor IT risk is no different. In order to achieve consistent results, and be able to answer your audit committee about vendor risk, you need a systematic approach to plan the work, and then work the plan.

Here are the key areas of the program we will dig into in this series:

  • Structuring the V(IT)RM Program: First we’ll sketch out a vendor risk management program, starting with executive sponsorship, and defining governance and policies that make sense for each type of vendor you are dealing with. In this step you will also define risk categories and establish guidelines for assigning vendors to each category.
  • Evaluating Vendor Risk: When assessing vendors you have limited information about their IT environments. This post will dig into how to balance the limitations of what vendors self-report against external information you can glean regarding their security posture and malicious activity.
  • Ongoing V(IT)R Monitoring and Communication: Once you have identified the vendors presenting the greatest risk, and taken initial action, how do you communicate your findings to vendors and internal management? This is especially important for vendors which present significant risk to your environment. Then you need to operationalize the program to systematically keep those evaluations current. Given the rapid pace of IT (and security) change, you cannot simply assume a vendor will stay at the same risk level for an extended period of time.

—Mike Rothman

Thursday, March 10, 2016

SIEM Kung Fu: Getting Started and Sustaining Value

By Mike Rothman

So far in our SIEM Kung Fu series we have discussed SIEM Fundamentals and some advanced use cases to push your SIEM beyond its rather limited out-of-the-box capabilities. As we wrap up the series, we turn to your SIEM operational processes, to make the technology more useful over time.

Many failed SIEM projects over the past 10 years have not been technology failures. More stumble over a lack of understanding of the time and resources needed to get value from a SIEM during early deployment, and of the ongoing effort required to keep it current and tuned. So a large part of SIEM Kung Fu is just making sure you have the people and process in place to leverage the technology effectively and sustainably.

Getting Started

As a matter of practice you should be focused on getting quick value out of any new technology investment, and SIEM is no exception. Even if you have had the technology in place for years, it’s useful to take a fresh look at the implementation to see if you missed any low-hanging fruit that’s there for the taking. Let’s assume you already have the system up and running, are aggregating log and event sources (including things like vulnerability data and network flows), and have already implemented some out-of-the-box policies. You already have the system in place – you are just underutilizing it.


For a fresh look at SIEM we recommend you start with adversaries. We described adversary analysis in detail in the CISO’s Guide to Advanced Attackers (PDF). Start by determining who is most likely to attempt to compromise your environment, and defining a likely attacker mission. Then profile potential adversaries to determine the groups most likely to attack you. At that point you can get a feel for the Tactics, Techniques, and Procedures (TTPs) those adversaries are most likely to use. This information typically comes from a threat intelligence service, although some information sharing groups can also offer technical indicators to focus on.

Armed with these indicators you engage your SIEM to search for them. This is a form of hunting, which we will detail later in this post, and you may well find evidence of active threat actors in your environment. This isn’t a great outcome for your organization, but it does prove the value of security monitoring.

At that point you can triage the alerts you have received from SIEM searches to figure out whether you are dealing with false positives or a full-blown incident. We suggest you start with the attacks of your most likely adversaries, among the millions of indicators you can search for. And odds are you’ll find lots of things, if you search for anything and everything. By initially focusing on adversaries you are restricting your search to the attack patterns most likely to be used against you.

Two Tracks

Once you have picked the low-hanging fruit from adversary analysis, focus shifts toward putting advanced use cases into a systematic process that is consistent and repeatable. Let’s break up the world into two main categories of SIEM operations to describe the different usage models: reactive and proactive.


Reactive usage of SIEM should be familiar because that’s how most security teams function. It’s the alert/triage/respond cycle. The SIEM fires an alert, your tier 1 analysts figure out whether it’s legitimate, and then you figure out how to respond – typically via escalation to tier 2. You can do a lot to refine this process as well, so even if you are reacting you can do it more efficiently. Here are a few tips:

  1. Leverage Threat Intel: As we described above under adversary analysis, and in our previous post, you can benefit from the misfortune of others by integrating threat intelligence into your SIEM searches. If you see evidence of a recent attack pattern (provided by threat intel) within your environment, you can get ahead of it. We described this in our Leveraging Threat Intel in Security Monitoring paper. Use it – it works.
  2. User Behavioral Analytics (UBA): You can also figure out the relative severity of a situation by tracking the attack to user activity. This involves monitoring activity (and establishing the baselines/profiles described in our last post) not just by device, but also aggregating data and profiling activity for individuals. For example, instead of just monitoring the CEO’s computer, tablet, and smartphone independently, you can look at all three devices to establish a broader profile of the CEO’s activity. Then if you see any of her devices acting outside that baseline, that would trigger an alert you can triage/investigate.
    1. Insider Threat: You can also optimize some of your SIEM rules around insiders. During many attacks an adversary eventually gains a foothold in your environment and becomes an insider. You can optimize your SIEM rules to look for activity specifically targeting things you know would be valuable to insiders, such as sensitive data (both structured and unstructured). UBA is also useful here because you are profiling an insider and can watch for them doing strange reconnaissance, or possibly moving an uncharacteristically large amount of data.
    2. Threat Modeling: Yes, advanced SIEM users still work through the process of looking at specific, high-value technology assets and figuring out the best ways to compromise them. This is predominantly used in the “external stack attack” use case described in our last post. By analyzing the ways to break an application (or technology stack), SOC analysts can build SIEM rules from those attack patterns, to detect evidence an asset is being targeted.
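The UBA idea above – aggregate activity across all of a user’s devices, then alert on deviations from that user’s baseline – can be illustrated with a toy sketch. The data, field shapes, and z-score threshold are all assumptions for illustration, not any product’s actual logic.

```python
# Illustrative user-level baselining: combine daily outbound volume from
# all of one user's devices, then flag days that deviate far from that
# user's historical mean. Threshold and numbers are invented examples.
from statistics import mean, stdev

def flag_anomalies(daily_bytes, threshold=2.0):
    """Return indices of days whose volume exceeds mean + threshold * stdev."""
    if len(daily_bytes) < 2:
        return []
    mu, sigma = mean(daily_bytes), stdev(daily_bytes)
    if sigma == 0:
        return []
    return [i for i, b in enumerate(daily_bytes) if (b - mu) / sigma > threshold]

# Profile the user, not each device: sum per-device telemetry per day.
laptop = [120, 110, 130, 125, 118, 122, 127]
phone  = [30, 28, 35, 33, 29, 31, 5000]   # exfiltration-sized spike on day 6
tablet = [10, 12, 9, 11, 10, 13, 12]
user_total = [sum(day) for day in zip(laptop, phone, tablet)]
print(flag_anomalies(user_total))  # [6]
```

Monitoring each device in isolation might miss a spike a user spreads across devices; profiling the aggregate is what makes the CEO example in the text work.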

Keep in mind that you need to consistently look at your SIEM ruleset, add new attack patterns/use cases, and prune rules that are no longer relevant. The size of your ruleset correlates to the performance and responsiveness of your SIEM, so you need to balance looking for everything (and crushing the system) against your chance of missing something.

This is a key part of the ongoing maintenance required to keep your SIEM relevant and valuable. Whether you get new rules from a threat intelligence vendor, drinking buddies, or conferences, new rules require time to refine thresholds and determine relevance to your organization. So we reiterate that SIEM is not a “set it and forget it” technology – no security analytics tool is. Anyone telling you different is selling you a bill of goods.


Before we dive into the concept of proactivity we need to spend a minute on our soapbox about the general idea of “getting ahead of the threat.” We (still) don’t believe you can get ahead of threats or detect zero-day attacks, or any other such security marketing nonsense. What you can do is shorten the window between when you are attacked and when you know about it. So that’s the main objective of SIEM Kung Fu: to shorten this window by whatever means you can.

The reactive approach is to set SIEM rules to fire alerts based on certain conditions, and then react to them. The proactive approach is to task a human with trying to find situations where attacks are happening, which wouldn’t trigger an alert. These folks like to be called hunters, which sounds much better than “SOC analyst”.

Why wouldn’t the SIEM alert have fired already? Maybe the attack is just beginning. Maybe the adversary is just doing recon and mostly hiding to evade detection. Whatever the cause, the rules you set in the SIEM haven’t triggered yet, and a skilled human may be able to find evidence of the attack before your monitoring tools.

The hunter’s tool set is more about threat intel, effective search and analytics, and a lot of instinct for what attackers will do, than a flexible SIEM rules engine. For example a hunter might see that a recent business partnership your company announced has irritated factions in Eastern Europe. So the hunter does a little research and finds a new method, used by attackers in that region, of compromising a recent version of Windows by gaining kernel access and then replacing system files. Then the hunter searches to see whether egress traffic is headed to known C&C channels used by those groups, and also searches your endpoint telemetry for instances where that system file was changed recently. Of course you could set a rule to look for this activity moving forward, but the hunter is able to mine existing security data for that set of conditions to see if an attack has already happened.
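A hunt like that boils down to cross-referencing two data sets against two indicator lists. Here is a hedged sketch of that search; the record shapes, hostnames, file path, and indicator values are all invented for illustration.

```python
# Sketch of the hunt described above: cross-reference endpoint file-change
# telemetry against the system files the exploit replaces, and egress
# connection logs against known C&C destinations. All values are made up.

CNC_HOSTS = {"203.0.113.50", "198.51.100.17"}          # from threat intel
WATCHED_FILES = {"C:\\Windows\\System32\\lsass.exe"}   # files the exploit swaps

file_events = [
    {"host": "ws-114", "path": "C:\\Windows\\System32\\lsass.exe", "action": "modified"},
    {"host": "ws-207", "path": "C:\\Users\\jd\\notes.txt", "action": "modified"},
]
egress = [
    {"host": "ws-114", "dst": "203.0.113.50", "port": 443},
    {"host": "ws-301", "dst": "192.0.2.9", "port": 80},
]

tampered = {e["host"] for e in file_events
            if e["path"] in WATCHED_FILES and e["action"] == "modified"}
beaconing = {c["host"] for c in egress if c["dst"] in CNC_HOSTS}

# Hosts matching both conditions deserve immediate investigation.
print(sorted(tampered & beaconing))  # ['ws-114']
```

The value of running this as a retrospective search, rather than only a forward-looking rule, is exactly the hunter’s point: it can surface an attack that already happened.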

As another example a hunter might go to a security conference and learn about a new technique to overflow memory. After playing around with it in your lab, the hunter knows what to look for on each endpoint. Then a search can be initiated for that activity, even though there hasn’t been evidence of that technique being used in the wild yet. Hunters have great leeway to follow their instincts, and SIEM tools need to offer flexible (and fast) enough search to find strings and pull on them.

SIEM Kung Fu for hunters is about giving them the platform to do their job. They are skilled professionals, with their own tools for when they really want to dig into a device or attack. But a SIEM can be very useful for helping them narrow their focus to devices that require more investigation and provide a means to analyze patterns of activity that could yield clues to which threats are active.

Whether you are implementing a set of SIEM rules to react to attacks or giving a set of hunters the ability to identify potential compromises in your environment, your security monitoring platform can be leveraged to enable faster detection and triage. And when you are racing an active adversary, time is not your friend.

With that, we wrap up our SIEM Kung Fu series. We will assemble these posts into a paper over the next week, so if you have any comments or feedback about the research, let us know in the comments, via the Tweeter (@securosis) or via email.

—Mike Rothman

Wednesday, March 09, 2016

Incite 3/9/2016: Star Lord

By Mike Rothman

Everything is a game nowadays. Not like Words with Friends (why yes, since you ask – I do enjoy getting my ass kicked by the women in my life) or even Madden Mobile (which the Boy plays constantly) – I’m talking about gamification. In our security world, the idea is that rank and file employees will actually pay attention to security stuff they don’t give a rat’s ass about… if you make it all into a game. So get departments to compete for who can do best in the phishing simulation. Or give a bounty to the team with the fewest device compromises due to surfing pr0n. Actually, though, it might be more fun to post the link that compromised the machine in the first place. The employee with the nastiest NSFW link would win. And get fired… But I digress.

I find that I do play these games. But not on my own device. I’m kind of obsessed with Starbucks’ loyalty program. If you accumulate 12 stars you get a free drink. It’s a great deal for me. I get a large brewed coffee most days. I don’t buy expensive lattes, and I get the same star for every drink I buy. And if I have the kids with me, I’ll perform 3 or 4 different transactions, so I can get multiple stars. When I get my reward drink, I get a 7 shot Mocha. Yes, 7 shots. I’m a lot of fun in the two hours after I drink my reward.

And then Starbucks sends out promotions. For a while, if you ordered a drink through their mobile app, you’d get an extra star. So I did. I’d sit in their store, bust open my phone, order the drink, and then walk up to the counter and get it. Win! Extra star! Sometimes they’d offer 3 extra stars if you bought a latte drink, an iced coffee, and a breakfast sandwich within a 3-day period. Well, a guy’s gotta eat, right? And I was ordering the iced coffee anyway in the summer. Win! Three bonus stars. Sometimes they’d send a request for a survey and give me a bunch of stars for filling it out. Win! I might even be honest on the survey… but probably not. As long as I get my stars, I’m good.

Yes, I’m gaming the system for my stars. And I have two reward drinks waiting for me, so evidently it’s working. I’m going to be in Starbucks anyway, and drinking coffee anyway – I might as well optimize for free drinks.


Oh crap, what the hell have I become? A star whore? Ugh. Let’s flip that perspective. I’m the Star Lord. Yes! I like that. Who wants to be Groot?

Pretty much every loyalty program gets gamed. If you travel like I do, you have done the Dec 30 or 31 mileage run to make the next level in a program. You stay in a crappy Marriott 20 miles away from your meeting, instead of the awesome hotel right next to the client’s office. Just to get the extra night. You do it. Everyone does.

And now it’s a cat and mouse game. The airlines change their programs every 2-3 years, to force customers to find new ways to optimize mileage accumulation. Starbucks is changing their program to reward customers based on what they spend. The nerve of them. Now it will take twice as long to get my reward drinks. Until I figure out how to game this version of the program. And I will, because to me gaming their game is the game.


Photo credit: “Star-Lord ord” from Dex

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

Incite 4 U

  1. An expensive lie: Many organizations don’t really take security seriously. It has never been proven that breaches cause lost business (correlation is not causation), nor have compliance penalties been sufficient to spur action. Is that changing? Maybe. You can see a small payment processor like Dwolla getting fined $100K for falsely claiming that “information is securely encrypted and stored”. Is $100K enough? Does it need to be $100MM? I don’t know, but at some point these regulations should have enough teeth that companies start to take them seriously. But you have to wonder, if a fintech start-up isn’t “securely encrypting and storing” customer data, what the hell are they doing? – MR

  2. Payment tokens for you and me: NFC World is reporting that Visa will retire alternate PANs issued to host card emulators for mobile payments, without giving an actual EOL date. We have been unable to verify this announcement, but it’s not surprising because that specification is at odds with EMVco’s PAR tokenization approach, which we discussed last year – which is leveraged by ApplePay, SamsungPay, and others. This is pretty much the end of host card emulation and any lingering telco secure element payment schemes. What is surprising many people is the fact that, if you read Visa and Mastercard’s recent announcements, they are both positioning themselves as cloud-based security vendors – offering solutions for identity and payment in cars, wearables, and other mobile devices. Visa’s Tokenization Services, Mastercard’s tokens, and several payment wallets all leverage PAR tokens provided by various Tokenization-as-a-Service offerings. And issuing banks are buying this service as well! For security and compliance folks this is good news, because the faster this conversion happens, the faster the enterprise can get rid of credit cards. And once those are gone, so too are all the supporting security functions you need to manage. Security vendors, take note: you have new competitors in mobile device security services. – AL

  3. Well, at least the pace of tech innovation is slowing… I can do nothing but laugh at the state of security compliance. The initiative that actually provided enough detail to help lost organizations move forward, the PCI-DSS, is evidently now very mature. So mature that they don’t need another major update. Only minor updates, with long windows to implement them, because… well, just because. These retailers are big and they move slowly. But attackers move and innovate fast. So keeping our current low bar forever seems idiotic. Attackers are getting better, so we need to keep raising the bar, and I don’t know how that will happen now. I guess it will take another wave of retailer hacks to shake things up again. Sad. – MR

  4. No need to encrypt: Future Tense captures the essence of Amazon’s removal of encryption from Fire devices: Inexpensive parts, like weak processors, would be significantly burdened when local encryption was on, and everything would slow down. This is not about bowing to federal pressure – it is cost-cutting on a money-losing device. And let’s be honest – these are not corporate devices, and no one reading this allows Amazon Fires onto their business networks. Not every mobile device deserves security hardening. Most people have a handful of devices with throw-away data, and convenience devices need very little security. The handful of people I know with Kindle or Fire devices consider them mobile infotainment systems – the only data on the device is a Gmail account, which has already been hacked, and the content they bought from Amazon. Let’s pick our battles. – AL

  5. I don’t get it, but QUANTUM! I wish I knew more about things like quantum computing, and that I had time to read papers and the like to get informed. Evidently progress is being made on new quantum computing techniques that will make current encryption obsolete. Now they have a 5-atom quantum computer. I have no idea what that even means, but it sounds cool. Is it going to happen tomorrow? Nope. I won’t be able to get a quantum computer from Amazon for a while, but the promise of these new technologies to upend the way we have always done things is a useful reminder. Don’t get attached to anything. Certainly not technology, because it’s not going to be around for long. Whichever technology we’re talking about. – MR

—Mike Rothman

Tuesday, March 08, 2016

SIEM Kung Fu: Advanced Use Cases

By Mike Rothman

Given the advance of SIEM technology, the use cases described in the first post of our SIEM Kung Fu series are very achievable. But with the advent of more packaged attack kits leveraged by better organized (and funded) adversaries, and the insider threat, you need to go well beyond what comes out of the [SIEM] box, and what can be deployed during a one-week PoC, to detect real advanced attacks.

So as we dig into more advanced use cases we will tackle how to optimize your SIEM to both a) detect advanced attacks and b) track user activity, to identify possible malicious insider behavior. There is significant overlap between these two use cases. Ultimately, in almost every successful attack, the adversary gains presence on the network and therefore is technically an insider. But let’s take adversaries out of play here, because in terms of detection, whether the actor is external or internal to your organization doesn’t matter. They want to get your stuff.

So we’ll break up the advanced use cases by target. It might be the application stack directly (from the outside), to establish a direct path to the data center, without requiring any lateral movement to achieve the mission. The other path is to compromise devices (typically through an employee), escalate privileges, and move laterally to achieve the mission. Both can be detected by a properly utilized SIEM.

Attacking Employees

The most prominent attack vector we see in practice today is the advanced attack, which is also known as an APT or a kill chain, among other terms. But regardless of what you call it, this is a process which involves an employee device being compromised, and then used as a launching point to systematically move deeper within an organization – to find, access, and exfiltrate critical information. Detecting this kind of attack requires looking for anomalous behavior at a variety of levels within the environment. Fortunately employees (and their devices) should be reasonably predictable in what they do, which resources they access, and their daily traffic patterns.

In a typical device-centric attack an adversary follows a predictable lifecycle: perform reconnaissance, send an exploit to the device, and escalate privileges, then use that device as a base for more reconnaissance, more exploits, and to burrow further into the environment. We have spent a lot of time on how threat detection needs to evolve and how to catch these attacks using network-based telemetry.

Leveraging your SIEM to find these attacks is similar; it involves understanding the trail the adversary leaves, the resulting data you can analyze, and patterns to look for. An attacker’s trail is based specifically on change. During any attack the adversary changes something on the device being attacked. Whether it’s the device configuration, creating new user accounts, increasing account privileges, or just unusual traffic flows, the SIEM has access to all this data to detect attacks.

Initial usage of SIEM technology was entirely dependent on infrastructure logs, such as those from network and security devices. That made sense because SIEM was initially deployed to stem the flow of alerts streaming in from firewalls, IDS, and other network security devices. But that offered a very limited view of activity and eventually became easy for adversaries to evade. So over the past decade many additional data sources have been integrated into the SIEM to provide a much broader view of your environment.

  • Endpoint Telemetry: Endpoint detection has become very shiny in security circles. There is a ton of interest in doing forensics on endpoints, and if you are trying to figure out how the proverbial horse left the barn, endpoint telemetry is great. Another view is that devices are targeted in virtually every attack, so highly detailed data about exactly what’s happening on an endpoint is critical – not just to incident response, but also to detection. And this data (or the associated metadata) can be instrumental when watching for the kind of change that may indicate an active threat actor.
  • Identity Information: Inevitably, once an adversary has presence in your environment, they will go after your identity infrastructure, because that is usually the path of least resistance for access to valuable data. So you need access to identity stores; watch for new account creation and new privilege entitlements, which are both likely to identify attacks in process.
  • Network Flows: The next step in the attack is to move laterally within the environment, and move data around. This leaves a trail on the network that can be detected by tracking network flows. Of course full packet capture provides the same information and more granularity, with a greater demand for data collection and analytics.
  • Threat Intelligence: Finally, you can leverage external threat data and IP reputation to pinpoint egress network traffic that may be headed places you know are bad. Exfiltration now typically includes proprietary encryption, so you aren’t likely to catch the act through content analysis; instead you need to track where data is headed. You can also use threat intelligence indicators to watch for specific new attacks in your environment, as we have discussed ad nauseam in our threat intelligence and security monitoring research.
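The threat intelligence use case above boils down to matching egress destinations against a feed of known-bad indicators. A minimal sketch in Python, assuming a hypothetical flow record format and a hard-coded indicator set (a real deployment would pull indicators from a feed and express this in the SIEM’s own rule language):

```python
# Flag egress flows whose destination appears in a threat intelligence feed.
# The flow field names and the indicator set are assumptions for illustration.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # hypothetical indicator feed

def flag_egress(flows):
    """Return flows leaving the network toward known-bad destinations."""
    hits = []
    for flow in flows:
        if flow["direction"] == "egress" and flow["dst_ip"] in KNOWN_BAD_IPS:
            hits.append(flow)
    return hits

flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "direction": "egress"},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "direction": "internal"},
]
matches = flag_egress(flows)
```

The same lookup pattern applies to file hashes, domains, and URLs from an indicator feed; only the matched field changes.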

The key to using this data to find advanced attacks is to establish a profile of what’s normal within your environment, and then look for anomalous activity. We know anomaly detection has been under discussion in security circles for decades, but it is still one of the top ways to figure out when attackers are doing their thing in your environment. Of course keeping your baseline current and minimizing false positives are keys to making a SIEM useful for this use case, and that requires ongoing effort and tuning. No security monitoring tool just works – so go in with your eyes open regarding the amount of work required.
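To make the baseline-then-anomaly idea concrete, here is a minimal sketch in Python: build a per-host baseline of daily outbound bytes, then flag days that deviate by more than a few standard deviations. The feature (daily byte counts) and the threshold are illustrative assumptions; production SIEMs use many more features and more sophisticated statistics.

```python
import statistics

def build_baseline(history):
    """history: daily outbound byte counts for one host during normal operation."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(today, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline."""
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > threshold

history = [1_200, 1_350, 1_180, 1_400, 1_250, 1_300, 1_220]
mean, stdev = build_baseline(history)
spike = is_anomalous(50_000, mean, stdev)   # exfiltration-sized spike
normal = is_anomalous(1_280, mean, stdev)   # within the usual range
```

Note the tuning trade-off mentioned above lives in `threshold`: lower it and you catch subtler changes but drown in false positives; raise it and quiet attackers slip under the baseline.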

Multiple data points

Speaking of minimizing false positives, how can you do that? More SIEM projects fail due to alert exhaustion than for any other reason, so don’t rely on any single data point to produce a verdict that an alert is legitimate and demands investigation. Reduction of false positives is even more critical because of the skills gap which continues to flummox security professionals. Using a SIEM you can link together seemingly disconnected data sources to validate alerts and make sure the alarm is sounded only when it should be.

But what does that look like in practice? You need to make sure a variety of conditions are matched before an alert fires, and increase the urgency of an alert according to the number of conditions triggered. This simplified example illustrates what you can do with the SIEM you likely already have.

  1. Look for device changes: If a device suddenly registers a bunch of new system files installed, and you aren’t in the middle of a patch cycle, there may be something going on. Is that enough to pull the alarm? Probably not yet.
  2. Track identity: Next you’ll see a bunch of new accounts appear on the device, and then see the domain controller targeted for compromise. Once the domain controller falls, it’s pretty much game over, because the adversary can then set up new accounts and change entitlements; so tracking the identity infrastructure is essential.
  3. Look for internal reconnaissance: Finally you’ll see the compromised device scanning everything else on the network, both so the attacker can gain his/her bearings, and also for additional devices to compromise. Traffic on internal network segments should be pretty predictable, so variations from typical traffic flows usually indicate something funky.

Does any one of these data points alone indicate an attack? Probably not. But if you see multiple indicators at the same time, odds are something bad is happening.
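The three steps above amount to a weighted correlation rule: no single indicator fires an alert, but several together do. A minimal sketch, assuming hypothetical indicator names and weights (a real correlation rule would be expressed in the SIEM itself, with weights tuned to your environment):

```python
# Combine weak indicators into one alert score; fire only when several
# trigger together. Names, weights, and threshold are illustrative assumptions.
INDICATOR_WEIGHTS = {
    "unexpected_system_files": 1,   # new binaries outside a patch window
    "new_local_accounts": 2,        # account creation on the device
    "domain_controller_auth": 3,    # unusual activity against the DC
    "internal_scanning": 2,         # device probing internal segments
}
ALERT_THRESHOLD = 4  # chosen so no single indicator alerts on its own

def score_alert(observed):
    return sum(INDICATOR_WEIGHTS.get(name, 0) for name in observed)

def should_alert(observed):
    return score_alert(observed) >= ALERT_THRESHOLD

single = should_alert({"unexpected_system_files"})
combo = should_alert({"new_local_accounts", "internal_scanning"})
```

The score also gives you the urgency knob mentioned earlier: a higher combined score can route the alert to a faster response queue.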

Modern SIEMs come with a variety of rules or policies to look for common attack patterns. They are helpful for getting started, and increasing use of data analytics will help you refine your thresholds for alerts, increasing accuracy and reducing false alarms.

Application Stack Attacks

We alluded to this above, but to us an “application stack attack” is not just a cute rhyme – it describes how a sophisticated adversary takes advantage of weaknesses within an application, or another part of an application stack, to gain a foothold in your environment and access data of interest. There are a number of application stack data sources you can pump into a SIEM to look for attacks on the application. These include:

  • Machine Data: The first step in monitoring applications is to instrument them to generate “machine data”. This could be information on different transaction types or login failures, search activity, or almost anything that can be compromised by an attacker. Determining how and where to instrument an application involves threat modeling the application to make sure the necessary hooks are built into the app. The good news is that as more and more applications move to SaaS environments, a lot of this instrumentation is there from the start. But with SaaS you get what you get, and don’t have much influence on which information is available.
  • APIs: Applications are increasingly composed of a variety of components, residing in a variety of different places (both inside and outside your environment), so watching API traffic has become key. We have researched API security, so refer back to that paper for specifics about authentication and authorizing specific API calls. You will want to track API usage and activity to profile normal activity for the application, and then start looking for anomalies.
  • Database Tier: This last part of the application stack is where the valuable stuff lives. Once an attacker has presence in the database tier, it is usually trivial to access other database tables and reach the stuff they are looking for. So ingest any database activity logs or monitors available, and watch for triggers.

Each application is unique (like a snowflake!) so you won’t be able to get prebuilt rules and policies from your SIEM provider. You need to look at each application to monitor and profile it, building rules and tuning thresholds for the specific application. This is why most organizations don’t monitor their applications to any significant degree… And also why they miss attacks which don’t involve traditional malware or obvious attack patterns.
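Profiling one application at a time, as described above, can start as simply as recording which API endpoints the app normally calls and flagging anything outside that set. A minimal sketch, with hypothetical endpoint names; each application needs its own profile, which is exactly why prebuilt rules don’t transfer:

```python
from collections import Counter

def build_profile(baseline_calls):
    """baseline_calls: endpoint names observed during normal operation."""
    return Counter(baseline_calls)

def unusual_calls(profile, new_calls):
    """Return calls to endpoints never seen during the baseline period."""
    return [c for c in new_calls if c not in profile]

# Hypothetical endpoints recorded while profiling one specific application.
profile = build_profile(["/api/login", "/api/search", "/api/cart", "/api/login"])
flagged = unusual_calls(profile, ["/api/search", "/api/admin/export"])
```

Keeping call counts (rather than a bare set) leaves room to extend the profile later, for example flagging a known endpoint called far more often than its baseline rate.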

Developer Resistance

Collecting sufficient machine data from applications isn’t something most developers are excited about. Applications have historically not been built with instrumentation in mind, and retrofitting instrumentation into an app is more delicate plumbing than designing cool new features. We all know how much developers love to update plumbing. You may need to call for senior management air cover, in the form of a mandate, to get the instrumentation you need into the application. You can only request air support a limited number of times, so make sure the application is sufficiently important first.

More good news: as new applications are deployed using modern development techniques (including DevOps and Continuous Deployment), security is increasingly being built into the stack at a fundamental level. Once the right instrumentation is in the stack, you can stop fighting to retrofit it.

Purpose-built Tools

You are likely to be approached by a variety of new security companies, offering security analytics products better at finding advanced attacks. Do you need yet another tool? Shouldn’t you be able to do this within your SIEM?

The answer is: it depends. Analytics platforms built around a specific use case, like APT or the insider threat, are optimized for a very specific problem. The vendor should know what data is required, where to get it, and how to tune their analytics engine to solve the specific problem. A more general-purpose SIEM cannot be as tuned to solve that specific problem. Your vendor can certainly provide some guidance, and maybe even some pre-packaged correlation rules, but more work will still be required to configure the use case and tune the tool.

On the other hand a security analytics platform is not designed around a SIEM’s other uses. It cannot help you prepare for an audit by generating reports pertinent to the assessment. It won’t offer much in the way of forensics and investigation. These analytics tools just weren’t built to do that, so you’ll still need your SIEM – which means you’ll have two (or more) products for security monitoring; with all the associated purchase, maintenance, and operational costs.

Now that you understand a bit more about how to use a SIEM to address advanced use cases, you need to be able to use your newfound SIEM Kung Fu consistently and systematically. So it’s time to revisit your process in order to factor in the requirements for these advanced use cases. We’ll discuss that in our next post.

—Mike Rothman

Monday, February 29, 2016

Incite 2/29/2016: Leap Day

By Mike Rothman

Today is leap day, the last day of February in a leap year. That means the month of February has 29 days. It happens once every 4 years. I have one friend (who I know of) with a birthday on Leap Day. That must have been cool. You feel very special every four years. And you just jump on the Feb 28 bandwagon to celebrate your birthday in non-leap years. Win/win.

The idea of a four-year cycle made me curious. What was I doing during leap day in 2012? Turns out I was doing the same thing I’ll be doing today – running between meetings at the RSA Conference. This year, leap day is on Monday, and that’s the day I usually spend at the America’s Growth Capital Conference, networking with CEOs and investors. It’s a great way to take the temperature of the money side of the security industry. And I love to moderate the panels, facilitating debate between leaders of the security industry. Maybe I’ll even interject an opinion or two during the event. That’s been known to happen.

leap day

Then I started looking back at my other calendar entries for 2012. The boy was playing baseball. Wow, that seems like a long time ago; it feels like he’s been playing lacrosse forever. The girls were dancing, and they had weekend practices getting ready for their June Disney trip. XX1 was getting ready for her middle school orientation. Now she’s in high school. The 4 years represent less than 10% of my life. But a full third of the twins’ existence. That’s a strange thought.

And have I made progress professionally? I think so. Our business has grown. We’ll have probably three times the number of people at the Disaster Recovery Breakfast, if that’s any measure of success. The cloud security work we do barely provided beer money in 2012, and now it’s the future of Securosis. I’ve deepened relationships with some clients and stopped working with others. Many of my friends have moved to different gigs. But overall I’m happy with my professional progress.

Personally I’m a fundamentally different person. I have described a lot of my transformation here in the Incite, or at least its results. I view the world differently now. I was figuring out which mindfulness practices worked for me back in 2012. That was also the beginning of a multi-year process to evaluate who I was and what changes I needed for the next phase of my life. Over the past four years, I have done a lot of work personally and made those changes. I couldn’t be happier with the trajectory of my life right now.

So this week I’m going to celebrate with many close friends. Security is what I do, and this week is one of the times we assemble en masse. What’s not to love? Even cooler is that I have no idea what I’ll be writing about in 2020.

My future is unwritten, and that’s very exciting. I do know that by the next time a leap year comes along, XX1 will be midway through college. The twins will be driving (oy, my insurance bill!). And in all likelihood, I’ll be at the RSA Conference hanging out with my friends at the W, waiting patiently for a drink. Most things change, but some stuff stays the same. And there is comfort in that.


Photo credit: “60:366” from chrisjtse

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), along with deep dives into cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the post or download the guide directly (PDF).

It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

Incite 4 U

  1. Phisherman’s dream: Brian Krebs has written a lot about small and mid-sized companies being targets for scammers over the last couple years, with both significant financial losses directly from fraud, and indirectly from the ensuing court battles about who ends up paying the bill. Through friends and family, we have been hearing a lot more about this in relation to real estate transactions, captured in a recent article from the Arizona Association of Realtors Hackers Perpetuate Wire Transfer Fraud Scams. Hacking the buyers, mortgage brokers, and title companies, scammers are able to both propel a transaction forward through fake authorizations, and direct funds to the wrong accounts. And once one party is compromised it’s fairly easy to get the other parties too, meaning much of the process can be orchestrated remotely. What’s particularly insidious is that these attacks naturally lead all parties into making major security misjudgments. You trust the emails because they look like they are coming from people you are waiting to hear from, with content you want to see. The result is large sums of money willingly transferred to the wrong accounts; with buyers, sellers, agents, banks, and mortgage brokers all fighting to clean up the mess. – AL

  2. EMET and the reality of software: This recent story about a defect in Microsoft’s EMET which allows attackers to basically turn it off, presents an opportunity to highlight a number of things. First, all software has bugs. Period. This bug, found by the folks at FireEye, turns EMET against itself. It’s code. It’s complicated. And that means there will be issues. No software is secure. Even the stuff that’s supposed to secure us. But EMET is awesome and free. So use it. The other big takeaway from this is the importance of timely patching. Microsoft fixed this issue last Patch Tuesday, Feb 2. It’s critical to keep devices up to date. I know it’s hard and you have a lot of devices. Do it anyway. It’s one of the best ways to reduce attack surface. – MR

  3. My list: On the Veracode blog Jeff Cratty explains to security pros the 5 things I need from you. Discussions like this are really helpful for security people trying to work with developers. Understanding the challenges and priorities each side faces every day makes working together a hell of a lot easier. Empathy FTW. I like Jeff’s list, but I could narrow down mine to two things. First, get me the “air cover” I need to prioritize security over features. Without empowerment by senior management, security issues will never get worked on. DevOps and continuous integration have been great in this regard as teams – for the first time ever – prioritize infrastructure over features, but someone needs to help get security onto the queue. Second, tell me the threats I should really worry about, and get me a list of suitable responses so I can choose what is best for our application stack and deployment model. There are usually many ways to address a specific risk, and I want options, not mandates. – AL

  4. Cutting through the fog of endpoint security marketing: If you are considering updating your endpoint protection (as you should be), Lenny Zeltser offers a great post on questions to ask an endpoint security startup. It’s basically a primer to make any new generation endpoint security player educate you on why and how they are different. They’ll say, “we use math,” like that’s novel. Or “we leverage the cloud” – ho hum. Maybe they’ll drop “deep forensics” nonsense on you. Not that any of those things are false. But it’s really about understanding how they are different. Not just from traditional endpoint protection, but also from the dozens of other new endpoint security players. Great job, Lenny. It’s hard to separate marketing fiction from fact in early markets. Ask these questions to start figuring it out. And make sure your BS detector is working – you’ll need it. – MR

—Mike Rothman