Thursday, March 24, 2016

Incite 3/23/2016: The Madness

By Mike Rothman

I’m not sure why I do it, but every year I fill out brackets for the annual NCAA Men’s College basketball tournament. Over all the years I have been doing brackets, I won once. And it wasn’t a huge pool. It was a small pool in my office, when I used to work in an office, so the winnings probably didn’t even amount to a decent dinner at Fuddrucker’s. I won’t add up all my spending or compare it against my winnings, because I don’t need a PhD in Math to determine that I am way below the waterline.

Like anyone who always questions everything, I should be asking myself why I continue to play. I’m not going to win – I don’t even follow NCAA basketball. I’d have better luck throwing darts at the wall. So clearly it’s not a money-making endeavor.


I guess I could ask the same question about why I sit in front of a Wheel of Fortune slot machine in a casino. Or why I buy PowerBall tickets when the pot goes above $200MM. I understand statistics – I know I’m not going to win slots (over time) or the lottery (ever).

They call the NCAA tournament March Madness – perhaps because most people get mad when their brackets blow up on the second day of the tournament when the team they picked to win it all loses to a 15 seed. Or does that just happen to me? But I wasn’t mad. I laughed because 25% of all brackets had Michigan State winning the tournament. And they were all as busted as mine.

These are rhetorical questions. I play a few NCAA tournament brackets every year because it’s fun. I get to talk smack to college buddies about their idiotic picks. I play the slots because my heart races when I spin the wheel and see if I got 35 points or 1,000. I play the lottery because it gives me a chance to dream. What would I do with $200MM?

I’d do the same thing I’m doing now. I’d write. I’d sit in Starbucks, drink coffee, and people-watch, while pretending to write. I’d speak in front of crowds. I’d explore and travel with my loved ones. I’d still play the brackets, because any excuse to talk smack to my buddies is worth the minimal donation. And I’d still play the lottery. And no, I’m not certifiable. I just know from statistics that I wouldn’t have any less chance to win again just because I won before. Score 1 for Math.
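That “score 1 for Math” point – that a past win doesn’t change the odds of the next draw – is easy to check with a quick simulation. The odds below are toy values (real PowerBall jackpot odds are closer to 1 in 292 million), inflated so the simulation converges quickly:

```python
import random

random.seed(42)
p_win = 0.05       # toy odds, so we see wins in a reasonable number of trials
trials = 200_000

# Simulate pairs of consecutive draws and compare:
#   P(win draw 2)  vs  P(win draw 2 | won draw 1)
first_wins = 0
second_wins = 0
wins_after_win = 0
for _ in range(trials):
    first = random.random() < p_win
    second = random.random() < p_win
    first_wins += first
    second_wins += second
    wins_after_win += first and second

p_second = second_wins / trials
p_second_given_first = wins_after_win / first_wins
# Both estimates land near p_win: winning once doesn't move the needle.
print(round(p_second, 3), round(p_second_given_first, 3))
```

Independence means the conditional probability matches the unconditional one, which is exactly what the two printed numbers show.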


Photo credit: “Now, that is a bracket!” from frankieleon

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Shadow Devices

Building a Vendor IT Risk Management Program

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

Incite 4 U

  1. Enough already: Encryption is a safeguard for data. It helps ensure data is used the way its owner intends. We work with a lot of firms – helping them protect data from rogue employees, hackers, malicious government entities, and whoever else may want to misuse their data. We try to avoid touching political topics on this blog, but the current attempt by US Government agencies to paint encryption as a terrorist tool is beyond absurd. They are effectively saying security is a danger, and that has really struck a nerve in the security community. Forget for a minute that the NSA already has all the data that moves on and off your cellphone, and that law enforcement already has the means to access the contents of iPhones without Apple’s assistance. And avoid wallowing in counter-examples where encryption aided freedom, or illustrations of misuse of power to inspire fear in the opposite direction. These arguments devolve into pig-wrestling – only the pig enjoys that sort of thing. As Rich explained in Do We Have a Right To Security?, this is a simple question of whether anyone (companies or individuals) can have security. Currently the US government (at least the executive branch) says ‘No!’ – as does the UK government. – AL

  2. The US blinks… Following up on Adrian’s rant above, the US government decided it may not need Apple to open the San Bernardino iPhone after all. Evidently a third party would be happy to sell the US government either an exploit or another means to get access to the locked phone. Duh. Like we didn’t already know that was possible. As many of us argued, this case was much more about establishing a precedent for the FBI than about accessing that specific phone. Now that it looks like an uphill climb to win that motion, it’s time to save face and do what they should have done in the first place: pay someone to break into the phone, if they think it’s that important. We have huge respect for law enforcement and what they do, but we could do with less grandstanding and fewer backdoors. Backdoors are stupid. – MR

  3. Hindsight is 20/20: Into the Beretta files goes the case of Ryan Collins, who was behind the attacks on celebrity iPhones. This is the attacker who stole the pictures. It’s not clear how much he made by selling them, but it was probably not worth the felony violation he will plead to or the associated jail time. They are still looking for the person who actually posted the pictures. But that guy is even dumber – he didn’t make any money, apparently because content wants to be free. All I have to say is: idiots. – MR

  4. Getting chippy: More than 75% of stores I go into still have tape over the EMV chip slots on their payment terminals. While it seems merchants are tardy in getting their work done, it’s not always that they are dragging their feet – it may also be the card networks. It appears some merchants who are actively processing EMV cards are getting charged for fraud and chargeback fees because they have yet to complete a certification audit by the card networks. To reverse these charges, one supermarket chain filed suit, and is pushing for quick certification. The suit may halt the “liability shift” entirely, which has gotten the card brands’ attention. This entire game of “Pass The Liability” will continue to entertain us until we stop passing credit card numbers around. – AL

  5. Security faith healers: Adam Shostack posted an interesting piece at Dark Reading about how the concepts in The Gluten Lie apply to security. In a nutshell, the health industry has vilified gluten, but aside from people with legitimate celiac disease, the data doesn’t seem to support the general position that gluten is bad. Adam makes the analogy that telling people to be secure isn’t going to help. Nor is telling them not to do things (like surf pr0n). And folks should drop the fear-based marketing. Yeah, right. A lot of technology marketing is selling snake oil, and it’s as bad in security as anywhere else. But as long as a tactic works (including vilifying gluten to sell more gluten-free stuff), free market economics says it will continue to be used. Go figure. – MR

—Mike Rothman

Wednesday, March 23, 2016

Shadow Devices: The Exponentially Expanding Attack Surface [New Series]

By Mike Rothman

One of the challenges of being security professionals for decades is that we actually remember the olden days. You remember, when Internet-connected devices were PCs; then we got fancy and started issuing laptops. That’s what was connected to our networks. If you recall, life was simpler then. But we don’t have much time for nostalgia. We are too busy getting a handle on the explosion of devices connected to our networks, accessing our data.

Here is just a smattering of what we see:

  • Mobile devices: Supporting smartphones and tablets seems like old news, mostly because you can’t remember a time when they weren’t on your network. But despite their short history, their impact on mobile networking and security cannot be overstated. What’s more challenging is that these devices can connect directly to the cellular data network, which gives them a path around your security controls.
  • BYOD: Then someone decided it would be cheaper to have employees use their own devices, and Bring Your Own Device (BYOD) became a thing. You can have employees sign paperwork giving you the ability to control their devices and install software, but in practice they get (justifiably) very cranky when they cannot do something on their personal devices. So balancing the need to protect corporate data against antagonizing employees has been challenging.
  • Other office devices: Printers and scanners have been networked for years. But as more sophisticated imaging devices emerged, we realized their on-board computers and storage were insecure. They became targets, attacker beachheads.
  • Physical security devices: The new generation of physical security devices (cameras, access card readers, etc.) is largely network connected. It’s great that you can grant access to a locked-out employee, from your iPhone on the golf course, but much less fun when attackers grant themselves access.
  • Control systems and manufacturing equipment: The connected revolution has made its way to shop floors and facilities areas as well. Whether it’s a sensor collecting information from factory robots or warehousing systems, these devices are networked too, so they can be attacked. You may have heard of StuxNet targeting centrifuge control systems. Yep, that’s what we’re talking about.
  • Healthcare devices: If you go into any healthcare facility nowadays, monitoring devices and even some treatment devices are managed through network connections. There are jokes to be made about taking over shop floor robots and who cares. But if medical devices are attacked, the ramifications are significantly more severe.
  • Connected home: Whether it’s a thermostat, security system, or home automation platform – the expectation is that you will manage it from wherever you are. That means a network connection and access to the Intertubes. What could possibly go wrong?
  • Cars: Automobiles can now use either your smartphone connection or their own cellular link to connect to the Internet for traffic, music, news, and other services. They can transmit diagnostic information as well. All cool and shiny, but recent stunt hacking has proven a moving automobile can be attacked and controlled remotely. Again, what’s to worry?

There will be billions of devices connected to the Internet over the next few years. They all present attack surface. And you cannot fully know what is exploitable in your environment, because you don’t know about all your devices.

The industry wants to dump all these devices into a generic Internet of Things (IoT) bucket because IoT is the buzzword du jour. The latest Chicken Little poised to bring down the sky. It turns out the sky has already fallen – networks are already too vast to fully protect. The problem is getting worse by the day as pretty much anything with a chip in it gets networked. So instead of a manageable environment, you need to protect Everything Internet.

Anything with a network address can be attacked. Fortunately better fundamental architectures (especially for mobile devices) make it harder to compromise new devices than traditional PCs (whew!), but sophisticated attackers don’t seem to have trouble compromising any device they can reach. And that says nothing of devices whose vendors have paid little or no attention to security to date. Healthcare and control system vendors, we’re looking at you! They have porous defenses, if any, and once an attacker gains presence on the network, they have a bridgehead to work their way to their real targets.

In the Shadows

So what? You don’t even have medical devices or control systems – why would you care? The sad fact is that what you don’t see can hurt you. Your entire security program has been built to protect what you can see with traditional discovery and scanning technologies. The industry has maintained a very limited concept of what you should be looking for – largely because that’s all security scanners could see. The current state of affairs is you run scans every so often and see new devices emerge. You test them for configuration issues and vulnerabilities, and then add those issues to the end of an endless list of things you’ll never have time to finish.

Unfortunately visible devices are only a portion of the network-connected devices in your environment. There are hundreds, if not thousands, of other devices you don’t know about on your network. You don’t scan them periodically, and you have no idea about their security posture. Each of them can be attacked, and may provide an adversary a presence in your environment. Your attack surface is much larger than you thought.
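To make that gap concrete, here is a toy comparison of an asset inventory against what actually answers on the network – every address and device label below is made up for illustration, not output from any real discovery tool:

```python
# Devices your scanner and asset inventory know about (hypothetical).
known_inventory = {
    "10.0.1.10",  # file server
    "10.0.1.11",  # domain controller
    "10.0.1.50",  # employee laptop
}

# What a full sweep of the subnet might actually turn up (hypothetical).
observed_on_network = known_inventory | {
    "10.0.1.73",   # networked printer nobody registered
    "10.0.1.120",  # HVAC controller
    "10.0.1.121",  # IP camera
}

# The shadow devices are simply the set difference: answering on the
# network, but absent from everything you scan, patch, and monitor.
shadow_devices = observed_on_network - known_inventory
print(sorted(shadow_devices))
```

Every address in that difference is attack surface you were not accounting for, which is the whole point of the series.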

These shadow devices are infrequently discussed, and rarely factored into discovery and protection programs. It’s a big Don’t Ask, Don’t Tell approach, which never seems to work out well in the end.

We haven’t yet published anything on IoT devices (or Everything Internet), but it’s time. Not because we currently see many attacks in the wild. But most organizations we talk to are unprepared for when an attack happens, so they will scramble – as usual. We have espoused a visibility, then control approach to security for over a decade. Now it’s time to get a handle on the visibility of all devices on your network, so when you need to, you will know what you have to control. And how to control it.

So our newest series, entitled “Shining a Light on Shadow Devices,” will focus on the risks these devices present to organizations, and how to find them within your environment. We’d like to thank ForeScout Technologies for tentatively agreeing to license the content and, as always, we’ll write this series using our Totally Transparent Research methodology. This ensures you’ll see all of the research as it comes off the assembly line, and you can keep us honest by letting us know if we’re hitting the mark.

Our next post will dig into how these device classes can be attacked.

—Mike Rothman

Friday, March 18, 2016

Summary: Who pays who?

By Adrian Lane

Adrian here…

Apple buying space on Google’s cloud made news this week, as many people were surprised that Apple relies on others to provide cloud services, but they have been leveraging AWS and others for years. Our internal chat was alive with discussion about build vs. buy for different providers of cloud services. Perhaps a hundred or so companies have the scale to make a go of building from scratch at this point, and the odds of success for many of those are small. You need massive scale before the costs make it worth building your own – especially the custom engineering required to get equivalent hardware margins. That leaves a handful of firms who can make a go of this, and it’s still not always clear whether they should. Even Apple buys others’ services, and it usually makes good economic sense.

We did not really talk about RSA conference highlights, but the Rugged DevOps event (slides are up) was the highlight of RSAC week for me. The presentations were all thought-provoking. Concepts which were consistently reinforced included:

  • Constantly test, constantly improve
  • Without data you’re just another person with an opinion
  • Don’t update; dispose and improve
  • Micro-services and Docker containers are the basic building blocks for application development today

Micro-services make sense to me, and I have successfully used that design concept, but I have zero practical experience with Docker. Which is a shocker because it’s freakin’ everywhere, but I have never yet taken the time to learn. That stops this week. AWS and Azure both support it, and it’s embedded into Big Data frameworks as well, so it’s everywhere I want to be. I saw two vendor presentations on security concerns around Docker deployment models, and yeah, it scares me a bit. But Docker addresses the basic demand for easy updates, packaging, and accelerated deployment, so it stays. Security will iterate improvements to the model over time, as we usually do. DevOps doesn’t fix everything. That’s not me being a security curmudgeon – it’s me being excited by new technologies that let me get work done faster.

—Adrian Lane

Thursday, March 17, 2016

Building a Vendor IT Risk Management Program: Program Structure

By Mike Rothman

As we started exploring when we began Building a Vendor IT Risk Management Program, modern integrated business processes have dramatically expanded the attack surface of pretty much every organization. You can no longer ignore the risk presented by vendors or other business partners, even without regulatory bodies pushing for formal risk management of vendors and third parties. As security program fanatics we figure it’s time to start documenting such a program.

Defining a Program

First, we have never really defined what we mean by a security program. Our bad. So let’s get that down, and then we can tailor it to vendor IT risk management. The first thing a program needs is to be systematic, which means you don’t do things willy-nilly. You plan the work and then work the plan. The processes involved in the program need to be predictable and repeatable. Well, as predictable as anything in security can be. Here are some other hallmarks of a program:

  • Executive Sponsorship: Our research shows a program has a much higher chance of success if there is an executive (not the CISO) who feels accountable for its success. Inevitably security involves changing processes, and maybe not doing things business or other IT groups want because of excessive risk. Without empowerment to make those decisions and have them stick, most security programs die on the vine. A senior sponsor can break down walls and push through tough decisions, making the difference between success and failure.

  • Funding: Regardless of which aspect of security you are trying to systematize, it costs money. This contributes to another key reason programs fail: lack of resources. We also see a lot of organizations kickstart new programs by just throwing new responsibilities at existing employees, with no additional compensation or backfill for their otherwise overflowing plates. That’s not sustainable, so a key aspect of program establishment is allocating money to the initiative.

  • Governance: Who is responsible for operation of the program? Who makes decisions when it needs to evolve? What is the escalation path when someone doesn’t play nice or meet agreed-upon responsibilities? Without proper definition of responsibilities, and sufficient documentation so revisionist history isn’t a factor, the program won’t be sustainable. These roles need to be defined when the program is being formally established, because it’s much easier to make these decisions and get everyone on board before it goes live. If it does not go well people will run for cover, and if the program is a success everyone will want credit.

  • Operations: This will vary greatly between different kinds of programs, but you need to define how you will achieve your program goals. This is the ‘how’ of the program, and don’t forget about an ongoing feedback and improvement loop so the program continues to evolve.

  • Success criteria: In security this can be a bit slippery, but it’s hard to claim success without everyone agreeing what success means. Spend some time during program establishment to focus on applicable metrics, and be clear about what success looks like. Of course you can change your definition once you get going and learn what is realistic and necessary, but if you fail to establish it up front, you will have a hard time showing value.

  • Integration points: No program stands alone, so there will be integration points with other groups or functions within the organization. Maybe you need data feeds from the security monitoring group, or entitlements from the identity group. Maybe your program defines actions required from other groups. If the ultimate success of your program depends on other teams or functions within the organization (and it does, because security doesn’t stand alone), then making sure everyone is crystal clear about integration points and responsibilities from the beginning is critical.

The V(IT)RM Program

To tailor the generic structure above to vendor IT risk management you need to go through the list, make some decisions, and get everyone on board. Sounds easy, right? Not so much, but doing this kind of work now will save you from buying Tums by the case as your program goes operational.

We are not going to tell you exactly what governance and accountability need to look like for your program, because that is heavily dependent on your culture and organization. Just make sure someone is accountable, and operational responsibilities are defined. In some cases this kind of program resides within a business unit managing vendor relationships; other times it’s within a central risk management group, or somewhere else entirely. You need to figure out what will work in your environment.

One thing to pay close attention to, particularly for risk management, is contracts. You enter business agreements with vendors every day, so make sure the contract language reflects your program objectives. If you want to scan vendor environments for vulnerabilities, that needs to be in your contracts. If you want them to do an extensive self-survey or provide a data center tour, that needs to be there. If your contracts don’t include this kind of language, look at adding an addendum or forcing a contract overhaul at some point. That’s a decision for the business people running your vendors.

  • Defining Vendor Risk: The first key requirement of a vendor risk management program is actually defining categories in which to group your vendors. We will dig into this in our next post, but these categories define the basis for your operation of the entire program. You will need to categorize both vendors and the risks they present so you know what actions to take, depending on the importance of the vendor and the type of risk.

  • Operations: How will you evaluate the risk posed by each vendor? Where will you get the information and how will you analyze it? Do you reward organizations for top-tier security? What happens when a vendor is a flaming pile of IT security failure? Will you just talk to them and inform them of the issues? Will you lock them out of your systems? It will be controversial if you take a vendor off-line, so you need to have had all these discussions with all your stakeholders before any action takes place. Which is why we constantly beat the drum for documentation and consensus when establishing a program.

  • Success Criteria/Metrics: There is of course only one metric that is truly important, and that’s whether a breach resulted from a vendor connection. OK, maybe that’s a bit overstated, but that is what the Board of Directors will focus on. Success likely means no breaches due to vendor exposure. Operationally you can set metrics around the number of vendors assessed (100% may not be practical if you have thousands of vendors), or perhaps how many vendors are in each category, and what is the direction of the trend? There is only so much you can do to impact the security posture of your vendors, but you can certainly take action to protect yourself if a vendor is deemed to pose an unacceptable risk.

  • Tuning: In a V(IT)RM program, the most critical inputs are the categories of importance and risk. So when tuning the program over time, you want to know how many of your vendors were breached, and whether any of those breaches resulted in loss to you. If there was a breach, did you identify the risk ahead of time – basically having a good idea that vendor would have an issue? Or was it a surprise? The objectives of tuning are to eliminate surprises and wasted effort.
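As a sketch of how the categories described above might drive operational decisions, here is a toy version in Python. The tier names, thresholds, risk scores, and vendors are all invented for illustration – your own categories and actions come out of the stakeholder discussions above, not from this code:

```python
# Toy vendor categorization: combine business importance with an externally
# observed risk score to pick an action. All values are hypothetical.
def categorize(importance: str, risk_score: int) -> str:
    """importance: 'critical' or 'standard'; risk_score: 0 (clean) to 100 (bad)."""
    if importance == "critical" and risk_score >= 70:
        return "restrict access pending remediation"
    if risk_score >= 70:
        return "notify vendor and re-assess in 30 days"
    if importance == "critical":
        return "monitor continuously"
    return "annual self-assessment"

vendors = [
    ("payroll processor", "critical", 82),
    ("office supplies", "standard", 90),
    ("logistics partner", "critical", 25),
]
for name, importance, score in vendors:
    print(f"{name}: {categorize(importance, score)}")
```

The point of writing the logic down, even this crudely, is that the action for each (importance, risk) pair is agreed in advance – so nobody is improvising when a critical vendor turns up with a terrible score.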

Of course many aspects of the program, if not all, change over time as technology improves and requirements evolve. That is to be expected, and part of the program has to be a specific set of activities focused around gathering feedback and tuning the program as described above. We also believe strongly that programs need to be documented (yes, written down), so if (or should we say when) something goes south you have documentation that someone else understood the potential issues. Even if you write it in pencil, write it and make sure all of the stakeholders understand what they agreed to.

Our next post will dig into the risk and importance categories, and how to gather that kind of information without totally relying on vendor self-reporting.

—Mike Rothman

Wednesday, March 16, 2016

Firestarter: The Rugged vs. SecDevOps Smackdown

By Rich

After a short review of the RSA Security Conference, Rich, Mike, and Adrian debate the value of using labels like “Rugged DevOps” or “SecDevOps”. Rich sees them as different, Mike wonders if we really need them, and Adrian has been tracking their reception on the developer side of the house. Okay, it’s pathetic as smackdowns go, but you wouldn’t have read this far if we didn’t give it an interesting title.

Watch or listen:


Tuesday, March 15, 2016

Building a Vendor IT Risk Management Program: Understanding Vendor IT Risk

By Mike Rothman

Outsourcing is nothing new. Industries have been embracing service providers for functions they either couldn’t or didn’t want to perform for years. This necessarily involved integrating business systems and providing these third-party vendors with access to corporate networks and computer systems. The risk was generally deemed manageable and rationalized by the business need for those integrated processes. Until it wasn’t.

The post-mortem on a recent very high-profile data breach indicated the adversary got into the retailer’s network, not through their own systems, but instead through a trusted connection with a third-party vendor. Basically the attacker owned a small service provider, and used that connection to gain a foothold within the real target’s environment. The path of least resistance into your environment may no longer be through your front door. It might be through a back door (or window) you left open for a trading partner.

Business will continue to take place, and you will need to provide access to third parties. Saying ‘no’ is not an option. But you can no longer just ignore the risks vendors present. They dramatically expand your attack surface, which now includes the environments of all the third parties with access to your systems. Ugh.

This could be thousands of different vendors. No, we aren’t forgetting that most of you don’t have the skills or resources to stay on top of your own technology infrastructure – not to mention critical data moving to cloud resources. Now you also need to worry about all those other organizations you can neither control nor effectively influence. Horrifying.

This is when you expect Tom Cruise to show up, because this sounds like the plot to the latest Mission: Impossible sequel. But unfortunately this is your lot in life. Yet there is hope, because threat intelligence services can now evaluate the IT risk posed by your trading partners, without needing access to their networks.

In our new Building a Vendor Risk Management Program series we will go into why you can no longer ignore vendor risk, and how these services can actually pinpoint malicious activity on your vendors’ networks. But just having that information is (no surprise) not enough. To efficiently and effectively manage vendor risk you need a systematic program to evaluate dangers to your organization and objectively mitigate them.

We would like to thank our friends at BitSight Technologies, who have agreed to potentially license the content in this series upon completion. As always, we will write the series using our Totally Transparent Research methodology in a totally objective and balanced way.


You know something has been a problem for a while when regulators establish guidance to address it. Back in 2013 the regulators overseeing financial institutions in the US seemed to get religion about the need to assess and monitor vendor risk, and IT risk was a subset of the guidance they produced. Of course, as with most regulation, enforcement has been spotty, and the guidance doesn’t really offer a prescriptive description of what a ‘program’ consists of. It’s not like the 12 (relatively) detailed requirements you get with the PCI-DSS.

In general, the guidance covers some pretty straightforward concepts. First you should actually write down your risk management program, and then perform proper due diligence in selecting a third party. I guess you figure out what ‘proper’ means when the assessor shows up and lets you know that your approach was improper. Next you need to monitor vendors on an ongoing basis, and have contingency plans in case one screws up and you need to get out of the deal. Finally you need program oversight and documentation, so you can know your program is operational and effective. Not brain surgery, but also not very specific.

The most detail we have found comes from the OCC (Office of the Comptroller of the Currency), which recommends an assessment of each vendor’s security program in its Risk Management Guidance.

Information Security

Assess the third party’s information security program. Determine whether the third party has sufficient experience in identifying, assessing, and mitigating known and emerging threats and vulnerabilities. When technology is necessary to support service delivery, assess the third party’s infrastructure and application security programs, including the software development life cycle and results of vulnerability and penetration tests. Evaluate the third party’s ability to implement effective and sustainable corrective actions to address deficiencies discovered during testing.

No problem, right? Especially for those of you with hundreds (or even thousands) of vendors within the scope of assessment.

We’ll add our standard disclaimer here, that compliance doesn’t make you secure. It cannot make your vendors secure either. But it does give you a reason to allocate some funding to assessing your vendors and making sure you understand how they affect your attack surface and exploitability.

The Need for a Third-Party Risk Program

Our long-time readers won’t be surprised that we prescribe a program to address a security need. Managing vendor IT risk is no different. In order to achieve consistent results, and be able to answer your audit committee about vendor risk, you need a systematic approach to plan the work, and then work the plan.

Here are the key areas of the program we will dig into in this series:

  • Structuring the V(IT)RM Program: First we’ll sketch out a vendor risk management program, starting with executive sponsorship, and defining governance and policies that make sense for each type of vendor you are dealing with. In this step you will also define risk categories and establish guidelines for assigning vendors to each category.
  • Evaluating Vendor Risk: When assessing vendors you have limited information about their IT environments. This post will dig into how to balance the limitations of what vendors self-report against external information you can glean regarding their security posture and malicious activity.
  • Ongoing V(IT)R Monitoring and Communication: Once you have identified the vendors presenting the greatest risk, and taken initial action, how do you communicate your findings to vendors and internal management? This is especially important for vendors which present significant risk to your environment. Then you need to operationalize the program to systematically keep those evaluations current. Given the rapid pace of IT (and security) change, you cannot simply assume a vendor will stay at the same risk level for an extended period of time.
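To make the risk-category step concrete, here is a minimal sketch of assigning vendors to tiers. The tier names and yes/no criteria are hypothetical illustrations, not anything prescribed by the regulators or by this program – your governance policies would define their own.

```python
# Hypothetical vendor risk tiering. The criteria and tier names are
# invented for illustration; a real program defines its own guidelines.

def categorize_vendor(has_network_access, handles_sensitive_data, business_critical):
    """Assign a vendor to a risk tier based on simple yes/no criteria."""
    score = sum([has_network_access, handles_sensitive_data, business_critical])
    if score >= 2:
        return "high"      # frequent assessment, contractual security terms
    if score == 1:
        return "medium"    # periodic review
    return "low"           # basic due diligence only

# A vendor with network access to sensitive data lands in the high tier
print(categorize_vendor(True, True, False))  # prints "high"
```

In practice the criteria are richer (contract value, data volumes, regulatory scope), but the point stands: write the categorization rules down so they are applied consistently across hundreds of vendors.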

—Mike Rothman

Thursday, March 10, 2016

SIEM Kung Fu: Getting Started and Sustaining Value

By Mike Rothman

As we wrap up this series on SIEM Kung Fu, we have discussed SIEM Fundamentals and some advanced use cases to push your SIEM beyond its rather limited out-of-the-box capabilities. To make the technology more useful over time, you should revisit your SIEM operation process.

Many failed SIEM projects over the past 10 years have not been technology failures. More stumble over a lack of understanding of the time and resources needed to get value from a SIEM in early deployments, and of the ongoing effort required to keep it current and tuned. So a large part of SIEM Kung Fu is simply making sure you have the people and process in place to leverage the technology effectively and sustainably.

Getting Started

As a matter of practice you should be focused on getting quick value out of any new technology investment, and SIEM is no exception. Even if you have had the technology in place for years, it’s useful to take a fresh look at the implementation to see if you missed any low-hanging fruit that’s there for the taking. Let’s assume you already have the system up and running, are aggregating log and event sources (including things like vulnerability data and network flows), and have already implemented some out-of-the-box policies. You already have the system in place – you are just underutilizing it.


For a fresh look at SIEM we recommend you start with adversaries. We described adversary analysis in detail in the CISO’s Guide to Advanced Attackers (PDF). Start by determining who is most likely to attempt to compromise your environment, and define a likely attacker mission. Then profile potential adversaries to determine the groups most likely to attack you. At that point you can get a feel for the Tactics, Techniques, and Procedures (TTPs) those adversaries are most likely to use. This information typically comes from a threat intelligence service, although some information sharing groups can also offer technical indicators to focus on.

Armed with these indicators you engage your SIEM to search for them. This is a form of hunting, which we will detail later in this post, and you may well find evidence of active threat actors in your environment. This isn’t a great outcome for your organization, but it does prove the value of security monitoring.

At that point you can triage the alerts generated by your SIEM searches to figure out whether you are dealing with false positives or a full-blown incident. Among the millions of indicators you could search for, we suggest starting with the attacks of your most likely adversaries. If you search for anything and everything, odds are you’ll find lots of things; by initially focusing on adversaries you restrict your search to the attack patterns most likely to be used against you.
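The indicator search described above boils down to matching aggregated events against a list from your threat intel source. As a rough sketch (the indicator values, log records, and field names here are all hypothetical), it looks something like this:

```python
# Hypothetical IOC matching: check aggregated log events against a set of
# indicators provided by a threat intelligence feed. Data is invented.

KNOWN_BAD_IPS = {"203.0.113.45", "198.51.100.7"}  # from a threat intel feed

def find_ioc_hits(events, indicators):
    """Return events whose destination IP matches a known indicator."""
    return [e for e in events if e["dst_ip"] in indicators]

events = [
    {"host": "web01", "dst_ip": "192.0.2.10"},
    {"host": "hr-laptop", "dst_ip": "203.0.113.45"},  # matches an indicator
]
hits = find_ioc_hits(events, KNOWN_BAD_IPS)
print(hits)  # only the hr-laptop event matches
```

A real SIEM does this at scale over months of data and across indicator types (hashes, domains, URLs), but every hit is a candidate for exactly the triage described above.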

Two Tracks

Once you have picked the low-hanging fruit from adversary analysis, focus shifts toward putting advanced use cases into a systematic process that is consistent and repeatable. Let’s break up the world into two main categories of SIEM operations to describe the different usage models: reactive and proactive.


Reactive usage of SIEM should be familiar, because that’s how most security teams function: the alert/triage/respond cycle. The SIEM fires an alert, your tier 1 analysts figure out whether it’s legitimate, and then you figure out how to respond – typically via escalation to tier 2. You can do a lot to refine this process as well, so even if you are reacting you can do it more efficiently. Here are a few tips:

  1. Leverage Threat Intel: As we described above under adversary analysis, and in our previous post, you can benefit from the misfortune of others by integrating threat intelligence into your SIEM searches. If you see evidence of a recent attack pattern (provided by threat intel) within your environment, you can get ahead of it. We described this in our Leveraging Threat Intel in Security Monitoring paper. Use it – it works.
  2. User Behavioral Analytics (UBA): You can also figure out the relative severity of a situation by tracking the attack to user activity. This involves monitoring activity (and establishing the baselines/profiles described in our last post) not just by device, but also aggregating data and profiling activity for individuals. For example, instead of just monitoring the CEO’s computer, tablet, and smartphone independently, you can look at all three devices to establish a broader profile of the CEO’s activity. Then if you see any of her devices acting outside that baseline, that would trigger an alert you can triage/investigate.
    1. Insider Threat: You can also optimize some of your SIEM rules around insiders. During many attacks an adversary eventually gains a foothold in your environment and becomes an insider. You can optimize your SIEM rules to look for activity specifically targeting things you know would be valuable to insiders, such as sensitive data (both structured and unstructured). UBA is also useful here because you are profiling an insider, and can watch for them performing strange reconnaissance or moving an uncharacteristically large amount of data.
    2. Threat Modeling: Yes, advanced SIEM users still work through the process of looking at specific, high-value technology assets and figuring out the best ways to compromise them. This is predominantly used in the “external stack attack” use case described last post. By analyzing the ways to break an application (or technology stack), SOC analysts can build SIEM rules from those attack patterns, to detect evidence an asset is being targeted.
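The UBA idea above – aggregate one user’s activity across all their devices, then flag sharp deviations from that user’s own baseline – can be sketched in a few lines. The daily byte counts and the three-sigma threshold here are hypothetical; real products use far richer profiles.

```python
# Simplified UBA sketch: flag a day whose aggregate data volume sits far
# outside the user's own baseline. Numbers and threshold are hypothetical.

from statistics import mean, stdev

def is_anomalous(daily_bytes_history, today_bytes, sigmas=3):
    """True if today's combined volume deviates beyond `sigmas` std devs."""
    mu, sd = mean(daily_bytes_history), stdev(daily_bytes_history)
    return abs(today_bytes - mu) > sigmas * sd

# CEO's combined laptop + tablet + phone outbound bytes per day (baseline)
baseline = [120e6, 135e6, 110e6, 128e6, 140e6, 125e6, 131e6]
print(is_anomalous(baseline, 900e6))  # roughly 7x normal volume: True
print(is_anomalous(baseline, 130e6))  # within the normal range: False
```

The hard part isn’t the math – it’s attributing activity on three different devices back to the same person, which is exactly why the data aggregation described above matters.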

Keep in mind that you need to regularly review your SIEM ruleset, add new attack patterns/use cases, and prune rules that are no longer relevant. The size of your ruleset affects the performance and responsiveness of your SIEM, so you need to balance looking for everything (and crushing the system) against your chance of missing something.

This is a key part of the ongoing maintenance required to keep your SIEM relevant and valuable. Whether new rules come from a threat intelligence vendor, drinking buddies, or conferences, they require time to refine thresholds and determine relevance to your organization. So we reiterate that SIEM is not a “set it and forget it” technology – no security analytics tool is. Anyone telling you different is selling you a bill of goods.


Before we dive into the concept of proactivity we need to spend a minute on our soapbox about the general idea of “getting ahead of the threat.” We (still) don’t believe you can get ahead of threats or detect zero-day attacks, or any other such security marketing nonsense. What you can do is shorten the window between when you are attacked and when you know about it. So that’s the main objective of SIEM Kung Fu: to shorten this window by whatever means you can.

The reactive approach is to set SIEM rules to fire alerts based on certain conditions, and then react to them. The proactive approach is to task a human with trying to find situations where attacks are happening, which wouldn’t trigger an alert. These folks like to be called hunters, which sounds much better than “SOC analyst”.

Why wouldn’t the SIEM alert have fired already? Maybe the attack is just beginning. Maybe the adversary is just doing recon and mostly hiding to evade detection. Whatever the cause, the rules you set in the SIEM haven’t triggered yet, and a skilled human may be able to find evidence of the attack before your monitoring tools.

The hunter’s tool set is more about threat intel, effective search and analytics, and a lot of instinct for what attackers will do, than a flexible SIEM rules engine. For example a hunter might see that a recent business partnership your company announced has irritated factions in Eastern Europe. So the hunter does a little research and finds that attackers in that region are using a new method of compromising recent versions of Windows: gaining kernel access and then replacing system files. Then the hunter searches to see whether egress traffic is headed to known C&C channels used by those groups, and also searches your endpoint telemetry for instances where that system file was changed recently. Of course you could set a rule to look for this activity moving forward, but the hunter is able to mine existing security data for that set of conditions to see if an attack has already happened.
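That hunt combines two independent searches and looks for hosts that trip both. As a sketch, assuming hypothetical flow records, endpoint events, a made-up C&C address, and an invented file path:

```python
# Hypothetical hunt: intersect hosts that talked to known C&C destinations
# with hosts where a watched system file was recently modified. All
# addresses, hostnames, and paths are invented for illustration.

C2_IPS = {"198.51.100.23"}                      # C&C tied to the group
WATCHED_FILE = "C:\\Windows\\System32\\lsasrv.dll"

def hunt(flows, endpoint_events):
    """Return hosts matching BOTH conditions - strong candidates to investigate."""
    talked_to_c2 = {f["src"] for f in flows if f["dst"] in C2_IPS}
    changed_file = {e["host"] for e in endpoint_events
                    if e["action"] == "file_modified" and e["path"] == WATCHED_FILE}
    return talked_to_c2 & changed_file

flows = [{"src": "wks-042", "dst": "198.51.100.23"}]
endpoint_events = [{"host": "wks-042", "action": "file_modified", "path": WATCHED_FILE}]
print(hunt(flows, endpoint_events))  # wks-042 warrants a closer look
```

Either condition alone might be noise; a host that matches both is worth a hunter’s time. This is the same multiple-data-point logic that reduces false positives in the reactive track.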

As another example a hunter might go to a security conference and learn about a new technique to overflow memory. After playing around with it in your lab, the hunter knows what to look for on each endpoint. Then a search can be initiated for that activity, even though there hasn’t been evidence of that technique being used in the wild yet. Hunters have great leeway to follow their instincts, and SIEM tools need to offer flexible (and fast) enough search to find strings and pull on them.

SIEM Kung Fu for hunters is about giving them the platform to do their job. They are skilled professionals, with their own tools for when they really want to dig into a device or attack. But a SIEM can be very useful for helping them narrow their focus to devices that require more investigation and provide a means to analyze patterns of activity that could yield clues to which threats are active.

Whether you are implementing a set of SIEM rules to react to attacks or giving a set of hunters the ability to identify potential compromises in your environment, your security monitoring platform can be leveraged to enable faster detection and triage. And when you are racing an active adversary, time is not your friend.

With that, we wrap up our SIEM Kung Fu series. We will assemble these posts into a paper over the next week, so if you have any comments or feedback about the research, let us know in the comments, via the Tweeter (@securosis) or via email.

—Mike Rothman

Wednesday, March 09, 2016

Incite 3/9/2016: Star Lord

By Mike Rothman

Everything is a game nowadays. Not like Words with Friends (why yes, since you ask – I do enjoy getting my ass kicked by the women in my life) or even Madden Mobile (which the Boy plays constantly) – I’m talking about gamification. In our security world, the idea is that rank and file employees will actually pay attention to security stuff they don’t give a rat’s ass about… if you make it all into a game. So get departments to compete for who can do best in the phishing simulation. Or give a bounty to the team with the fewest device compromises due to surfing pr0n. Actually, though, it might be more fun to post the link that compromised the machine in the first place. The employee with the nastiest NSFW link would win. And get fired… But I digress.

I find that I do play these games. But not on my own device. I’m kind of obsessed with Starbucks’ loyalty program. If you accumulate 12 stars you get a free drink. It’s a great deal for me. I get a large brewed coffee most days. I don’t buy expensive lattes, and I get the same star for every drink I buy. And if I have the kids with me, I’ll perform 3 or 4 different transactions, so I can get multiple stars. When I get my reward drink, I get a 7 shot Mocha. Yes, 7 shots. I’m a lot of fun in the two hours after I drink my reward.

And then Starbucks sends out promotions. For a while, if you ordered a drink through their mobile app, you’d get an extra star. So I did. I’d sit in their store, bust open my phone, order the drink, and then walk up to the counter and get it. Win! Extra star! Sometimes they’d offer 3 extra stars if you bought a latte drink, an iced coffee, and a breakfast sandwich within a 3-day period. Well, a guy’s gotta eat, right? And I was ordering the iced coffee anyway in the summer. Win! Three bonus stars. Sometimes they’d send a request for a survey and give me a bunch of stars for filling it out. Win! I might even be honest on the survey… but probably not. As long as I get my stars, I’m good.

Yes, I’m gaming the system for my stars. And I have two reward drinks waiting for me, so evidently it’s working. I’m going to be in Starbucks anyway, and drinking coffee anyway – I might as well optimize for free drinks.

star lord

Oh crap, what the hell have I become? A star whore? Ugh. Let’s flip that perspective. I’m the Star Lord. Yes! I like that. Who wants to be Groot?

Pretty much every loyalty program gets gamed. If you travel like I do, you have done the Dec 30 or 31 mileage run to make the next level in a program. You stay in a crappy Marriott 20 miles away from your meeting, instead of the awesome hotel right next to the client’s office. Just to get the extra night. You do it. Everyone does.

And now it’s a cat and mouse game. The airlines change their programs every 2-3 years, to force customers to find new ways to optimize mileage accumulation. Starbucks is changing their program to reward customers based on what they spend. The nerve of them. Now it will take twice as long to get my reward drinks. Until I figure out how to game this version of the program. And I will, because to me gaming their game is the game.


Photo credit: “Star-Lord ord” from Dex

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

Incite 4 U

  1. An expensive lie: Many organizations don’t really take security seriously. It has never been proven that breaches cause lost business (correlation is not causation), nor have compliance penalties been sufficient to spur action. Is that changing? Maybe. You can see a small payment processor like Dwolla getting fined $100K for falsely claiming that “information is securely encrypted and stored”. Is $100K enough? Does it need to be $100MM? I don’t know, but at some point these regulations should have enough teeth that companies start to take them seriously. But you have to wonder, if a fintech start-up isn’t “securely encrypting and storing” customer data, what the hell are they doing? – MR

  2. Payment tokens for you and me: NFC World is reporting that Visa will retire alternate PANs issued to host card emulators for mobile payments, without giving an actual EOL date. We have been unable to verify this announcement, but it’s not surprising because that specification is at odds with EMVco’s PAR tokenization approach, which we discussed last year – which is leveraged by ApplePay, SamsungPay, and others. This is pretty much the end of host card emulation and any lingering telco secure element payment schemes. What is surprising many people is the fact that, if you read Visa and Mastercard’s recent announcements, they are both positioning themselves as cloud-based security vendors – offering solutions for identity and payment in cars, wearables, and other mobile devices. Visa’s Tokenization Services, Mastercard’s tokens, and several payment wallets all leverage PAR tokens provided by various Tokenization-as-a-Service offerings. And issuing banks are buying this service as well! For security and compliance folks this is good news, because the faster this conversion happens, the faster the enterprise can get rid of credit cards. And once those are gone, so too are all the supporting security functions you need to manage. Security vendors, take note: you have new competitors in mobile device security services. – AL

  3. Well, at least the pace of tech innovation is slowing… I can do nothing but laugh at the state of security compliance. The initiative that actually provided enough detail to help lost organizations move forward, the PCI-DSS, is evidently now very mature. So mature that they don’t need another major update. Only minor updates, with long windows to implement them, because… well, just because. These retailers are big and they move slowly. But attackers move and innovate fast. So keeping our current low bar forever seems idiotic. Attackers are getting better, so we need to keep raising the bar, and I don’t know how that will happen now. I guess it will take another wave of retailer hacks to shake things up again. Sad. – MR

  4. No need to encrypt: Future Tense captures the essence of Amazon’s removal of encryption from Fire devices: Inexpensive parts, like weak processors, would be significantly burdened when local encryption was on, and everything would slow down. This is not about bowing to federal pressure – it is cost-cutting on a money-losing device. And let’s be honest – these are not corporate devices, and no one reading this allows Amazon Fires onto their business networks. Not every mobile device deserves security hardening. Most people have a handful of devices with throw-away data, and convenience devices need very little security. The handful of people I know with Kindle or Fire devices consider them mobile infotainment systems – the only data on the device is a Gmail account, which has already been hacked, and the content they bought from Amazon. Let’s pick our battles. – AL

  5. I don’t get it, but QUANTUM! I wish I knew more about things like quantum computing, and that I had time to read papers and the like to get informed. Evidently progress is being made on new quantum computing techniques that will make current encryption obsolete. Now they have a 5-atom quantum computer. I have no idea what that even means, but it sounds cool. Is it going to happen tomorrow? Nope. I won’t be able to get a quantum computer from Amazon for a while, but the promise of these new technologies to upend the way we have always done things is a useful reminder. Don’t get attached to anything. Certainly not technology, because it’s not going to be around for long. Whichever technology we’re talking about. – MR

—Mike Rothman

Tuesday, March 08, 2016

SIEM Kung Fu: Advanced Use Cases

By Mike Rothman

Given the advance of SIEM technology, the use cases described in the first post of our SIEM Kung Fu series are very achievable. But with the advent of more packaged attack kits leveraged by better organized (and funded) adversaries, and the insider threat, you need to go well beyond what comes out of the [SIEM] box, and what can be deployed during a one-week PoC, to detect real advanced attacks.

So as we dig into more advanced use cases we will tackle how to optimize your SIEM to both a) detect advanced attacks and b) track user activity, to identify possible malicious insider behavior. There is significant overlap between these two use cases. Ultimately, in almost every successful attack, the adversary gains presence on the network and therefore is technically an insider. But let’s take adversaries out of play here, because in terms of detection, whether the actor is external or internal to your organization doesn’t matter. They want to get your stuff.

So we’ll break up the advanced use cases by target. One path attacks the application stack directly (from the outside), establishing a direct path to the data center without requiring any lateral movement to achieve the mission. The other path compromises devices (typically through an employee), escalates privileges, and moves laterally to achieve the mission. Both can be detected by a properly utilized SIEM.

Attacking Employees

The most prominent attack vector we see in practice today is the advanced attack, which is also known as an APT or a kill chain, among other terms. But regardless of what you call it, this is a process which involves an employee device being compromised, and then used as a launching point to systematically move deeper within an organization – to find, access, and exfiltrate critical information. Detecting this kind of attack requires looking for anomalous behavior at a variety of levels within the environment. Fortunately employees (and their devices) should be reasonably predictable in what they do, which resources they access, and their daily traffic patterns.

In a typical device-centric attack an adversary follows a predictable lifecycle: perform reconnaissance, send an exploit to the device, and escalate privileges, then use that device as a base for more reconnaissance, more exploits, and to burrow further into the environment. We have spent a lot of time on how threat detection needs to evolve and how to catch these attacks using network-based telemetry.

Leveraging your SIEM to find these attacks is similar; it involves understanding the trail the adversary leaves, the resulting data you can analyze, and patterns to look for. An attacker’s trail is based specifically on change. During any attack the adversary changes something on the device being attacked. Whether it’s the device configuration, creating new user accounts, increasing account privileges, or just unusual traffic flows, the SIEM has access to all this data to detect attacks.

Initial usage of SIEM technology was entirely dependent on infrastructure logs, such as those from network and security devices. That made sense because SIEM was initially deployed to stem the flow of alerts streaming in from firewalls, IDS, and other network security devices. But that offered a very limited view of activity and eventually became easy for adversaries to evade. So over the past decade many additional data sources have been integrated into the SIEM to provide a much broader view of your environment.

  • Endpoint Telemetry: Endpoint detection has become very shiny in security circles. There is a ton of interest in doing forensics on endpoints, and if you are trying to figure out how the proverbial horse left the barn, endpoint telemetry is great. Another view is that devices are targeted in virtually every attack, so highly detailed data about exactly what’s happening on an endpoint is critical – not just to incident response, but also to detection. And this data (or the associated metadata) can be instrumental when watching for the kind of change that may indicate an active threat actor.
  • Identity Information: Inevitably, once an adversary has presence in your environment, they will go after your identity infrastructure, because that is usually the path of least resistance for access to valuable data. So you need access to identity stores; watch for new account creation and new privilege entitlements, which are both likely to identify attacks in process.
  • Network Flows: The next step in the attack is to move laterally within the environment, and move data around. This leaves a trail on the network that can be detected by tracking network flows. Of course full packet capture provides the same information and more granularity, with a greater demand for data collection and analytics.
  • Threat Intelligence: Finally, you can leverage external threat data and IP reputation to pinpoint egress network traffic that may be headed places you know are bad. Exfiltration now typically includes proprietary encryption, so you aren’t likely to catch the act through content analysis; instead you need to track where data is headed. You can also use threat intelligence indicators to watch for specific new attacks in your environment, as we have discussed ad nauseam in our threat intelligence and security monitoring research.

The key to using this data to find advanced attacks is to establish a profile of what’s normal within your environment, and then look for anomalous activity. We know anomaly detection has been under discussion in security circles for decades, but it is still one of the top ways to figure out when attackers are doing their thing in your environment. Keeping your baseline current and minimizing false positives are key to making a SIEM useful for this use case, and that requires ongoing effort and tuning. No security monitoring tool just works – so go in with your eyes open regarding the amount of work required.
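One simple way to express the baseline-then-anomaly idea is over network flows: learn which destination ports each device normally uses, then flag flows to ports never seen during the baseline window. The flow records and hostnames below are hypothetical, and real baselining is far richer, but the shape of the approach is the same.

```python
# Hypothetical baseline/anomaly sketch over flow records. A real SIEM
# profiles many attributes; here we track only destination ports per host.

from collections import defaultdict

def build_baseline(flows):
    """Record the set of destination ports each host used during training."""
    seen = defaultdict(set)
    for f in flows:
        seen[f["host"]].add(f["dst_port"])
    return seen

def anomalies(baseline, new_flows):
    """Return flows whose destination port is new for that host."""
    return [f for f in new_flows
            if f["dst_port"] not in baseline.get(f["host"], set())]

history = [{"host": "db01", "dst_port": 5432}, {"host": "db01", "dst_port": 443}]
base = build_baseline(history)
print(anomalies(base, [{"host": "db01", "dst_port": 6667}]))  # new port flagged
```

Note that the baseline has to be rebuilt continually as the environment changes; a stale baseline is precisely where the false positives, and the tuning effort, come from.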

Multiple data points

Speaking of minimizing false positives, how can you do that? More SIEM projects fail due to alert exhaustion than for any other reason, so don’t rely on any single data point to produce a verdict that an alert is legitimate and demands investigation. Reduction of false positives is even more critical because of the skills gap which continues to flummox security professionals. Using a SIEM you can link together seemingly disconnected data sources to validate alerts and make sure the alarm is sounded only when it should be.

But what does that look like in practice? You need to make sure a variety of conditions are matched before an alert fires. And increase the urgency of an alert according to the number of conditions triggered. This simplified example illustrates what you can do with the SIEM you likely already have.

  1. Look for device changes: If a device suddenly registers a bunch of new system files installed, and you aren’t in the middle of a patch cycle, there may be something going on. Is that enough to pull the alarm? Probably not yet.
  2. Track identity: Next you’ll see a bunch of new accounts appear on the device, and then you’ll see the domain controller targeted for compromise. Once the domain controller falls, it’s pretty much game over, because the adversary can then set up new accounts and change entitlements; so tracking the identity infrastructure is essential.
  3. Look for internal reconnaissance: Finally you’ll see the compromised device scanning everything else on the network, both so the attacker can gain his/her bearings, and also for additional devices to compromise. Traffic on internal network segments should be pretty predictable, so variations from typical traffic flows usually indicate something funky.

But do any of these data points alone indicate an attack? Probably not. But if you see multiple indicators at the same time, odds are that’s not great for you.
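Escalating urgency with the number of corroborating conditions can be sketched very simply. The indicator names and severity labels below are hypothetical illustrations of the approach, not rules shipped by any SIEM vendor:

```python
# Hypothetical multi-condition alerting: the more of the three indicators
# above that trigger together, the more urgent the alert becomes.

def alert_severity(indicators):
    """Map the count of triggered conditions to an alert urgency."""
    triggered = sum(indicators.values())
    if triggered >= 3:
        return "critical"  # device change + new accounts + internal recon
    if triggered == 2:
        return "high"
    if triggered == 1:
        return "watch"     # log it, but don't pull the alarm yet
    return "none"

print(alert_severity({"device_change": True,
                      "new_accounts": True,
                      "internal_recon": True}))  # prints "critical"
```

Production correlation rules also weight conditions and add time windows (the three steps happen in sequence, not simultaneously), but requiring corroboration before firing is the core of keeping tier 1 analysts out of alert exhaustion.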

Modern SIEMs come with a variety of rules or policies to look for common attack patterns. They are helpful for getting started, and increasing use of data analytics will help you refine your thresholds for alerts, increasing accuracy and reducing false alarms.

Application Stack Attacks

We alluded to this above, but to us an “application stack attack” is not just a cute rhyme, but how a sophisticated adversary takes advantage of weaknesses within an application or another part of an application stack, to gain a foothold in your environment to access data of interest. There are a number of application stack data sources you can pump into a SIEM to look for attacks on the application. These include:

  • Machine Data: The first step in monitoring applications is to instrument them to generate “machine data”. This could be information on different transaction types, login failures, search activity, or almost anything else an attacker might target. Determining how and where to instrument an application involves threat modeling it to make sure the necessary hooks are built into the app. The good news is that as more and more applications move to SaaS environments, a lot of this instrumentation is there from the start. But with SaaS you get what you get, and don’t have much influence on which information is available.
  • APIs: Applications are increasingly composed of a variety of components, residing in a variety of different places (both inside and outside your environment), so watching API traffic has become key. We have researched API security, so refer back to that paper for specifics about authentication and authorizing specific API calls. You will want to track API usage and activity to profile normal activity for the application, and then start looking for anomalies.
  • Database Tier: This last part of the application stack is where the valuable stuff lives. Once an attacker has presence in the database tier, it is usually trivial to access other database tables and reach the stuff they are looking for. So ingest any database activity logs or monitors available, and watch for triggers.
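At its simplest, the machine-data instrumentation described above means emitting one structured event per security-relevant action in a form a SIEM collector can parse. The event names and fields below are illustrative, not a standard:

```python
# Hypothetical application instrumentation: emit one JSON event per line
# for security-relevant actions. Event types and fields are invented.

import json
import logging
import sys

log = logging.getLogger("app.security")
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")

def emit(event_type, **fields):
    """Serialize and log one event; return the line for convenience."""
    line = json.dumps({"event": event_type, **fields})
    log.info(line)
    return line

emit("login_failure", user="alice", src_ip="192.0.2.10", reason="bad_password")
emit("transaction", user="bob", type="wire_transfer", amount=25000)
```

One JSON object per line keeps collection trivial; the harder work is the threat modeling that decides which events are worth emitting in the first place.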

Each application is unique (like a snowflake!) so you won’t be able to get prebuilt rules and policies from your SIEM provider. You need to look at each application to monitor and profile it, building rules and tuning thresholds for the specific application. This is why most organizations don’t monitor their applications to any significant degree… And also why they miss attacks which don’t involve traditional malware or obvious attack patterns.

Developer Resistance

Collecting sufficient machine data from applications isn’t something most developers are excited about. Applications have historically not been built with instrumentation in mind, and retrofitting instrumentation into an app is more delicate plumbing than designing cool new features. We all know how much developers love to update plumbing. You may need to call for senior management air cover, in the form of a mandate, to get the instrumentation you need into the application. You can only request air support a limited number of times, so make sure the application is sufficiently important first.

More good news: as new applications are deployed using modern development techniques (including DevOps and Continuous Deployment), security is increasingly being built into the stack at a fundamental level. Once the right instrumentation is in the stack, you can stop fighting to retrofit it.

Purpose-built Tools

You are likely to be approached by a variety of new security companies, offering security analytics products better at finding advanced attacks. Do you need yet another tool? Shouldn’t you be able to do this within your SIEM?

The answer is: it depends. Analytics platforms built around a specific use case, like APT or the insider threat, are optimized for a very specific problem. The vendor should know what data is required, where to get it, and how to tune their analytics engine to solve the specific problem. A more general-purpose SIEM cannot be as tuned to solve that specific problem. Your vendor can certainly provide some guidance, and maybe even some pre-packaged correlation rules, but more work will still be required to configure the use case and tune the tool.

On the other hand a security analytics platform is not designed around a SIEM’s other uses. It cannot help you prepare for an audit by generating reports pertinent to the assessment. It won’t offer much in the way of forensics and investigation. These analytics tools just weren’t built to do that, so you’ll still need your SIEM – which means you’ll have two (or more) products for security monitoring; with all the associated purchase, maintenance, and operational costs.

Now that you understand a bit more about how to use a SIEM to address advanced use cases, you need to be able to use your newfound SIEM Kung Fu consistently and systematically. So it’s time to revisit your process in order to factor in the requirements for these advanced use cases. We’ll discuss that in our next post.

—Mike Rothman

Monday, February 29, 2016

Incite 2/29/2016: Leap Day

By Mike Rothman

Today is leap day, the last day of February in a leap year. That means the month of February has 29 days. It happens once every 4 years. I have one friend (who I know of) with a birthday on Leap Day. That must have been cool. You feel very special every four years. And you just jump on the Feb 28 bandwagon to celebrate your birthday in non-leap years. Win/win.

The idea of a four-year cycle made me curious. What was I doing during leap day in 2012? Turns out I was doing the same thing I’ll be doing today – running between meetings at the RSA Conference. This year, leap day is on Monday, and that’s the day I usually spend at the America’s Growth Capital Conference, networking with CEOs and investors. It’s a great way to take the temperature of the money side of the security industry. And I love to moderate the panels, facilitating debate between leaders of the security industry. Maybe I’ll even interject an opinion or two during the event. That’s been known to happen.

leap day

Then I started looking back at my other calendar entries for 2012. The boy was playing baseball. Wow, that seems like a long time ago, since it feels like he’s been playing lacrosse forever. The girls were dancing, and they had weekend practices getting ready for their June Disney trip. XX1 was getting ready for her middle school orientation. Now she’s in high school. The 4 years represent less than 10% of my life. But a full third of the twins’ existence. That’s a strange thought.

And have I made progress professionally? I think so. Our business has grown. We’ll probably have three times the number of people at the Disaster Recovery Breakfast, if that’s any measure of success. The cloud security work we do barely provided beer money in 2012, and now it’s the future of Securosis. I’ve deepened relationships with some clients and stopped working with others. Many of my friends have moved to different gigs. But overall I’m happy with my professional progress.

Personally I’m a fundamentally different person. I have described a lot of my transformation here in the Incite, or at least its results. I view the world differently now. I was figuring out which mindfulness practices worked for me back in 2012. That was also the beginning of a multi-year process to evaluate who I was and what changes I needed for the next phase of my life. Over the past four years, I have done a lot of work personally and made those changes. I couldn’t be happier with the trajectory of my life right now.

So this week I’m going to celebrate with many close friends. Security is what I do, and this week is one of the times we assemble en masse. What’s not to love? Even cooler is that I have no idea what I’ll be writing about in 2020.

My future is unwritten, and that’s very exciting. I do know that by the next time a leap year comes along, XX1 will be midway through college. The twins will be driving (oy, my insurance bill!). And in all likelihood, I’ll be at the RSA Conference hanging out with my friends at the W, waiting patiently for a drink. Most things change, but some stuff stays the same. And there is comfort in that.


Photo credit: “60:366” from chrisjtse

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), along with deep dives into cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the post or download the guide directly (PDF).

It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

Incite 4 U

  1. Phisherman’s dream: Brian Krebs has written a lot about small and mid-sized companies being targets for scammers over the last couple years, with both significant financial losses directly from fraud, and indirectly from the ensuing court battles about who ends up paying the bill. Through friends and family, we have been hearing a lot more about this in relation to real estate transactions, captured in a recent article from the Arizona Association of Realtors, Hackers Perpetuate Wire Transfer Fraud Scams. Hacking the buyers, mortgage brokers, and title companies, scammers are able to both propel a transaction forward through fake authorizations, and direct funds to the wrong accounts. And once one party is compromised it’s fairly easy to get the other parties too, meaning much of the process can be orchestrated remotely. What’s particularly insidious is that these attacks naturally lead all parties into making major security misjudgments. You trust the emails because they look like they are coming from people you are waiting to hear from, with content you want to see. The result is large sums of money willingly transferred to the wrong accounts; with buyers, sellers, agents, banks, and mortgage brokers all fighting to clean up the mess. – AL

  2. EMET and the reality of software: This recent story about a defect in Microsoft’s EMET, which allows attackers to basically turn it off, presents an opportunity to highlight a number of things. First, all software has bugs. Period. This bug, found by the folks at FireEye, turns EMET against itself. It’s code. It’s complicated. And that means there will be issues. No software is secure. Even the stuff that’s supposed to secure us. But EMET is awesome and free. So use it. The other big takeaway from this is the importance of timely patching. Microsoft fixed this issue last Patch Tuesday, Feb 2. It’s critical to keep devices up to date. I know it’s hard and you have a lot of devices. Do it anyway. It’s one of the best ways to reduce attack surface. – MR

  3. My list: On the Veracode blog Jeff Cratty explains to security pros the 5 things I need from you. Discussions like this are really helpful for security people trying to work with developers. Understanding the challenges and priorities each side faces every day makes working together a hell of a lot easier. Empathy FTW. I like Jeff’s list, but I could narrow down mine to two things. First, get me the “air cover” I need to prioritize security over features. Without empowerment by senior management, security issues will never get worked on. DevOps and continuous integration have been great in this regard as teams – for the first time ever – prioritize infrastructure over features, but someone needs to help get security onto the queue. Second, tell me the threats I should really worry about, and get me a list of suitable responses so I can choose what is best for our application stack and deployment model. There are usually many ways to address a specific risk, and I want options, not mandates. – AL

  4. Cutting through the fog of endpoint security marketing: If you are considering updating your endpoint protection (as you should be), Lenny Zeltser offers a great post on questions to ask an endpoint security startup. It’s basically a primer to make any new generation endpoint security player educate you on why and how they are different. They’ll say, “we use math,” like that’s novel. Or “we leverage the cloud” – ho hum. Maybe they’ll drop “deep forensics” nonsense on you. Not that any of those things are false. But it’s really about understanding how they are different. Not just from traditional endpoint protection, but also from the dozens of other new endpoint security players. Great job, Lenny. It’s hard to separate marketing fiction from fact in early markets. Ask these questions to start figuring it out. And make sure your BS detector is working – you’ll need it. – MR

—Mike Rothman

Friday, February 26, 2016

Summary: The Cloud Horizon

By Adrian Lane

Two weeks ago Rich sketched out some changes to our Friday Summary, including how the content will change. But we haven’t spelled out our reasons. Our motivation is simple. In a decade, over half your systems will be in some cloud somewhere. The Summary will still be about security, but we’ll focus on security for cloud services, cloud applications, and how DevOps techniques intertwine with each. Rather than rehash on-premise security issues we have covered (ad nauseam) for 9 years, we believe it’s far more helpful to IT and security folks to discuss what is on the near horizon which they are not already familiar with. We can say with certainty that most of what you’ve learned about “the right way to do things” in security will be challenged by cloud deployments, so we are tuning the Summary to increase understanding of the changes in store, and what to do about them. Trends, features, tools, and even some code. We know it’s not for everybody, but if you’re seriously interested, you can subscribe directly to the Friday Summary.

The RSA conference is next week, so don’t forget to get a copy of Securosis’s Guide to the RSA Conference. But be warned; Mike’s been at the meme generator again, and some things you just can’t unsee. Oh, and if you’re interested in attending the Eighth Annual Securosis Disaster Recovery Breakfast at RSA, please RSVP. That way we know how much bacon to order. Or Bloody Marys to make. Something like that.

Top Posts for the Week

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, so if you have submissions please email them to info@securosis.com.

Alerts literally drive DevOps. An alert may fire off a cloud-based service, or it might indicate a failure a human needs to look at. When putting together a continuous integration pipeline, or processing cloud services, how do you communicate status? SMS and email are the common output formats, and developer tools like Slack or bug tracking systems tend to be the endpoints, but it’s hard to manage and integrate the streams of automated outputs. And once you get one message of a particular event type, you usually don’t want to see that event again for a while. You can create a simple web console, or use AWS to stream to specified recipients, but that’s all manual setup. Things like Slack can help with individuals, teams, and third parties, but managing them is frankly a pain in the ass. As you scale up cloud and DevOps processes it’s easy to get overwhelmed. One of the tools I was looking at this week was (x)matters, which provides an integration and management hub for automated messages. It can understand messages from multiple sources and offers aggregation to avoid over-pinging users. I have not seen many products addressing this problem, so I wanted to pass it along.
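The “don’t show me that event again for a while” behavior is easy to sketch. This is a hypothetical helper, not a description of how (x)matters or any other product works: remember when each event type last fired, and drop repeats inside a configurable window.

```python
import time

class AlertSuppressor:
    """Suppress repeat alerts of the same event type within a time window."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.last_seen = {}  # event_type -> timestamp of last alert sent

    def should_send(self, event_type, now=None):
        """Return True if this alert should go out, False if suppressed."""
        now = now if now is not None else time.time()
        last = self.last_seen.get(event_type)
        if last is not None and now - last < self.window:
            return False  # already alerted recently; drop the duplicate
        self.last_seen[event_type] = now
        return True
```

Real aggregation hubs layer routing, escalation, and acknowledgment on top of this, but the core dedup logic is this small.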

Securosis Blog Posts this Week

Other Securosis News and Quotes

We are posting our whole RSA Conference Guide as posts over at the RSA Conference blog – here are the latest:

Training and Events

—Adrian Lane

Thursday, February 25, 2016

Presenting the RSA Conference Guide 2016

By Mike Rothman

Apparently the RSA Conference folks failed to regain their senses after letting us have free rein last year to post our RSA Conference Guide to the conference blog. We changed the structure this year, and here is how we explained it in the introductory post of the Guide.

In previous years the RSAC-G followed a consistent format. An overview of top-level trends and themes you would see at the show, a deep dive into our coverage areas, and a breakout of what’s on the show floor. We decided to change things up this year. The conference has grown enough that our old format doesn’t make as much sense. And we are in the middle of shaking up the company, so might as well update the RSAC-G while we’re at it.

This year we’ll still highlight main themes, which often set the tone for the rest of the security presentations and marketing you see throughout the year. But instead of deep dives into our coverage areas, we are focusing on projects and problems we see many clients tackling. When you go to a conference like RSA, it isn’t really to learn about technology for technology’s sake–you are there to learn how to solve (or at least manage) particular problems and projects.

This year our deep dives are structured around the security problems and projects we see topping priority lists at most organizations. Some are old favorites, and others are just hitting the radar for some of you. We hope the new structure is a bit more practical. We want you to be able to pop open the Guide, find something at the top of your list, jump into that section, and know where to focus your time.

Then we take all that raw content and format it into a snazzy PDF with a ton of meme goodness. So you can pop the guide onto your device and refer to it during the show.

Without further ado, we are excited to present the entire RSA Conference Guide 2016 (PDF).

Just so you can get a taste of the meme awesomeness of the published Guide, check out this image.

forensics diaper

That’s right. We may be changing the business a bit, but we aren’t going to get more politically correct, that’s for sure. And it’s true. Most n00b responders soil their pants a bit until they get comfortable during incidents.

And in case you want to check out the posts on the RSAC blog:


Key Themes

Yes, all the key themes have a Star Wars flavor. Just because we can.

Deep Dives

—Mike Rothman

Friday, February 19, 2016

Do We Have a Right to Security?

By Rich

Don’t be distracted by the technical details. The model of phone, the method of encryption, the detailed description of the specific attack technique, and even feasibility are all irrelevant.

Don’t be distracted by the legal wrangling. By the timing, the courts, or the laws in question. Nor by politicians, proposed legislation, Snowden, or speeches at think tanks or universities.

Don’t be distracted by who is involved. Apple, the FBI, dead terrorists, or common drug dealers.

Everything, all of it, boils down to a single question.

Do we have a right to security?

This isn’t the government vs. some technology companies. It’s the government vs. your right to fundamental security in the digital age.

Vendors like Apple have hit the point where some of the products they make, for us, are so secure that it is nearly impossible, if not impossible, to crack them. As a lifetime security professional, this is what my entire industry has been dreaming of since the dawn of computers. Secure commerce, secure communications, secure data storage. A foundation to finally start reducing all those data breaches, to stop China, Russia, and others from wheedling their way into our critical infrastructure. To make phones so secure they almost aren’t worth stealing, since even the parts aren’t worth much.

To build the secure foundation for the digital age that we so lack, and so desperately need. So an entire hospital isn’t held hostage because one person clicked on the wrong link.

The FBI, DOJ, and others are debating whether secure products and services should be legal. They hide this in language around warrants and lawful access, and scream about terrorists and child pornographers. What they don’t say, what they never admit, is that it is impossible to build in back doors for law enforcement without creating security vulnerabilities.

It simply can’t be done. If Apple, the government, or anyone else has master access to your device, to a service, or communications, that is a security flaw. It is impossible for them to guarantee that criminals or hostile governments won’t also gain such access. This isn’t paranoia, it’s a demonstrable fact. No company or government is completely secure.

And this completely ignores the fact that if the US government makes security illegal here, that destroys any concept of security throughout the rest of the world, especially in repressive regimes. Say goodbye to any possibility of new democracies. Never mind the consequences here at home. Access to our phones and our communications these days isn’t like reading our mail or listening to our phone calls – it’s more like listening to whispers to our partners at home. Like tracking how we express our love to our children, or fight the demons in our own minds.

The FBI wants this case to be about a single phone used by a single dead terrorist in San Bernardino to distract us from asking the real question. It will not stop at this one case – that isn’t how law works. They are also teaming with legislators to make encrypted, secure devices and services illegal. That isn’t conspiracy theory – it is the stated position of the Director of the FBI. Eventually they want systems to access any device or form of communications, at scale. As they already have with our phone system. Keep in mind that there is no way to limit this to consumer technologies, and it will have to apply to business systems as well, undermining corporate security.

So ignore all of that and ask yourself, do we have a right to security? To secure devices, communications, and services? Devices secure from criminals, foreign governments, and yes, even our own? And by extension, do we have a right to privacy? Because privacy without security is impossible.

Because that is what this fight is about, and there is no middle ground, mystery answer hiding in a research project, or compromise. I am a security expert. I have spent 25 years in public service and most definitely don’t consider myself a social activist. I am amused by conspiracy theories, but never take them seriously. But it would be unconscionable for me to remain silent when our fundamental rights are under assault by elements within our own government.


Building a Threat Intelligence Program: Gathering TI

By Mike Rothman

[Note: We received some feedback on the series that prompted us to clarify what we meant by scale and context towards the end of the post. See? We do listen to feedback on the posts. - Mike]

We started documenting how to build a Threat Intelligence program in our first post, so now it’s time to dig into the mechanics of thinking more strategically and systematically about how to benefit from the misfortune of others and make the best use of TI. It’s hard to use TI you don’t actually have yet, so the first step is to gather the TI you need.

Defining TI Requirements

A ton of external security data is available. The threat intelligence market has exploded over the past year. Not only are dozens of emerging companies offering various kinds of security data, but many existing security vendors are trying to introduce TI services as well, to capitalize on the hype. We also see a number of new companies with offerings to help collect, aggregate, and analyze TI. But we aren’t interested in hype – what new products and services can improve your security posture? With no lack of options, how can you choose the most effective TI for you?

As always, we suggest you start by defining your problem, and then identifying the offerings that would help you solve it most effectively. Start with your primary use case for threat intel. Basically, what is the catalyst to spend money? That’s the place to start. Our research indicates this catalyst is typically one of a handful of issues:

  1. Attack prevention/detection: This is the primary use case for most TI investments. Basically you can’t keep pace with adversaries, so you need external security data to tell you what to look for (and possibly block). This budget tends to be associated with advanced attackers, so if there is concern about them within the executive suite, this is likely the best place to start.
  2. Forensics: If you have a successful compromise you will want TI to help narrow the focus of your investigation. This process is outlined in our Threat Intelligence + Incident Response research.
  3. Hunting: Some organizations have teams tasked to find evidence of adversary activity within the environment, even if existing alerting/detection technologies are not finding anything. These skilled practitioners can use new malware samples from a TI service effectively, and can also use the latest information about adversaries to look for them before they act overtly (and trigger traditional detection).

Once you have identified primary and secondary use cases, you need to look at potential adversaries. Specific TI sources – both platform vendors and pure data providers – specialize in specific adversaries or target types. Take a similar approach with adversaries: understand who your primary attackers are likely to be, and find providers with expertise in tracking them.

The last part of defining TI requirements is to decide how you will use the data. Will it trigger automated blocking on active controls, as described in Applied Threat Intelligence? Will data be pumped into your SIEM or other security monitors for alerting as described in Threat Intelligence and Security Monitoring? Will TI only be used by advanced adversary hunters? You need to answer these questions to understand how to integrate TI into your monitors and controls.

When thinking about threat intelligence programmatically, think not just about how you can use TI today, but also what you want to do further down the line. Is automatic blocking based on TI realistic? If so, that raises different considerations than just monitoring. This aspirational thinking can demand flexibility that gives you better options moving forward. You don’t want to be tied into a specific TI data source, and maybe not even to a specific aggregation platform. A TI program is about how to leverage data in your security program, not how to use today’s data services. That’s why we suggest focusing on your requirements first, and then finding optimal solutions.


After you define what you need from TI, how will you pay for it? We know, that’s a pesky detail, but it is important, as you set up a TI program, to figure out which executive sponsors will support it and whether that funding source is sustainable.

When a breach happens, a ton of money gets spent on anything and everything to make it go away. There is no resistance to funding security projects, until there is – which tends to happen once the road rash heals a bit. So you need to line up support for using external data and ensure you have got a funding source that sees the value of investment now and in the future.

Depending on your organization, security may have its own budget to spend on key technologies; in that case you just build the cost into the security operations budget because TI is sold on a subscription basis. If you need to associate specific spending with specific projects, you’ll need to find the right budget sources. We suggest you stay as close to advanced threat prevention/detection as you can because that’s the easiest case to make for TI.

How much money do you need? Of course that depends on the size of your organization. At this point many TI data services are priced at a flat annual rate, which is great for a huge company which can leverage the data. If you have a smaller team you’ll need to work with the vendor on lower pricing or different pricing models, or look at lower cost alternatives. For TI platform expenditures, which we will discuss later in the series, you will probably be looking at a per-seat cost.

As you are building out your program it makes sense to talk to some TI providers to get preliminary quotes on what their services cost. Don’t get these folks engaged in a sales cycle before you are ready, but you need a feel for current pricing – that is something any potential executive sponsor needs to know.

While we are discussing money, this is a good point to start thinking about how to quantify the value of your TI investment. You defined your requirements, so within each use case how will you substantiate value? Is it about the number of attacks you block based on the data? Or perhaps an estimate of how adversary dwell time decreased once you were able to search for activity based on TI indicators. It’s never too early to start defining success criteria, deciding how to quantify success, and ensuring you have adequate metrics to substantiate achievements. This is a key topic, which we will dig into later in this series.

Selecting Data Sources

Next you start to gather data to help you identify and detect the activity of potential adversaries in your environment. You can get effective threat intelligence from a variety of different sources. We divide security monitoring feeds into five high-level categories:

  • Compromised Devices: This data source provides external notification that a device is acting suspiciously by communicating with known bad sites or participating in botnet-like activities. Services are emerging to mine large volumes of Internet traffic to identify such devices.
  • Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices. This enables you to define both technical and behavioral indicators to search for within your environment, as Malware Analysis Quant described in gory detail.
  • IP Reputation: The most common reputation data is based on IP addresses and provides a dynamic list of known bad and/or suspicious addresses. IP reputation has evolved since its introduction, now featuring scores to compare the relative maliciousness of different addresses, as well as factoring in additional context such as Tor nodes/anonymous proxies, geolocation, and device ID to further refine reputation.
  • Command and Control Networks: One specialized type of reputation often packaged as a separate feed is intelligence on command and control (C&C) networks. These feeds track global C&C traffic and pinpoint malware originators, botnet controllers, and other IP addresses and sites you should look for as you monitor your environment.
  • Phishing Messages: Most advanced attacks seem to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically use email as the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and tactics.
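To make the IP Reputation category above concrete, here is a minimal hypothetical sketch. The `ip,score` feed format and the score threshold are illustrative assumptions — real feeds vary widely in format and scoring: parse a reputation list, then match it against outbound connection logs.

```python
def load_reputation_feed(lines):
    """Parse a simple 'ip,score' reputation feed into a lookup table."""
    reputation = {}
    for line in lines:
        ip, score = line.strip().split(",")
        reputation[ip] = int(score)
    return reputation

def flag_connections(conn_log, reputation, threshold=80):
    """Return (source, destination) pairs whose destination scores
    at or above the maliciousness threshold."""
    return [(src, dst) for src, dst in conn_log
            if reputation.get(dst, 0) >= threshold]
```

In practice this matching runs inside your SIEM or TI platform rather than in glue code, but the lookup-and-threshold logic is the same.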

These security data types are available in a variety of packages. Here are the main categories:

  • Commercial integrated: Every security vendor seems to have a research group providing some type of intelligence. This data is usually very tightly integrated into their product or service. Sometimes there is a separate charge for the intelligence, and other times it is bundled into the product or service.
  • Commercial standalone: We see an emerging security market for standalone threat intel. These vendors typically offer an aggregation platform to collect external data and integrate into controls and monitoring systems. Some also gather industry-specific data because attacks tend to cluster around specific industries.
  • ISAC: Information Sharing and Analysis Centers are industry-specific organizations that aggregate data for an industry and share it among members. The best known ISAC is for the financial industry, although many other industry associations are spinning up their own ISACs as well.
  • OSINT: Finally open source intel encompasses a variety of publicly available sources for things like malware samples and IP reputation, which can be integrated directly into other systems.

The best way to figure out which data sources are useful is to actually use them. Yes, that means a proof of concept for the services. You can’t look at all the data sources, but pick a handful and start looking through the feeds. Perhaps integrate data into your monitors (SIEM and IPS) in alert-only mode, and see what you’d block or alert on, to get a feel for its value. Is the interface one you can use effectively? Does it take professional services to integrate the feed into your environment? Does a TI platform provide enough value to look at it every day, in addition to the 5-10 other consoles you need to deal with? These are all questions you should be able to answer before you write a check.

Company-specific Intelligence

Many early threat intelligence services focused on general security data, identifying malware indicators and tracking malicious sites. But how does that apply to your environment? That is where the TI business is going: providing more context for generic data, applying it to your environment (typically through a Threat Intel Platform), and having researchers focus specifically on your organization.

This company-specific information comes in a few flavors, including:

  • Brand protection: Misuse of a company’s brand can be very damaging. So proactively looking for unauthorized brand uses (like on a phishing site) or negative comments in social media fora can help shorten the window between negative information appearing and getting it taken down.
  • Attacker networks: Sometimes your internal detection capabilities fail, so you have compromised devices you don’t know about. These services mine command and control networks to look for your devices. Obviously it’s late if you find your device actively participating in these networks, but better find it before your payment processor or law enforcement tells you you have a problem.
  • Third party risk: Another type of interesting information is about business partners. This isn’t necessarily direct risk, but knowing that you connect to networks with security problems can tip you to implement additional controls on those connections, or more aggressively monitor data exchanges with that partner.

The more context you can derive from the TI, the better. For example, if you’re part of a highly targeted industry, information about attacks in your industry can be particularly useful. It’s also great to have a service provider proactively look for your data in external forums, and watch for indications that your devices are part of attacker networks. But this context will come at a cost; you will need to evaluate the additional expense of custom threat information and your own ability to act on it. This is a key consideration. Additional context is useful if your security program and staff can take advantage of it.

Managing Overlap

If you use multiple threat intelligence sources you will want to make sure you don’t get duplicate alerts. Key to determining overlap is understanding how each intelligence vendor gets its data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to make sure you don’t pay for redundant data sets.
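Categorizing overlap can be as simple as tracking which vendors report each indicator. Here is a minimal sketch, using hypothetical feed data keyed by each vendor's collection tactic; the vendor names, tactics, and indicators are all made up for illustration:

```python
# Hypothetical TI feeds, keyed by vendor, with their collection tactic
# and the indicators (IoCs) they delivered this cycle.
feeds = {
    "vendor_a": {"tactic": "honeypot", "indicators": {"1.2.3.4", "5.6.7.8"}},
    "vendor_b": {"tactic": "dns_mining", "indicators": {"5.6.7.8", "evil.example.com"}},
}

def overlap_report(feeds):
    """Return indicators reported by more than one vendor."""
    seen = {}
    for vendor, feed in feeds.items():
        for ioc in feed["indicators"]:
            seen.setdefault(ioc, []).append(vendor)
    return {ioc: vendors for ioc, vendors in seen.items() if len(vendors) > 1}

# 5.6.7.8 shows up in both feeds, so it appears in the overlap report.
print(overlap_report(feeds))
```

Running a report like this over a trial period gives you hard numbers on redundancy before you sign a second contract.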

This is a good use for a TI platform, aggregating intelligence and making sure you only see actionable alerts. As described above, you’ll want to test these services to see how they work for you. In a crowded market vendors try to differentiate by taking liberties with what their services and products actually do. Be careful not to fall for marketing hyperbole about proprietary algorithms, Big Data analysis, staff linguists penetrating hacker dens, or other stories straight out of a spy novel. Buyer beware, and make sure you put each provider through its paces before you commit.

Our last point on external data in your TI program concerns short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many threat intelligence companies are startups, and might not be around in 3-4 years. Once you identify a set of core intelligence feeds that work consistently and effectively you can look at longer deals, but we recommend not doing that until your TI process matures and your intelligence vendor establishes a track record.

Providing Context

One of the things you have to keep in mind is the sheer number of indicators that come into play, especially when using multiple threat intelligence services. So you need to build a step into your TI process that provides context for the threat intelligence feeds prior to operationalizing them. That means you want to tailor what gets fed into the TI platform (which we discuss in the next post), so that all searching and indexing is only done for intelligence sources that are relevant to your organization. To use a very simplistic example, if you only have Mac devices on your network, getting a bunch of TI indicators for Windows Vista attacks will just clutter up your system and impact performance.

It’s a typical funnel concept. There are millions of indicators available via TI services, but only hundreds may apply to your environment. You want to do some pre-processing of the TI as it comes into your environment to get rid of the data that isn’t relevant, making alerts more actionable and allowing you to prioritize efforts on the attacks that present real risk.
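The funnel can be sketched as a simple relevance filter. This is a hypothetical example — the indicator records and platform tags are invented, and real feeds use richer schemas (STIX, CSV, JSON APIs):

```python
# Hypothetical raw indicators, each tagged with the platform it targets.
indicators = [
    {"ioc": "bad.example.com", "platform": "windows"},
    {"ioc": "mal.example.net", "platform": "macos"},
    {"ioc": "9.9.9.9", "platform": "any"},
]

# Platforms actually present in this (Mac-only) environment.
environment_platforms = {"macos"}

# Keep only indicators that target our platforms, or are platform-agnostic.
relevant = [i for i in indicators
            if i["platform"] in environment_platforms or i["platform"] == "any"]
```

Of the three raw indicators, only the macOS and platform-agnostic ones survive the funnel; the Windows indicator is dropped before it ever reaches the TI platform.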

Now that you have selected threat intelligence feeds, you need to put them to work. Our next post will focus on what that means, and how TI can favorably impact your security program.

—Mike Rothman

Summary: Law Enforcement and the Cloud

By Rich

While the big story this week was the FBI vs. Apple, I’d like to highlight something a little more relevant to our focus on the cloud. You probably know about the DOJ vs. Microsoft. This is a critically important case where the US government wants to assert access to data held by the foreign branch of a US company, putting the company in conflict with local privacy laws. I highly recommend you take a look, and we will post updates here.

Beyond that, I’m sick and shivering with a fever, so enough small talk and time to get to the links. Posting is slow for us right now because we are all cramming for RSA, but you are probably used to that.

BTW – it’s hard to find good sources for cloud and DevOps news and tutorials. If you have links, please email them to info@securosis.com.

If you want to subscribe directly to the Friday Summary only list, just click here.


Top Posts for the Week

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, and if you have submissions please email them to info@securosis.com.

One issue that comes up a lot in client engagements is the best “unit of deployment” to push applications into production. That’s a term I might have made up, but I’m an analyst, so we do that. Conceptually there are three main ways to push application code into production:

  1. Update code on running infrastructure. Typically using configuration management tools (Chef/Puppet/Ansible/Salt), code-specific deployment tools like Capistrano, or a cloud-provider specific tool like AWS CodeDeploy. The key is that a running server is updated.
  2. Deploy custom images, and use them to replace running instances. This is the very definition of immutable infrastructure: you never log into or change a running server, you replace it. This relies heavily on auto scaling. It is a more secure option, but it can take time for the new instances to deploy, depending on complexity and boot time.
  3. Containers. Create a new container image and push that. It’s similar to custom images, but containers tend to launch much more quickly.

As you can guess, I prefer the second two options because I like locking down my instances and disabling any changes. That can really take security to the next level. Which brings us to our tool this week, Packer by HashiCorp. Packer is one of the best tools to automate creation of those images. It integrates with nearly everything, works on multiple cloud and container platforms, and even includes its own lightweight engine to run deployment scripts.
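To give a feel for it, here is a minimal Packer template in the JSON format Packer uses. This is a sketch assuming an AWS build; the AMI ID is a placeholder, and the region, instance type, and packages are arbitrary choices for illustration:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-0123456789abcdef0",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "app-image-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update", "sudo apt-get install -y nginx"]
  }]
}
```

Run `packer build template.json` and you get a fully baked AMI you can drop into an auto scaling group — no logging into running servers required.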

Packer is an essential tool in the DevOps / cloud quiver, and can really enhance security because it enables you to adopt immutable infrastructure.

Securosis Blog Posts this Week

Other Securosis News and Quotes

We are posting all our RSA Conference Guide posts over at the RSA Conference blog – here are the latest:

Training and Events