Wednesday, March 09, 2016

Incite 3/9/2016: Star Lord

By Mike Rothman

Everything is a game nowadays. Not like Words with Friends (why yes, since you ask – I do enjoy getting my ass kicked by the women in my life) or even Madden Mobile (which the Boy plays constantly) – I’m talking about gamification. In our security world, the idea is that rank and file employees will actually pay attention to security stuff they don’t give a rat’s ass about… if you make it all into a game. So get departments to compete for who can do best in the phishing simulation. Or give a bounty to the team with the fewest device compromises due to surfing pr0n. Actually, though, it might be more fun to post the link that compromised the machine in the first place. The employee with the nastiest NSFW link would win. And get fired… But I digress.

I find that I do play these games. But not on my own device. I’m kind of obsessed with Starbucks’ loyalty program. If you accumulate 12 stars you get a free drink. It’s a great deal for me. I get a large brewed coffee most days. I don’t buy expensive lattes, and I get the same star for every drink I buy. And if I have the kids with me, I’ll perform 3 or 4 different transactions, so I can get multiple stars. When I get my reward drink, I get a 7 shot Mocha. Yes, 7 shots. I’m a lot of fun in the two hours after I drink my reward.

And then Starbucks sends out promotions. For a while, if you ordered a drink through their mobile app, you’d get an extra star. So I did. I’d sit in their store, bust open my phone, order the drink, and then walk up to the counter and get it. Win! Extra star! Sometimes they’d offer 3 extra stars if you bought a latte drink, an iced coffee, and a breakfast sandwich within a 3-day period. Well, a guy’s gotta eat, right? And I was ordering the iced coffee anyway in the summer. Win! Three bonus stars. Sometimes they’d send a request for a survey and give me a bunch of stars for filling it out. Win! I might even be honest on the survey… but probably not. As long as I get my stars, I’m good.

Yes, I’m gaming the system for my stars. And I have two reward drinks waiting for me, so evidently it’s working. I’m going to be in Starbucks anyway, and drinking coffee anyway – I might as well optimize for free drinks.

star lord

Oh crap, what the hell have I become? A star whore? Ugh. Let’s flip that perspective. I’m the Star Lord. Yes! I like that. Who wants to be Groot?

Pretty much every loyalty program gets gamed. If you travel like I do, you have done the Dec 30 or 31 mileage run to make the next level in a program. You stay in a crappy Marriott 20 miles away from your meeting, instead of the awesome hotel right next to the client’s office. Just to get the extra night. You do it. Everyone does.

And now it’s a cat and mouse game. The airlines change their programs every 2-3 years, to force customers to find new ways to optimize mileage accumulation. Starbucks is changing their program to reward customers based on what they spend. The nerve of them. Now it will take twice as long to get my reward drinks. Until I figure out how to game this version of the program. And I will, because to me gaming their game is the game.

–Mike

Photo credit: “Star-Lord ord” from Dex


We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers


Incite 4 U

  1. An expensive lie: Many organizations don’t really take security seriously. It has never been proven that breaches cause lost business (correlation is not causation), nor have compliance penalties been sufficient to spur action. Is that changing? Maybe. You can see a small payment processor like Dwolla getting fined $100K for falsely claiming that “information is securely encrypted and stored”. Is $100K enough? Does it need to be $100MM? I don’t know, but at some point these regulations should have enough teeth that companies start to take them seriously. But you have to wonder: if a fintech start-up isn’t “securely encrypting and storing” customer data, what the hell are they doing? – MR

  2. Payment tokens for you and me: NFC World is reporting that Visa will retire alternate PANs issued to host card emulators for mobile payments, without giving an actual EOL date. We have been unable to verify this announcement, but it’s not surprising because that specification is at odds with EMVco’s PAR tokenization approach – which we discussed last year, and which is leveraged by ApplePay, SamsungPay, and others. This is pretty much the end of host card emulation and any lingering telco secure element payment schemes. What is surprising to many people is that, if you read Visa and Mastercard’s recent announcements, they are both positioning themselves as cloud-based security vendors – offering solutions for identity and payment in cars, wearables, and other mobile devices. Visa’s Tokenization Services, Mastercard’s tokens, and several payment wallets all leverage PAR tokens provided by various Tokenization-as-a-Service offerings. And issuing banks are buying this service as well! For security and compliance folks this is good news, because the faster this conversion happens, the faster the enterprise can get rid of credit cards. And once those are gone, so too are all the supporting security functions you need to manage. Security vendors, take note: you have new competitors in mobile device security services. – AL

  3. Well, at least the pace of tech innovation is slowing… I can do nothing but laugh at the state of security compliance. The initiative that actually provided enough detail to help lost organizations move forward, the PCI-DSS, is evidently now very mature. So mature that they don’t need another major update. Only minor updates, with long windows to implement them, because… well, just because. These retailers are big and they move slowly. But attackers move and innovate fast. So keeping our current low bar forever seems idiotic. Attackers are getting better, so we need to keep raising the bar, and I don’t know how that will happen now. I guess it will take another wave of retailer hacks to shake things up again. Sad. – MR

  4. No need to encrypt: Future Tense captures the essence of Amazon’s removal of encryption from Fire devices: Inexpensive parts, like weak processors, would be significantly burdened when local encryption was on, and everything would slow down. This is not about bowing to federal pressure – it is cost-cutting on a money-losing device. And let’s be honest – these are not corporate devices, and no one reading this allows Amazon Fires onto their business networks. Not every mobile device deserves security hardening. Most people have a handful of devices with throw-away data, and convenience devices need very little security. The handful of people I know with Kindle or Fire devices consider them mobile infotainment systems – the only data on the device is a Gmail account, which has already been hacked, and the content they bought from Amazon. Let’s pick our battles. – AL

  5. I don’t get it, but QUANTUM! I wish I knew more about things like quantum computing, and that I had time to read papers and the like to get informed. Evidently progress is being made on new quantum computing techniques that will make current encryption obsolete. Now they have a 5-atom quantum computer. I have no idea what that even means, but it sounds cool. Is it going to happen tomorrow? Nope. I won’t be able to get a quantum computer from Amazon for a while, but the promise of these new technologies to upend the way we have always done things is a useful reminder. Don’t get attached to anything. Certainly not technology, because it’s not going to be around for long. Whichever technology we’re talking about. – MR

—Mike Rothman

Tuesday, March 08, 2016

SIEM Kung Fu: Advanced Use Cases

By Mike Rothman

Given the advance of SIEM technology, the use cases described in the first post of our SIEM Kung Fu series are very achievable. But with the advent of more packaged attack kits leveraged by better organized (and funded) adversaries, and the insider threat, you need to go well beyond what comes out of the [SIEM] box, and what can be deployed during a one-week PoC, to detect real advanced attacks.

So as we dig into more advanced use cases we will tackle how to optimize your SIEM to both a) detect advanced attacks and b) track user activity, to identify possible malicious insider behavior. There is significant overlap between these two use cases. Ultimately, in almost every successful attack, the adversary gains presence on the network and therefore is technically an insider. But let’s take that distinction out of play here, because in terms of detection, whether the actor is external or internal to your organization doesn’t matter. They want to get your stuff.

So we’ll break up the advanced use cases by target. It might be the application stack directly (from the outside), to establish a direct path to the data center, without requiring any lateral movement to achieve the mission. The other path is to compromise devices (typically through an employee), escalate privileges, and move laterally to achieve the mission. Both can be detected by a properly utilized SIEM.

Attacking Employees

The most prominent attack vector we see in practice today is the advanced attack, which is also known as an APT or a kill chain, among other terms. But regardless of what you call it, this is a process which involves an employee device being compromised, and then used as a launching point to systematically move deeper within an organization – to find, access, and exfiltrate critical information. Detecting this kind of attack requires looking for anomalous behavior at a variety of levels within the environment. Fortunately employees (and their devices) should be reasonably predictable in what they do, which resources they access, and their daily traffic patterns.

In a typical device-centric attack an adversary follows a predictable lifecycle: perform reconnaissance, send an exploit to the device, and escalate privileges, then use that device as a base for more reconnaissance, more exploits, and to burrow further into the environment. We have spent a lot of time on how threat detection needs to evolve and how to catch these attacks using network-based telemetry.

Leveraging your SIEM to find these attacks is similar; it involves understanding the trail the adversary leaves, the resulting data you can analyze, and patterns to look for. An attacker’s trail is based specifically on change. During any attack the adversary changes something on the device being attacked. Whether it’s the device configuration, creating new user accounts, increasing account privileges, or just unusual traffic flows, the SIEM has access to all this data to detect attacks.

Initial usage of SIEM technology was entirely dependent on infrastructure logs, such as those from network and security devices. That made sense because SIEM was initially deployed to stem the flow of alerts streaming in from firewalls, IDS, and other network security devices. But that offered a very limited view of activity and eventually became easy for adversaries to evade. So over the past decade many additional data sources have been integrated into the SIEM to provide a much broader view of your environment.

  • Endpoint Telemetry: Endpoint detection has become very shiny in security circles. There is a ton of interest in doing forensics on endpoints, and if you are trying to figure out how the proverbial horse left the barn, endpoint telemetry is great. Another view is that devices are targeted in virtually every attack, so highly detailed data about exactly what’s happening on an endpoint is critical – not just to incident response, but also to detection. And this data (or the associated metadata) can be instrumental when watching for the kind of change that may indicate an active threat actor.
  • Identity Information: Inevitably, once an adversary has presence in your environment, they will go after your identity infrastructure, because that is usually the path of least resistance for access to valuable data. So you need access to identity stores; watch for new account creation and new privilege entitlements, which are both likely to identify attacks in process.
  • Network Flows: The next step in the attack is to move laterally within the environment, and move data around. This leaves a trail on the network that can be detected by tracking network flows. Of course full packet capture provides the same information and more granularity, with a greater demand for data collection and analytics.
  • Threat Intelligence: Finally, you can leverage external threat data and IP reputation to pinpoint egress network traffic that may headed places you know are bad. Exfiltration now typically includes proprietary encryption, so you aren’t likely to catch the act through content analysis; instead you need to track where data is headed. You can also use threat intelligence indicators to watch for specific new attacks in your environment, as we have discussed ad nauseum in our threat intelligence and security monitoring research.

The key to using this data to find advanced attacks is to establish a profile of what’s normal within your environment, and then look for anomalous activity. We know anomaly detection has been under discussion in security circles for decades, but it is still one of the top ways to figure out when attackers are doing their thing in your environment. Of course keeping your baseline current and minimizing false positives are keys to making a SIEM useful for this use case. That requires ongoing effort and tuning. Of course no security monitoring tool just works – so go in with your eyes open regarding the amount of work required.
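
To make that concrete, here is a minimal sketch (in Python, with invented device names, fields, and thresholds) of the baseline-then-flag-outliers logic, using per-device outbound traffic volume as the profiled metric. A real SIEM would express this in its own correlation or analytics language; this only illustrates the math.

```python
from statistics import mean, stdev

# Hypothetical history: daily outbound bytes per device, aggregated from flow data
history = {
    "laptop-042": [1.2e8, 0.9e8, 1.1e8, 1.0e8, 1.3e8],
    "laptop-117": [2.0e8, 2.2e8, 1.9e8, 2.1e8, 2.0e8],
}
today = {"laptop-042": 9.5e8, "laptop-117": 2.1e8}

def anomalous(baseline, observed, sigmas=3.0):
    """Flag an observation more than `sigmas` standard deviations above the device's own baseline."""
    mu, sd = mean(baseline), stdev(baseline)
    return observed > mu + sigmas * max(sd, 1.0)

for device, observed in today.items():
    if anomalous(history[device], observed):
        print(f"ALERT: {device} outbound volume {observed:.2e} far exceeds its baseline")
```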

Multiple data points

Speaking of minimizing false positives, how can you do that? More SIEM projects fail due to alert exhaustion than for any other reason, so don’t rely on any single data point to produce a verdict that an alert is legitimate and demands investigation. Reduction of false positives is even more critical because of the skills gap which continues to flummox security professionals. Using a SIEM you can link together seemingly disconnected data sources to validate alerts and make sure the alarm is sounded only when it should be.

But what does that look like in practice? You need to make sure a variety of conditions are matched before an alert fires. And increase the urgency of an alert according to the number of conditions triggered. This simplified example illustrates what you can do with the SIEM you likely already have.

  1. Look for device changes: If a device suddenly registers a bunch of new system files installed, and you aren’t in the middle of a patch cycle, there may be something going on. Is that enough to pull the alarm? Probably not yet.
  2. Track identity: Next you’ll see a bunch of new accounts appear on the device, and then see the domain controller targeted for compromise. Once the domain controller falls, it’s pretty much game over, because the adversary can then set up new accounts and change entitlements; so tracking the identity infrastructure is essential.
  3. Look for internal reconnaissance: Finally you’ll see the compromised device scanning everything else on the network, both so the attacker can gain his/her bearings, and also for additional devices to compromise. Traffic on internal network segments should be pretty predictable, so variations from typical traffic flows usually indicate something funky.

But do any of these data points alone indicate an attack? Probably not. But if you see multiple indicators at the same time, odds are that’s not great for you.
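
A rough sketch of that multi-condition idea, assuming each underlying rule emits a simple (device, condition) event. The condition names, weights, and thresholds below are invented for illustration; a production SIEM would implement this as a correlation rule rather than a script.

```python
from collections import defaultdict

# Hypothetical condition weights for the three checks described above
CONDITIONS = {
    "new_system_files": 1,    # device change outside a patch cycle
    "new_local_accounts": 2,  # identity infrastructure being tampered with
    "internal_scanning": 2,   # reconnaissance from the compromised device
}

# Hypothetical detections emitted by individual rules within one time window
events = [
    ("laptop-042", "new_system_files"),
    ("laptop-042", "new_local_accounts"),
    ("laptop-042", "internal_scanning"),
    ("laptop-117", "new_system_files"),
]

scores = defaultdict(int)
seen = defaultdict(set)
for device, condition in events:
    if condition not in seen[device]:  # count each condition once per window
        seen[device].add(condition)
        scores[device] += CONDITIONS[condition]

for device, score in scores.items():
    if score >= 4:
        print(f"HIGH: multiple attack indicators on {device} (score {score})")
    elif score >= 2:
        print(f"WATCH: possible attack activity on {device} (score {score})")
```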

Modern SIEMs come with a variety of rules or policies to look for common attack patterns. They are helpful for getting started, and increasing use of data analytics will help you refine your thresholds for alerts, increasing accuracy and reducing false alarms.

Application Stack Attacks

We alluded to this above, but to us an “application stack attack” is not just a cute rhyme, but how a sophisticated adversary takes advantage of weaknesses within an application or another part of an application stack, to gain a foothold in your environment to access data of interest. There are a number of application stack data sources you can pump into a SIEM to look for attacks on the application. These include:

  • Machine Data: The first step in monitoring applications is to instrument them to generate “machine data”. This could be information on different transaction types, login failures, search activity, or almost anything an attacker might tamper with. Determining how and where to instrument an application involves threat modeling the application to make sure the necessary hooks are built into the app. The good news is that as more and more applications move to SaaS environments, a lot of this instrumentation is there from the start. But with SaaS you get what you get, and don’t have much influence on which information is available.
  • APIs: Applications are increasingly composed of a variety of components, residing in a variety of different places (both inside and outside your environment), so watching API traffic has become key. We have researched API security, so refer back to that paper for specifics about authentication and authorizing specific API calls. You will want to track API usage and activity to profile normal activity for the application, and then start looking for anomalies (see the sketch after this list).
  • Database Tier: This last part of the application stack is where the valuable stuff lives. Once an attacker has presence in the database tier, it is usually trivial to access other database tables and reach the stuff they are looking for. So ingest any database activity logs or monitors available, and watch for triggers.
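
Here is one small illustration of the API bullet above: a sketch (with hypothetical clients, endpoints, and log format) that learns which endpoints each client normally calls and flags first-time calls to sensitive ones. In practice this logic would live in your SIEM, API gateway, or monitoring tier rather than a script.

```python
# Hypothetical API access log entries: (client_id, endpoint)
baseline_log = [
    ("svc-orders", "/api/orders"),
    ("svc-orders", "/api/inventory"),
    ("svc-reports", "/api/reports"),
]
new_log = [
    ("svc-orders", "/api/orders"),
    ("svc-orders", "/api/admin/export"),  # never seen from this client before
]

# Hypothetical list of endpoints worth alerting on when called unexpectedly
SENSITIVE = {"/api/admin/export", "/api/users/dump"}

# Build a per-client profile of normal endpoint usage
profile = {}
for client, endpoint in baseline_log:
    profile.setdefault(client, set()).add(endpoint)

# Flag first-time calls to sensitive endpoints
for client, endpoint in new_log:
    if endpoint not in profile.get(client, set()) and endpoint in SENSITIVE:
        print(f"ALERT: {client} called sensitive endpoint {endpoint} for the first time")
```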

Each application is unique (like a snowflake!) so you won’t be able to get prebuilt rules and policies from your SIEM provider. You need to look at each application to monitor and profile it, building rules and tuning thresholds for the specific application. This is why most organizations don’t monitor their applications to any significant degree… And also why they miss attacks which don’t involve traditional malware or obvious attack patterns.

Developer Resistance

Collecting sufficient machine data from applications isn’t something most developers are excited about. Applications have historically not been built with instrumentation in mind, and retrofitting instrumentation into an app is more delicate plumbing than designing cool new features. We all know how much developers love to update plumbing. You may need to call for senior management air cover, in the form of a mandate, to get the instrumentation you need into the application. You can only request air support a limited number of times, so make sure the application is sufficiently important first.

More good news: as new applications are deployed using modern development techniques (including DevOps and Continuous Deployment), security is increasingly being built into the stack at a fundamental level. Once the right instrumentation is in the stack, you can stop fighting to retrofit it.

Purpose-built Tools

You are likely to be approached by a variety of new security companies, offering security analytics products better at finding advanced attacks. Do you need yet another tool? Shouldn’t you be able to do this within your SIEM?

The answer is: it depends. Analytics platforms built around a specific use case, like APT or the insider threat, are optimized for a very specific problem. The vendor should know what data is required, where to get it, and how to tune their analytics engine to solve the specific problem. A more general-purpose SIEM cannot be as tuned to solve that specific problem. Your vendor can certainly provide some guidance, and maybe even some pre-packaged correlation rules, but more work will still be required to configure the use case and tune the tool.

On the other hand a security analytics platform is not designed around a SIEM’s other uses. It cannot help you prepare for an audit by generating reports pertinent to the assessment. It won’t offer much in the way of forensics and investigation. These analytics tools just weren’t built to do that, so you’ll still need your SIEM – which means you’ll have two (or more) products for security monitoring; with all the associated purchase, maintenance, and operational costs.

Now that you understand a bit more about how to use a SIEM to address advanced use cases, you need to be able to use your newfound SIEM Kung Fu consistently and systematically. So it’s time to revisit your process in order to factor in the requirements for these advanced use cases. We’ll discuss that in our next post.

—Mike Rothman

Monday, February 29, 2016

Incite 2/29/2016: Leap Day

By Mike Rothman

Today is leap day, the last day of February in a leap year. That means the month of February has 29 days. It happens once every 4 years. I have one friend (who I know of) with a birthday on Leap Day. That must have been cool. You feel very special every four years. And you just jump on the Feb 28 bandwagon to celebrate your birthday in non-leap years. Win/win.

The idea of a four-year cycle made me curious. What was I doing during leap day in 2012? Turns out I was doing the same thing I’ll be doing today – running between meetings at the RSA Conference. This year, leap day is on Monday, and that’s the day I usually spend at the America’s Growth Capital Conference, networking with CEOs and investors. It’s a great way to take the temperature of the money side of the security industry. And I love to moderate the panels, facilitating debate between leaders of the security industry. Maybe I’ll even interject an opinion or two during the event. That’s been known to happen.

leap day

Then I started looking back at my other calendar entries for 2012. The boy was playing baseball. Wow, that seems like a long time ago; it feels like he’s been playing lacrosse forever. The girls were dancing, and they had weekend practices getting ready for their June Disney trip. XX1 was getting ready for her middle school orientation. Now she’s in high school. The 4 years represent less than 10% of my life. But a full third of the twins’ existence. That’s a strange thought.

And have I made progress professionally? I think so. Our business has grown. We’ll have probably three times the number of people at the Disaster Recovery Breakfast, if that’s any measure of success. The cloud security work we do barely provided beer money in 2012, and now it’s the future of Securosis. I’ve deepened relationships with some clients and stopped working with others. Many of my friends have moved to different gigs. But overall I’m happy with my professional progress.

Personally I’m a fundamentally different person. I have described a lot of my transformation here in the Incite, or at least its results. I view the world differently now. I was figuring out which mindfulness practices worked for me back in 2012. That was also the beginning of a multi-year process to evaluate who I was and what changes I needed for the next phase of my life. Over the past four years, I have done a lot of work personally and made those changes. I couldn’t be happier with the trajectory of my life right now.

So this week I’m going to celebrate with many close friends. Security is what I do, and this week is one of the times we assemble en masse. What’s not to love? Even cooler is that I have no idea what I’ll be writing about in 2020.

My future is unwritten, and that’s very exciting. I do know that by the next time a leap year comes along, XX1 will be midway through college. The twins will be driving (oy, my insurance bill!). And in all likelihood, I’ll be at the RSA Conference hanging out with my friends at the W, waiting patiently for a drink. Most things change, but some stuff stays the same. And there is comfort in that.

–Mike

Photo credit: “60:366” from chrisjtse


We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes you’ll see at this year’s conference (which is really a proxy for the industry), along with deep dives into cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the post or download the guide directly (PDF).

It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers


Incite 4 U

  1. Phisherman’s dream: Brian Krebs has written a lot about small and mid-sized companies being targets for scammers over the last couple of years, with both significant financial losses directly from fraud, and indirect losses from the ensuing court battles about who ends up paying the bill. Through friends and family, we have been hearing a lot more about this in relation to real estate transactions, captured in a recent article from the Arizona Association of Realtors, Hackers Perpetuate Wire Transfer Fraud Scams. Hacking the buyers, mortgage brokers, and title companies, scammers are able to both propel a transaction forward through fake authorizations, and direct funds to the wrong accounts. And once one party is compromised it’s fairly easy to get the other parties too, meaning much of the process can be orchestrated remotely. What’s particularly insidious is that these attacks naturally lead all parties into making major security misjudgments. You trust the emails because they look like they are coming from people you are waiting to hear from, with content you want to see. The result is large sums of money willingly transferred to the wrong accounts; with buyers, sellers, agents, banks, and mortgage brokers all fighting to clean up the mess. – AL

  2. EMET and the reality of software: This recent story about a defect in Microsoft’s EMET, which allows attackers to basically turn it off, presents an opportunity to highlight a number of things. First, all software has bugs. Period. This bug, found by the folks at FireEye, turns EMET against itself. It’s code. It’s complicated. And that means there will be issues. No software is secure. Even the stuff that’s supposed to secure us. But EMET is awesome and free. So use it. The other big takeaway from this is the importance of timely patching. Microsoft fixed this issue last Patch Tuesday, Feb 2. It’s critical to keep devices up to date. I know it’s hard and you have a lot of devices. Do it anyway. It’s one of the best ways to reduce attack surface. – MR

  3. My list: On the Veracode blog Jeff Cratty explains to security pros the 5 things I need from you. Discussions like this are really helpful for security people trying to work with developers. Understanding the challenges and priorities each side faces every day makes working together a hell of a lot easier. Empathy FTW. I like Jeff’s list, but I could narrow down mine to two things. First, get me the “air cover” I need to prioritize security over features. Without empowerment by senior management, security issues will never get worked on. DevOps and continuous integration have been great in this regard as teams – for the first time ever – prioritize infrastructure over features, but someone needs to help get security onto the queue. Second, tell me the threats I should really worry about, and get me a list of suitable responses so I can choose what is best for our application stack and deployment model. There are usually many ways to address a specific risk, and I want options, not mandates. – AL

  4. Cutting through the fog of endpoint security marketing: If you are considering updating your endpoint protection (as you should be), Lenny Zeltser offers a great post on questions to ask an endpoint security startup. It’s basically a primer to make any new generation endpoint security player educate you on why and how they are different. They’ll say, “we use math,” like that’s novel. Or “we leverage the cloud” – ho hum. Maybe they’ll drop “deep forensics” nonsense on you. Not that any of those things are false. But it’s really about understanding how they are different. Not just from traditional endpoint protection, but also from the dozens of other new endpoint security players. Great job, Lenny. It’s hard to separate marketing fiction from fact in early markets. Ask these questions to start figuring it out. And make sure your BS detector is working – you’ll need it. – MR

—Mike Rothman

Friday, February 26, 2016

Summary: The Cloud Horizon

By Adrian Lane


Two weeks ago Rich sketched out some changes to our Friday Summary, including how the content will change. But we haven’t spelled out our reasons. Our motivation is simple. In a decade, over half your systems will be in some cloud somewhere. The Summary will still be about security, but we’ll focus on security for cloud services, cloud applications, and how DevOps techniques intertwine with each. Rather than rehash on-premise security issues we have covered (ad nauseam) for 9 years, we believe it’s far more helpful to IT and security folks to discuss what’s on the near horizon that they are not already familiar with. We can say with certainty that most of what you’ve learned about “the right way to do things” in security will be challenged by cloud deployments, so we are tuning the Summary to increase understanding of the changes in store, and what to do about them. Trends, features, tools, and even some code. We know it’s not for everybody, but if you’re seriously interested, you can subscribe directly to the Friday Summary.

The RSA conference is next week, so don’t forget to get a copy of Securosis’s Guide to the RSA Conference. But be warned; Mike’s been at the meme generator again, and some things you just can’t unsee. Oh, and if you’re interested in attending the Eighth Annual Securosis Disaster Recovery Breakfast at RSA, please RSVP. That way we know how much bacon to order. Or Bloody Marys to make. Something like that.

Top Posts for the Week

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, so if you have submissions please email them to info@securosis.com.

Alerts literally drive DevOps. One may fire off a cloud-based service, or it might indicate a failure a human needs to look at. When putting together a continuous integration pipeline, or processing cloud services, how do you communicate status? SMS and email are the common output formats, and developer tools like Slack or bug tracking systems tend to be the endpoints, but it’s hard to manage and integrate the streams of automated outputs. And once you get one message of a particular event type, you usually don’t want to see that event again for a while. You can create a simple web console, or use AWS to stream to specified recipients, but that’s all manual setup. Things like Slack can help with individuals, team, and third parties, but managing them is frankly a pain in the ass. As you scale up cloud and DevOps processes it’s easy to get overwhelmed. One of the tools I was looking at this week was (x)matters, which provides an integration and management hub for automated messages. It can understand messages from multiple sources and offers aggregation to avoid over-pinging users. I have not seen many products addressing this problem, so I wanted to pass it along.
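
To illustrate the “don’t show me that event type again for a while” problem described above, here is a minimal per-event-type suppression sketch with a cooldown window. The event name and 30-minute window are arbitrary; a hub like (x)matters, or whatever glue sits between your pipeline and Slack, would own this logic in practice.

```python
import time

COOLDOWN_SECONDS = 30 * 60  # suppress repeats of the same event type for 30 minutes
_last_sent = {}

def should_notify(event_type, now=None):
    """Return True only if this event type hasn't fired within the cooldown window."""
    now = now if now is not None else time.time()
    last = _last_sent.get(event_type)
    if last is not None and now - last < COOLDOWN_SECONDS:
        return False
    _last_sent[event_type] = now
    return True

# Example: a build failure fires three times in quick succession; only the first notifies.
for t in (0, 60, 120):
    if should_notify("ci-build-failed", now=t):
        print(f"notify at t={t}s: ci-build-failed")
```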

Securosis Blog Posts this Week

Other Securosis News and Quotes

We are posting our whole RSA Conference Guide as posts over at the RSA Conference blog – here are the latest:

Training and Events

—Adrian Lane

Thursday, February 25, 2016

Presenting the RSA Conference Guide 2016

By Mike Rothman

Apparently the RSA Conference folks failed to regain their senses after letting us have free rein last year to post our RSA Conference Guide to the conference blog. We changed the structure this year, and here is how we explained it in the introductory post of the Guide.

In previous years the RSAC-G followed a consistent format. An overview of top-level trends and themes you would see at the show, a deep dive into our coverage areas, and a breakout of what’s on the show floor. We decided to change things up this year. The conference has grown enough that our old format doesn’t make as much sense. And we are in the middle of shaking up the company, so might as well update the RSAC-G while we’re at it.

This year we’ll still highlight main themes, which often set the tone for the rest of the security presentations and marketing you see throughout the year. But instead of deep dives into our coverage areas, we are focusing on projects and problems we see many clients tackling. When you go to a conference like RSA, it isn’t really to learn about technology for technology’s sake – you are there to learn how to solve (or at least manage) particular problems and projects.

This year our deep dives are structured around the security problems and projects we see topping priority lists at most organizations. Some are old favorites, and others are just hitting the radar for some of you. We hope the new structure is a bit more practical. We want you able to pop open the Guide, find something at the top of your list, jump into that section, and know where to focus your time.

Then we take all that raw content and format it into a snazzy PDF with a ton of meme goodness. So you can pop the guide onto your device and refer to it during the show.

Without further ado, we are excited to present the entire RSA Conference Guide 2016 (PDF).

Just so you can get a taste of the meme awesomeness of the published Guide, check out this image.

forensics diaper

That’s right. We may be changing the business a bit, but we aren’t going to get more politically correct, that’s for sure. And it’s true. Most n00b responders soil their pants a bit until they get comfortable during incidents.

And in case you want to check out the posts on the RSAC blog:

Introduction

Key Themes

Yes, all the key themes have a Star Wars flavor. Just because we can.

Deep Dives

—Mike Rothman

Friday, February 19, 2016

Do We Have a Right to Security?

By Rich

Don’t be distracted by the technical details. The model of phone, the method of encryption, the detailed description of the specific attack technique, and even feasibility are all irrelevant.

Don’t be distracted by the legal wrangling. By the timing, the courts, or the laws in question. Nor by politicians, proposed legislation, Snowden, or speeches at think tanks or universities.

Don’t be distracted by who is involved. Apple, the FBI, dead terrorists, or common drug dealers.

Everything, all of it, boils down to a single question.

Do we have a right to security?

This isn’t the government vs. some technology companies. It’s the government vs. your right to fundamental security in the digital age.

Vendors like Apple have hit the point where some of the products they make, for us, are so secure that it is nearly impossible, if not impossible, to crack them. As a lifetime security professional, this is what my entire industry has been dreaming of since the dawn of computers. Secure commerce, secure communications, secure data storage. A foundation to finally start reducing all those data breaches, to stop China, Russia, and others from wheedling their way into our critical infrastructure. To make phones so secure they almost aren’t worth stealing, since even the parts aren’t worth much.

To build the secure foundation for the digital age that we so lack, and so desperately need. So an entire hospital isn’t held hostage because one person clicked on the wrong link.

The FBI, DOJ, and others are debating whether secure products and services should be legal. They hide this in language around warrants and lawful access, and scream about terrorists and child pornographers. What they don’t say, what they never admit, is that it is impossible to build in back doors for law enforcement without creating security vulnerabilities.

It simply can’t be done. If Apple, the government, or anyone else has master access to your device, to a service, or communications, that is a security flaw. It is impossible for them to guarantee that criminals or hostile governments won’t also gain such access. This isn’t paranoia, it’s a demonstrable fact. No company or government is completely secure.

And this completely ignores the fact that if the US government makes security illegal here, that destroys any concept of security throughout the rest of the world, especially in repressive regimes. Say goodbye to any possibility of new democracies. Never mind the consequences here at home. Access to our phones and our communications these days isn’t like reading our mail or listening to our phone calls – it’s more like listening to whispers to our partners at home. Like tracking how we express our love to our children, or fight the demons in our own minds.

The FBI wants this case to be about a single phone used by a single dead terrorist in San Bernardino to distract us from asking the real question. It will not stop at this one case – that isn’t how law works. They are also teaming with legislators to make encrypted, secure devices and services illegal. That isn’t conspiracy theory – it is the stated position of the Director of the FBI. Eventually they want systems to access any device or form of communications, at scale. As they already have with our phone system. Keep in mind that there is no way to limit this to consumer technologies, and it will have to apply to business systems as well, undermining corporate security.

So ignore all of that and ask yourself, do we have a right to security? To secure devices, communications, and services? Devices secure from criminals, foreign governments, and yes, even our own? And by extension, do we have a right to privacy? Because privacy without security is impossible.

Because that is what this fight is about, and there is no middle ground, mystery answer hiding in a research project, or compromise. I am a security expert. I have spent 25 years in public service and most definitely don’t consider myself a social activist. I am amused by conspiracy theories, but never take them seriously. But it would be unconscionable for me to remain silent when our fundamental rights are under assault by elements within our own government.

—Rich

Building a Threat Intelligence Program: Gathering TI

By Mike Rothman

[Note: We received some feedback on the series that prompted us to clarify what we meant by scale and context towards the end of the post. See? We do listen to feedback on the posts. - Mike]

We started documenting how to build a Threat Intelligence program in our first post, so now it’s time to dig into the mechanics of thinking more strategically and systematically about how to benefit from the misfortune of others and make the best use of TI. It’s hard to use TI you don’t actually have yet, so the first step is to gather the TI you need.

Defining TI Requirements

A ton of external security data is available. The threat intelligence market has exploded over the past year. Not only are dozens of emerging companies offering various kinds of security data, but many existing security vendors are trying to introduce TI services as well, to capitalize on the hype. We also see a number of new companies with offerings to help collect, aggregate, and analyze TI. But we aren’t interested in hype – what new products and services can improve your security posture? With no lack of options, how can you choose the most effective TI for you?

As always, we suggest you start by defining your problem, and then identifying the offerings that would help you solve it most effectively. Start with your primary use case for threat intel. Basically, what is the catalyst to spend money? That’s the place to start. Our research indicates this catalyst is typically one of a handful of issues:

  1. Attack prevention/detection: This is the primary use case for most TI investments. Basically you can’t keep pace with adversaries, so you need external security data to tell you what to look for (and possibly block). This budget tends to be associated with advanced attackers, so if there is concern about them within the executive suite, this is likely the best place to start.
  2. Forensics: If you have a successful compromise you will want TI to help narrow the focus of your investigation. This process is outlined in our Threat Intelligence + Incident Response research.
  3. Hunting: Some organizations have teams tasked to find evidence of adversary activity within the environment, even if existing alerting/detection technologies are not finding anything. These skilled practitioners can use new malware samples from a TI service effectively, and can also use the latest information about adversaries to look for them before they act overtly (and trigger traditional detection).

Once you have identified primary and secondary use cases, you need to look at potential adversaries. Specific TI sources – both platform vendors and pure data providers – specialize in specific adversaries or target types. Take a similar approach with adversaries: understand who your primary attackers are likely to be, and find providers with expertise in tracking them.

The last part of defining TI requirements is to decide how you will use the data. Will it trigger automated blocking on active controls, as described in Applied Threat Intelligence? Will data be pumped into your SIEM or other security monitors for alerting as described in Threat Intelligence and Security Monitoring? Will TI only be used by advanced adversary hunters? You need to answer these questions to understand how to integrate TI into your monitors and controls.

When thinking about threat intelligence programmatically, think not just about how you can use TI today, but also what you want to do further down the line. Is automatic blocking based on TI realistic? If so, that raises different considerations than just monitoring. This aspirational thinking can demand flexibility that gives you better options moving forward. You don’t want to be tied into a specific TI data source, and maybe not even to a specific aggregation platform. A TI program is about how to leverage data in your security program, not how to use today’s data services. That’s why we suggest focusing on your requirements first, and then finding optimal solutions.

Budgeting

After you define what you need from TI, how will you pay for it? We know, that’s a pesky detail, but it is important, as you set up a TI program, to figure out which executive sponsors will support it and whether that funding source is sustainable.

When a breach happens, a ton of money gets spent on anything and everything to make it go away. There is no resistance to funding security projects, until there is – which tends to happen once the road rash heals a bit. So you need to line up support for using external data and ensure you have got a funding source that sees the value of investment now and in the future.

Depending on your organization, security may have its own budget to spend on key technologies; in that case you just build the cost into the security operations budget, because TI tends to be sold on a subscription basis. If you need to associate specific spending with specific projects, you’ll need to find the right budget sources. We suggest you stay as close to advanced threat prevention/detection as you can, because that’s the easiest case to make for TI.

How much money do you need? Of course that depends on the size of your organization. At this point many TI data services are priced at a flat annual rate, which is great for a huge company which can leverage the data. If you have a smaller team you’ll need to work with the vendor on lower pricing or different pricing models, or look at lower cost alternatives. For TI platform expenditures, which we will discuss later in the series, you will probably be looking at a per-seat cost.

As you are building out your program it makes sense to talk to some TI providers to get preliminary quotes on what their services cost. Don’t get these folks engaged in a sales cycle before you are ready, but you need a feel for current pricing – that is something any potential executive sponsor needs to know.

While we are discussing money, this is a good point to start thinking about how to quantify the value of your TI investment. You defined your requirements, so within each use case how will you substantiate value? Is it about the number of attacks you block based on the data? Or perhaps an estimate of how adversary dwell time decreased once you were able to search for activity based on TI indicators. It’s never too early to start defining success criteria, deciding how to quantify success, and ensuring you have adequate metrics to substantiate achievements. This is a key topic, which we will dig into later in this series.

Selecting Data Sources

Next you start to gather data to help you identify and detect the activity of potential adversaries in your environment. You can get effective threat intelligence from a variety of different sources. We divide security monitoring feeds into five high-level categories:

  • Compromised Devices: This data source provides external notification that a device is acting suspiciously by communicating with known bad sites or participating in botnet-like activities. Services are emerging to mine large volumes of Internet traffic to identify such devices.
  • Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices. This enables you to define both technical and behavioral indicators to search for within your environment, as Malware Analysis Quant described in gory detail.
  • IP Reputation: The most common reputation data is based on IP addresses and provides a dynamic list of known bad and/or suspicious addresses. IP reputation has evolved since its introduction, now featuring scores to compare the relative maliciousness of different addresses, as well as factoring in additional context such as Tor nodes/anonymous proxies, geolocation, and device ID to further refine reputation.
  • Command and Control Networks: One specialized type of reputation often packaged as a separate feed is intelligence on command and control (C&C) networks. These feeds track global C&C traffic and pinpoint malware originators, botnet controllers, and other IP addresses and sites you should look for as you monitor your environment.
  • Phishing Messages: Most advanced attacks seem to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically use email as the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and tactics.

These security data types are available in a variety of packages. Here are the main categories:

  • Commercial integrated: Every security vendor seems to have a research group providing some type of intelligence. This data is usually very tightly integrated into their product or service. Sometimes there is a separate charge for the intelligence, and other times it is bundled into the product or service.
  • Commercial standalone: We see an emerging security market for standalone threat intel. These vendors typically offer an aggregation platform to collect external data and integrate into controls and monitoring systems. Some also gather industry-specific data because attacks tend to cluster around specific industries.
  • ISAC: Information Sharing and Analysis Centers are industry-specific organizations that aggregate data for an industry and share it among members. The best known ISAC is for the financial industry, although many other industry associations are spinning up their own ISACs as well.
  • OSINT: Finally open source intel encompasses a variety of publicly available sources for things like malware samples and IP reputation, which can be integrated directly into other systems.

The best way to figure out which data sources are useful is to actually use them. Yes, that means a proof of concept for the services. You can’t look at all the data sources, but pick a handful and start looking through the feeds. Perhaps integrate data into your monitors (SIEM and IPS) in alert-only mode, and see what you’d block or alert on, to get a feel for its value. Is the interface one you can use effectively? Does it take professional services to integrate the feed into your environment? Does a TI platform provide enough value to look at it every day, in addition to the 5-10 other consoles you need to deal with? These are all questions you should be able to answer before you write a check.
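
To give a sense of what an alert-only proof of concept can look like, the sketch below (hypothetical feed URL and log format) pulls a plain-text IP blocklist and reports, rather than blocks, outbound connections to listed addresses. Real integrations would use your SIEM or TI platform’s native connectors and richer indicator formats such as STIX.

```python
import urllib.request

# Hypothetical plain-text feed: one IP address per line. Replace with a real provider's URL.
FEED_URL = "https://example.com/ti/bad-ips.txt"

def load_blocklist(url):
    """Fetch the feed and return its entries as a set of strings."""
    with urllib.request.urlopen(url) as resp:
        return {line.strip() for line in resp.read().decode().splitlines() if line.strip()}

# Hypothetical outbound connection log: (source_host, destination_ip)
connections = [("web-01", "203.0.113.7"), ("web-02", "198.51.100.22")]

blocklist = load_blocklist(FEED_URL)
for host, dst in connections:
    if dst in blocklist:
        # Alert-only mode: report the match and let a human decide whether to block.
        print(f"TI match: {host} connected to listed address {dst}")
```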

Company-specific Intelligence

Many early threat intelligence services focused on general security data, identifying malware indicators and tracking malicious sites. But how does that apply to your environment? That is where the TI business is going: providing more context for generic data, applying it to your environment (typically through a Threat Intel Platform), and having researchers focus specifically on your organization.

This company-specific information comes in a few flavors, including:

  • Brand protection: Misuse of a company’s brand can be very damaging. So proactively looking for unauthorized brand uses (like on a phishing site) or negative comments in social media fora can help shorten the window between negative information appearing and getting it taken down.
  • Attacker networks: Sometimes your internal detection capabilities fail, so you have compromised devices you don’t know about. These services mine command and control networks to look for your devices. Obviously it’s late if you find your device actively participating in these networks, but it’s better to find it before your payment processor or law enforcement tells you you have a problem.
  • Third party risk: Another type of interesting information is about business partners. This isn’t necessarily direct risk, but knowing that you connect to networks with security problems can tip you to implement additional controls on those connections, or more aggressively monitor data exchanges with that partner.

The more context you can derive from the TI, the better. For example, if you’re part of a highly targeted industry, information about attacks in your industry can be particularly useful. It’s also great to have a service provider proactively look for your data in external forums, and watch for indications that your devices are part of attacker networks. But this context will come at a cost; you will need to evaluate the additional expense of custom threat information and your own ability to act on it. This is a key consideration. Additional context is useful only if your security program and staff can take advantage of it.

Managing Overlap

If you use multiple threat intelligence sources you will want to make sure you don’t get duplicate alerts. Key to determining overlap is understanding how each intelligence vendor gets its data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to make sure you don’t pay for redundant data sets.
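
One way to put a number on redundancy is to measure how much of one vendor’s indicator set already appears in another’s. A minimal sketch, assuming you can export each feed as a flat set of indicators:

```python
def overlap(feed_a, feed_b):
    """Fraction of feed_a's indicators that also appear in feed_b."""
    if not feed_a:
        return 0.0
    return len(feed_a & feed_b) / len(feed_a)

# Hypothetical indicator exports (IP addresses, domains, file hashes, etc.)
vendor_a = {"203.0.113.7", "198.51.100.22", "evil.example.net"}
vendor_b = {"203.0.113.7", "198.51.100.99", "evil.example.net"}

print(f"{overlap(vendor_a, vendor_b):.0%} of vendor A's indicators are also in vendor B")
```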

This is a good use for a TI platform, aggregating intelligence and making sure you only see actionable alerts. As described above, you’ll want to test these services to see how they work for you. In a crowded market vendors try to differentiate by taking liberties with what their services and products actually do. Be careful not to fall for marketing hyperbole about proprietary algorithms, Big Data analysis, staff linguists penetrating hacker dens, or other stories straight out of a spy novel. Buyer beware, and make sure you put each provider through its paces before you commit.

Our last point on external data in your TI program concerns short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many threat intelligence companies are startups, and might not be around in 3-4 years. Once you identify a set of core intelligence feeds that work consistently and effectively you can look at longer deals, but we recommend not doing that until your TI process matures and your intelligence vendor establishes a track record.

Providing Context

One of the things you have to keep in mind is the sheer number of indicators that come into play, especially when using multiple threat intelligence services. So you need to build a step into your TI process to provide context for the threat intelligence feeds prior to operationalizing them. That means you want to tailor what gets fed into the TI platform (which we discuss in the next post), so that all searching and indexing is only done for intelligence sources that are relevant for your organization. To use a very simplistic example, if you only have Mac devices on your network, getting a bunch of TI indicators for Windows Vista attacks will just clutter up your system and impact performance.

It’s a typical funnel concept. There are millions of indicators you can get via a TI service. Only hundreds may apply to your environment. You want to do some pre-processing of the TI as it comes into your environment to get rid of the data that isn’t relevant, making any alerts more actionable and allowing you to prioritize efforts on the attacks that present real risk.
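
The funnel can be as simple as tagging indicators with the platforms they apply to and discarding the ones you don’t run. A minimal sketch with invented tags and indicator values:

```python
# Technologies actually deployed in this (hypothetical) environment
DEPLOYED = {"macos", "linux", "nginx"}

# Indicators as they might arrive from a TI platform, tagged with applicable platforms
indicators = [
    {"value": "bad-domain.example.net", "applies_to": {"any"}},
    {"value": "c2-203.0.113.7", "applies_to": {"windows_vista"}},
    {"value": "webshell-hash-abc123", "applies_to": {"nginx", "apache"}},
]

# Keep only indicators that apply to anything, or to something we actually run
relevant = [
    i for i in indicators
    if "any" in i["applies_to"] or i["applies_to"] & DEPLOYED
]

print(f"Kept {len(relevant)} of {len(indicators)} indicators for monitoring")
for i in relevant:
    print(" ", i["value"])
```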

Now that you have selected threat intelligence feeds, you need to put them to work. Our next post will focus on what that means, and how TI can favorably impact your security program.

—Mike Rothman

Summary: Law Enforcement and the Cloud

By Rich

While the big story this week was the FBI vs. Apple, I’d like to highlight something a little more relevant to our focus on the cloud. You probably know about the DOJ vs. Microsoft. This is a critically important case where the US government wants to compel access to data held by the foreign branch of a US company, putting it in conflict with local privacy laws. I highly recommend you take a look, and we will post updates here.

Beyond that, I’m sick and shivering with a fever, so enough small talk and time to get to the links. Posting is slow for us right now because we are all cramming for RSA, but you are probably used to that.

BTW – it’s hard to find good sources for cloud and DevOps news and tutorials. If you have links, please email them to info@securosis.com.

If you want to subscribe directly to the Friday Summary only list, just click here.

And don’t forget:

Top Posts for the Week

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, and if you have submissions please email them to info@securosis.com.

One issue that comes up a lot in client engagements is the best “unit of deployment” to push applications into production. That’s a term I might have made up, but I’m an analyst, so we do that. Conceptually there are three main ways to push application code into production:

  1. Update code on running infrastructure. Typically using configuration management tools (Chef/Puppet/Ansible/Salt), code-specific deployment tools like Capistrano, or a cloud-provider specific tool like AWS CodeDeploy. The key is that a running server is updated.
  2. Deploy custom images, and use them to replace running instances. This is the very definition of immutable: you never log into or change a running server, you replace it. This relies heavily on auto scaling. It is a more secure option, but it can take time for new instances to deploy, depending on complexity and boot time. (A minimal sketch of this model follows the list.)
  3. Containers. Create a new container image and push that. It’s similar to custom images, but containers tend to launch much more quickly.
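
For the second option, the mechanics usually boil down to pointing an Auto Scaling group at a new image. Here is a hedged boto3 sketch; the group name, AMI ID, and instance type are hypothetical, and cycling out the old instances is left out for brevity:

```python
# Hedged sketch of option 2: point an Auto Scaling group at a new AMI so
# replacement instances come up from the new image. Names (group, AMI ID,
# instance type) are hypothetical; error handling and instance cycling
# (gradually terminating old instances) are omitted.
import time
import boto3

autoscaling = boto3.client("autoscaling")

new_ami = "ami-0123456789abcdef0"          # e.g. built by Packer
lc_name = f"webapp-lc-{int(time.time())}"  # unique launch configuration name

autoscaling.create_launch_configuration(
    LaunchConfigurationName=lc_name,
    ImageId=new_ami,
    InstanceType="m4.large",
)

# New instances launched by the group now use the new image; existing
# instances still need to be rotated out (scaling events or termination).
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="webapp-asg",
    LaunchConfigurationName=lc_name,
)
```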

As you can guess, I prefer the last two options, because I like locking down my instances and disabling any changes. That can really take security to the next level. Which brings us to our tool this week: Packer by HashiCorp. Packer is one of the best tools to automate creation of those images. It integrates with nearly everything, works on multiple cloud and container platforms, and even includes its own lightweight engine to run deployment scripts.
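
Here is a hedged sketch of driving Packer from a script. The source AMI, region, and hardening step are placeholders, and it assumes the packer binary and AWS credentials are already in place:

```python
# Hedged sketch: drive Packer from Python to bake a hardened AMI.
# Assumes the packer binary is installed and AWS credentials are in the
# environment; the source AMI, region, and provisioning commands are
# hypothetical placeholders.
import json
import subprocess

template = {
    "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0123456789abcdef0",
        "instance_type": "t2.micro",
        "ssh_username": "ec2-user",
        "ami_name": "webapp-hardened-{{timestamp}}",
    }],
    "provisioners": [{
        "type": "shell",
        "inline": [
            "sudo yum -y update",   # real hardening/lockdown steps would go here
        ],
    }],
}

with open("webapp.json", "w") as f:
    json.dump(template, f, indent=2)

# Validate the template, then build the image
subprocess.check_call(["packer", "validate", "webapp.json"])
subprocess.check_call(["packer", "build", "webapp.json"])
```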

Packer is an essential tool in the DevOps / cloud quiver, and can really enhance security because it enables you to adopt immutable infrastructure.

Securosis Blog Posts this Week

Other Securosis News and Quotes

We are posting all our RSA Conference Guide posts over at the RSA Conference blog – here are the latest:

Training and Events

—Rich

Wednesday, February 17, 2016

Firestarter: RSA Conference—the Good, Bad, and the Ugly

By Rich

Every year we focus a lot on the RSA Conference. Love it or hate it, it is the biggest event in our industry. As we do every year, we break down some of the improvements and disappointments we expect to see. Plus, we spend a few minutes talking about some of the big changes coming here at Securosis. We cover a possibly-insulting keynote, the improvements in the sessions, and how we personally use the event to improve our knowledge.

Watch or listen:


—Rich

Tuesday, February 16, 2016

Securing Hadoop: Technical Recommendations

By Adrian Lane

Before we wrap up this series on securing Hadoop databases, I am happy to announce that Vormetric has asked to license this content, and Hortonworks is evaluating a license as well. It’s community support that allows us to bring you this research free of charge. I’ve also received a couple of email and Twitter responses to the content; if you have more input to offer, now is the time to send it along to be evaluated with the rest of the feedback, as we will assemble the final paper in the coming week. And with that, on to the recommendations.

The following are our security recommendations to address security issues with Hadoop and NoSQL database clusters. The last time we made recommendations we joked that many security tools broke Hadoop scalability; your cluster was secure because it was likely no one would use it. Fast forward four years, and both commercial and open source technologies have advanced considerably, not only addressing the threats you’re worried about, but doing so with tools designed specifically for Hadoop. This means the possibility a security tool will compromise cluster performance and scalability is low, and the integration hassles of old are mostly behind us.

In fact, it’s because of the rapid technical advancements in the open source community that we have done an about-face on where to look for security capabilities. We are no longer focused on just third-party security tools, but largely on the open source community, which has helped close the major gaps in Hadoop security. That said, many of these capabilities are new, and like most new things, they lack a degree of maturity. You still need to go through a tool selection process based upon your needs, and then do the integration and configuration work.

Requirements

As security in and around Hadoop is still relatively young, it is not a foregone conclusion that all security tools will work with a clustered NoSQL database. We still witness instances where vendors parade the same old products they offer for other back-office systems and relational databases. To ensure you are not duped by security vendors you still need to do your homework: evaluate products to ensure they are architecturally and environmentally consistent with the cluster architecture, not in conflict with the essential characteristics of Hadoop.

Any security control used for NoSQL must meet the following requirements:

  1. It must not compromise the basic functionality of the cluster.
  2. It should scale in the same manner as the cluster.
  3. It should address a security threat to NoSQL databases or data stored within the cluster.

Our Recommendations

In the end, our big data security recommendations boil down to a handful of standard tools which can be effective in setting a secure baseline for Hadoop environments:

  1. Use Kerberos for node authentication: We believed – at the outset of this project – that we would no longer recommend Kerberos. Implementation and deployment challenges with Kerberos suggested customers would go in a different direction. We were 100% wrong. Our research showed that adoption has increased considerably over the last 24 months, specifically because the enterprise distributions of Hadoop have streamlined the integration of Kerberos, making it reasonably easy to deploy. Now, more than ever, Kerberos is being used as a cornerstone of cluster security. It remains effective for validating nodes and – for some – authenticating users, and other security controls piggy-back off Kerberos as well. Kerberos is one of the most effective security controls at our disposal; it’s built into the Hadoop infrastructure, and enterprise bundles make it accessible, so we recommend you use it.
  2. Use file layer encryption: Simply stated, this is how you will protect data. File encryption protects against two attacker techniques for circumventing application security controls: it protects data if malicious users or administrators gain access to data nodes and directly inspect files, and it renders stolen files or copied disk images unreadable. And if you need to address compliance or data governance requirements, data encryption is not optional. While it may be tempting to rely upon encrypted SAN/NAS storage devices, they don’t provide protection from credentialed user access, granular protection of files, or multi-key support. File layer encryption provides consistent protection across different platforms regardless of OS/platform/storage type, with some products even protecting encryption operations in memory. Just as important, encryption meets our requirements for big data security – it is transparent to both Hadoop and calling applications, and scales out as the cluster grows. You do have a choice to make: use open source HDFS encryption, or a third-party commercial product. Open source options are freely available and include open source key management support, but the HDFS encryption engine only protects data in HDFS, leaving other types of files exposed, and the open source options lack the external key management, trusted binaries, and full support that commercial products provide. Commercial variants that work at the file system layer cover all files. Free is always nice, but for many of those we polled, complete coverage and support tilted the balance for enterprise customers. Regardless of which option you choose, this is a mandatory security control. (A minimal sketch of HDFS encryption zone setup follows this list.)
  3. Use key management: File layer encryption is not effective if an attacker can access the encryption keys. Many big data cluster administrators store keys on local disk drives because it’s quick and easy, but it’s also insecure, as keys can be collected by the platform administrator or an attacker. And we still see keytab files sitting around unprotected in file systems. Use a key management service to distribute keys and certificates, and manage different keys for each group, application, and user. This requires additional setup, and possibly commercial key management products, to scale with your big data environment, but it’s critical. Most of the encryption controls we recommend depend on key/certificate security.
  4. Use Apache Ranger: In the original version of this research we were most worried about the use of a dozen modules with Hadoop, all deployed with ad hoc configurations, hidden within the complexities of the cluster, each offering up a unique attack surface to potential attackers. Deployment validation remains at the top of our list of concerns, but Apache Ranger provides a consistent management plane for setting configurations and usage policies to protect data within the cluster. You’ll still need to address issues of patching the Hadoop stack, application configuration, managing trusted machine images, and platform discrepancies. Some of those interviewed used automation scripts and source code control, others leveraged more traditional patch management systems to keep track of revisions, while still others have a management nightmare on their hands. We also recommend use of automation tools, such as Chef and Puppet, to orchestrate pre-deployment tasks: configuration, assembling from trusted images, patching, issuing keys, and even running tools like vulnerability scanners prior to deployment. Building the scripts and setting up these services takes time up front, but pays for itself in reduced management time and effort later, and ensures that each node comes online with baseline security in place.
  5. Use logging and monitoring: To perform forensic analysis, diagnose failures, or investigate unusual behavior, you need a record of activity. You can leverage built-in functions of Hadoop to create event logs, and even leverage the cluster itself to store events. Tools like LogStash, Log4J, and Kafka help with streaming, management, and searching, and there are plug-ins available to stream standardized syslog feeds to supporting SIEM platforms or even Splunk. We also recommend the use of more context-aware monitoring tools; in 2012 none of the activity monitoring tools worked with big data platforms, but now they do. These capabilities usually plug into supporting modules like Hive and collect the query, parameters, and specifics about the user/application that issued the query. This approach goes beyond basic logging, as it can detect misuse and even alter the results a user sees. These tools can also feed events into native logs, SIEM, or even Database Activity Monitoring tools.
  6. Use secure communication: Implement secure communication between nodes, and between nodes and applications. This requires an SSL/TLS implementation that actually protects all network communications rather than just a subset. This imposes a small performance penalty on transfer of large data sets around the cluster, but the burden is shared across all nodes. The real issue for most is setup, certificate issuance and configuration.
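
To make recommendation 2 concrete, here is a minimal sketch of creating an HDFS encryption zone with the transparent encryption built into Hadoop 2.6+. It assumes the Hadoop KMS is already configured and the commands run with sufficient privileges; the key name and path are hypothetical:

```python
# Hedged sketch of recommendation 2: create an HDFS "encryption zone" using
# native transparent encryption (Hadoop 2.6+). Assumes the Hadoop KMS is
# configured and the commands run as a suitably privileged user; the key
# name and path are hypothetical.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

run(["hadoop", "key", "create", "app1-key", "-size", "256"])  # key lives in the KMS
run(["hdfs", "dfs", "-mkdir", "-p", "/data/app1"])
run(["hdfs", "crypto", "-createZone", "-keyName", "app1-key", "-path", "/data/app1"])
run(["hdfs", "crypto", "-listZones"])                         # verify the zone exists
```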

The use of encryption, authentication, and platform management tools will greatly improve the security of Hadoop clusters, and close off all of the easy paths attackers use to steal information or compromise functions. For some challenges, such as authentication, Hadoop provides excellent integration with Active Directory and LDAP services. For authorization, there are modules and services that support fine-grained control over data access, thankfully moving beyond simple role-based access controls and making application developers’ jobs far easier. The Hadoop community has largely embraced security, and done so far faster than we imagined possible in 2012.

On security implementation strategies: when we speak with Hadoop architects and IT managers, we still hear that the most popular security model is to hide the entire cluster with network segmentation, and hope attackers can’t get past the application. But that’s not such a bad thing, as almost everyone we spoke with has continuously evolved other areas of security within their cluster. And much like Hadoop itself, administrators and cluster architects are getting far more sophisticated about security. Most we spoke with have road-mapped all of these recommended controls, and are taking the next step to fulfill compliance obligations. Consider the recommendations above a minimum set of preventative security measures. They are easy to recommend – they are simple, cost-effective, and scalable, and they address real security deficiencies in big data clusters. Nothing suggested here harms performance, scalability, or functionality. Yes, they are more work to set up, but relatively simple to manage and maintain.

We hope you find this paper useful. If you have any questions or want to discuss the specifics of your situation, feel free to send us a note at info@securosis.com.

—Adrian Lane

Monday, February 15, 2016

Securing Hadoop: Enterprise Security For NoSQL

By Adrian Lane

Hadoop is now enterprise software.

There, I said it. I know lots of readers in the IT space still look at Hadoop as an interloper, or worse, part of the rogue IT problem. But better than 50% of the enterprises we spoke with are running Hadoop somewhere within the organization. A small percentage are running Mongo, Cassandra or Riak in parallel with Hadoop, for specific projects. Discussions on what ‘big data’ is, if it is a viable technology, or even if open source can be considered ‘enterprise software’ are long past. What began as proof of concept projects has matured into critical application services. And with that change, IT teams are now tasked with getting a handle on Hadoop security, to which they respond with questions like “How do I secure Hadoop?” and “How do I map existing data governance policies to NoSQL databases?”

Security vendors will tell you both attacks on corporate IT systems and data breaches are prevalent, so with gobs of data under management, Hadoop provides a tempting target for ‘hackers’. All of which is true, but as of today there really have not been major data breaches where Hadoop played a part in the story. As such, this sort of ‘FUD’ carries little weight with IT operations. But make no mistake, security is a requirement! As sensitive information, customer data, medical histories, intellectual property, and just about every other type of data used in enterprise computing is now commonly used in Hadoop clusters, the ‘C’ word (i.e., Compliance) has become part of their daily vocabulary. One of the big changes we’ve seen in the last couple of years is Hadoop becoming business-critical infrastructure; another – directly caused by the first – is that IT is being tasked with bringing existing clusters in line with enterprise compliance requirements.

This is somewhat challenging, as a fresh install of Hadoop suffers from all the same weak points traditional IT systems have, so it takes work to get security set up and reports generated. For clusters that are already up and running, you need to choose technologies and a deployment roadmap that do not upset ongoing operations. On top of that, there is the additional challenge that the in-house tools you use to secure things like SAP, or the SIEM infrastructure you use for compliance reporting, may not be suitable for NoSQL.

Building security into the cluster

The number of security solutions that are compatible with – if not outright built for – Hadoop is the biggest change since 2012. All of the major security pillars – authentication, authorization, encryption, key management, and configuration management – are covered, and the tools are viable. Most of the advancements have come from the firms that provide enterprise distributions of Hadoop. They have built, and in many cases contributed back to the open source community, security tools that accomplish the basics of cluster security. When you look at the threat-response models introduced in the previous two posts, every compensating security control is now available. Better still, they have done a lot of the integration legwork for services like Kerberos, taking a lot of the pain out of deployments.

Here are some of the components and functions that were not available – or not viable – in 2012.

  • LDAP/AD Integration – Technically AD and LDAP integration were available in 2012, but these services have both been advanced, and are easier to integrate than before. In fact, this area has received the most attention, and integration is as simple as a setup wizard with some of the commercial platforms. The benefits are obvious, as firms can leverage existing access and authorization schemes, and defer user and role management to external sources.
  • Apache Ranger – Ranger is one of the more interesting technologies to become available, and it closes the biggest gap: module security policies and configuration management. It provides a tool for cluster administrators to set policies for different modules like Hive, Kafka, HBase or YARN. What’s more, those policies are in context to the module, so it sets policies for files and directories in HDFS, SQL policies in Hive, and so on. This helps with data governance and compliance, as administrators set how a cluster should be used, or how data is to be accessed, in ways that simple role-based access controls cannot.
  • Apache Knox – You can think of Knox, in its simplest form, as a Hadoop firewall. More correctly, it is an API gateway. It handles HTTP and RESTful requests, enforcing authentication and usage policies on inbound requests, and blocking everything else. Knox can be used as a virtual ‘moat’ around a cluster, or used with network segmentation to further reduce the network attack surface. (A short access sketch follows this list.)
  • Apache Atlas – Atlas is a proposed open source governance framework for Hadoop. It allows annotation of files and tables, setting relationships between data sets, and even importing metadata from other sources. These features are helpful for reporting, data discovery, and controlling access. Atlas is new, and we expect it to mature significantly in coming years, but for now it offers some valuable tools for basic data governance and reporting.
  • Apache Ambari – Ambari is a facility for provisioning and managing Hadoop clusters. It helps admins set configurations and propagate changes to the entire cluster. During our interviews we only spoke to two firms using this capability, but we received positive feedback from both. Additionally we spoke with a handful of companies who had written their own configuration and launch scripts, with pre-deployment validation checks, usually for cloud and virtual machine deployments. This latter approach was more time-consuming to create, but offered greater capabilities, with each function orchestrated within IT operational processes (e.g., continuous deployment, failure recovery, DevOps). For most, Ambari’s ability to get you up and running quickly and provide consistent cluster management is a big win and a suitable choice.
  • Monitoring – Hive, PIQL, Impala, Spark SQL, and similar modules offer SQL or pseudo-SQL syntax. This means the activity monitoring, dynamic masking, redaction, and tokenization technologies originally developed for relational platforms can be leveraged by Hadoop. The result is we can both alert and block on misuse, or provide fine-grained authorization (i.e., beyond role-based access) by altering queries or query result sets based on user metadata. And because these technologies examine queries, they offer an application-centric view of events that is not always captured in log files.
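
To make the Knox item above concrete, here is a hedged sketch of hitting WebHDFS through a Knox gateway rather than talking to cluster nodes directly. The host, topology, credentials, and path are hypothetical, and TLS verification is disabled only because demo gateways typically run self-signed certificates:

```python
# Hedged sketch: access WebHDFS through an Apache Knox gateway instead of
# talking to cluster nodes directly. Host, topology name, credentials, and
# path are hypothetical; verify=False is only for a demo gateway with a
# self-signed certificate.
import requests

KNOX = "https://knox.example.com:8443/gateway/default"

resp = requests.get(
    f"{KNOX}/webhdfs/v1/data/app1",
    params={"op": "LISTSTATUS"},
    auth=("analyst1", "password"),   # Knox authenticates against LDAP/AD
    verify=False,
)
resp.raise_for_status()

# WebHDFS returns a JSON listing of the directory contents
for entry in resp.json()["FileStatuses"]["FileStatus"]:
    print(entry["pathSuffix"], entry["type"])
```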

Your first step in addressing these compliance concerns is mapping your existing governance requirements to a Hadoop cluster, then deciding on suitable technologies to meet data and IT security requirements. Next you will need to deploy technologies that provide security and reporting functions, and set up the policies to enforce usage or detect misuse. Since 2012, many technologies have become available which can address common threats without killing scalability and performance, so there is no need to reinvent the wheel. But you will need to assemble these technologies into a complete program, so there is work to be done. Let’s sketch out some over-arching strategies, then provide a basic roadmap to get there.

Security Models

Walled Garden

The most common approach today is a ‘walled garden’ security model. You can think of this as the ‘moat’ model used for mainframe security for many years: place the entire cluster onto its own network, and tightly control logical access through firewalls or API gateways, plus access controls for user and application authentication. In practice, with this model there is virtually no security within the NoSQL cluster itself; data and database security depend on the outer ‘protective shell’ of the network and applications that surround the database. The advantage is simplicity: any firm can implement this model with existing tools and skills, without performance or functional degradation to the database. On the downside, security is fragile; once a failure of the firewall or application occurs, the system is exposed. This model also does not provide a means to prevent credentialed users from misusing the system or viewing/modifying data stored in the cluster. For organizations not particularly worried about security, this is a simple, cost-effective approach.

Cluster Security

Unlike relational databases, which function like a black box, Hadoop exposes its skeleton to the network. Inter-node communication, replication, and other cluster functions occur between many machines, through different types of services. Securing a Hadoop cluster is more akin to securing an entire data center than a traditional database. That said, for the best possible protection, building security into cluster operations is critical. This approach leverages security tools built in – or third-party products integrated into – the NoSQL cluster. Security in this case is systemic, built to be part of the base cluster function.

Built-in Security

Tools may include SSL/TLS for secure communication, Kerberos for node authentication, transparent encryption for data-at-rest security, and identity and authorization (e.g., groups and roles) management, just to name a few. This approach is more difficult, as there are a lot more moving parts and areas where some skill is required. The setup of several security functions targeted at specific risks to the database infrastructure takes time. And, as third-party security tools are often required, it is typically more expensive. However, it does secure a cluster from attackers, rogue admins, and the occasional witless application programmer. It’s the most effective, and most comprehensive, approach to Hadoop security.

Data Centric Security

Big data systems typically share data from dozens of sources. As firms do not always know where their data is going, or what security controls are in place when it is stored, they’ve taken steps to protect the data regardless of where it is used. This model is called data-centric security because the controls are part of the data, or in some cases, the presentation layer of a database.

The three basic tools that support data-centric security are tokenization, masking, and data element encryption. You can think of tokenization just like a subway or arcade token: it has no cash value but can be used to ride the train or play a game. In this case a data token is provided in lieu of sensitive data – it’s commonly used in credit card processing systems to substitute for credit card numbers. The token has no intrinsic value other than as a reference to the original value in some other (e.g., more secure) database. Masking is another very common tool used to protect data elements while retaining the aggregate value of a data set. For example, firms may substitute an individual’s social security number with a random number, replace their name with one picked randomly from a phone book, or replace a date value with a random date within some range. In this way the original – sensitive – data value is removed entirely from query results, but the value of the data set is preserved for analysis. Finally, data elements can be encrypted and passed without fear of compromise; only legitimate users with the right encryption keys can view the original values.
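
Here is a minimal Python sketch of tokenization and masking, purely to illustrate the difference between the two; a real deployment needs a hardened token vault and vetted, format-preserving techniques:

```python
# Minimal sketch of two data-centric controls described above: tokenization
# (swap the real value for a reference token) and masking (substitute a
# random value that preserves the format). Illustrative only -- a real
# system needs a secure token vault and vetted randomness.
import secrets

token_vault = {}   # stand-in for a separate, better-secured datastore

def tokenize(ssn):
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = ssn          # original recoverable only via the vault
    return token

def mask_ssn(_ssn):
    # Keep the format, discard the value entirely
    return f"{secrets.randbelow(900) + 100:03d}-{secrets.randbelow(100):02d}-{secrets.randbelow(10000):04d}"

record = {"name": "Jane Doe", "ssn": "078-05-1120"}
print({"name": record["name"], "ssn_token": tokenize(record["ssn"])})
print({"name": record["name"], "ssn_masked": mask_ssn(record["ssn"])})
```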

The data-centric security model provides a great deal of security when the systems that process data cannot be fully trusted. And many enterprises, given the immaturity of the technology, do not fully trust big data clusters to protect information. But a data-centric security model requires careful planning and tool selection, as it’s more about information lifecycle management. You define the controls over what data can be moved, and what protection must be applied before it is moved. Short of deleting sensitive data, this is the best model when you must populate a big data cluster for analysis work but cannot guarantee security.

These models are very helpful for conceptualizing how you want to approach cluster security. And they are really helpful when trying to get a handle on resource allocation: what approach is your IT team comfortable managing, and what tools do you have the funds to acquire? That said, the reality is that firms no longer wholly adhere to any single model; they use a combination of two. Some firms we interviewed used application gateways to validate requests, along with IAM and transparent encryption to provide administrative segregation of duties on the back end. In another case, the highly multi-tenant nature of the cluster meant they relied heavily on TLS for session privacy, and implemented dynamic controls (e.g., masking, tokenization, and redaction) for fine-grained control over data.

In our next post we will close this series with a set of succinct technical recommendations that apply to all use cases.

—Adrian Lane

Friday, February 12, 2016

The Summary is dead. Long live the Summary!

By Rich

As part of our changes at Securosis this year, it’s time to say goodbye to the old Friday Summary, and hello to the new one. Adrian and I started the Summary way back before Mike joined the company, as our own version of his weekly Security Incite. Our objective was to review the highlights of the week, both our work and things we found on the Internet, typically with an introduction based on events in our personal lives.

As we look at growing and changing our focus this year, it’s time for a different format. Mike’s Incite (usually released on Wednesdays) does a great job highlighting important security stories, or whatever we find interesting. The Summary has always overlapped a bit. We also developed a tendency to overstuff it with links.

Moving forward we are switching gears, and the Summary will now focus on our main coverage areas: cloud, DevOps, and automation security. The new sections will be more tightly curated and prioritized, to better fit a weekly newsletter format for folks who don’t have time to keep up on everything.

We plan to keep the Incite our source for general security industry analysis, with the revised Summary targeting our new focus areas. We are also changing our email list provider from Aweber to MailChimp due to an ongoing technical issue. As part of that switch we will soon offer more email subscription options, which we used to have. You can pick the daily digest of all our posts, the weekly Incite, and/or the weekly Summary. If you want to subscribe directly to the Friday Summary only, just click here.

If you have any feedback, as always please feel free to leave a comment or email us at info@securosis.com.

And don’t forget:

Top Posts for the Week

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us; if you have submissions please email them to info@securosis.com.

We are still looking at how we want to handle logging as we rearchitect securosis.com. Our friend Matt J. recommended I look at the fluentd open source log collector. It looks like a good replacement for Logstash, which is pretty heavy and can be hard to configure. You can run fluentd on an instance, in a container, or as an auto-scaled cluster if you need it, and pump everything into it. It can perform analysis right there, plus you can send events down the chain to things like ElasticSearch/Kibana, AWS Kinesis, or different kinds of storage.

What I really like is how it normalizes data into JSON as much as possible, which is great because that’s how we are structuring all our Trinity application logs.
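
As a quick illustration, here is a hedged sketch of sending structured events to a local fluentd collector with the fluent-logger Python package; the tag and field names are hypothetical:

```python
# Hedged sketch: emit structured JSON events to a local fluentd collector
# using the fluent-logger package (pip install fluent-logger). The tag,
# host, and field names are hypothetical.
from fluent import sender, event

# Point the logger at the fluentd forwarder (default port 24224)
sender.setup("webapp", host="localhost", port=24224)

# Each event is a structured record, which fluentd handles as JSON
event.Event("auth", {
    "action": "login",
    "user": "analyst1",
    "source_ip": "203.0.113.7",
    "result": "success",
})
```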

Our plan is to use fluentd with some basic rules for securosis.com, pushing the logs into AWS hosted ElasticSearch (to reduce management overhead), and then Kibana to roll our own SIEM. We see a bunch of clients following a similar approach. This also fits well into cloud logging architectures where you collect the logs locally and only send alerts back to the SOC. Especially with S3 support, that can really reduce overall costs.

Securosis Blog Posts this Week

Other Securosis News and Quotes

We are posting our RSA Conference Guide on the RSA Conference blog – here are the latest posts:

Training and Events

—Rich

Wednesday, February 10, 2016

Securing Hadoop: Operational Security Issues

By Adrian Lane

Beyond the architectural security issues endemic to Hadoop and NoSQL platforms discussed in the last post, IT teams expect some common security processes and supporting tools familiar from other data management platforms. That includes “turning the dials” on configuration management, vulnerability assessment, and maintaining patch levels across a complex assembly of supporting modules. The day-to-day processes IT managers follow to ensure typical application platforms are properly configured have evolved over years, through core platform capabilities, community contributions, and commercial third-party support filling in the gaps. There are best practices, checklists, and validation tools to verify things like admin rights being sufficiently tight, and nodes being patched against known and perhaps even unknown vulnerabilities. Hadoop security has come a long way in just a few years, but it still lacks maturity in day-to-day operational security offerings, and it is here that we find most firms continue to struggle.

The following is an overview of the most common threats to data management systems, where operational controls offer preventative security measures to close off the most common attacks. Again we will discuss the challenges, then map them to mitigation options.

  • Authentication and authorization: Identity and authentication are central to any security effort – without them we cannot determine who should get access to data. Fortunately the greatest gains in NoSQL security have been in identity and access management. This is largely thanks to providers of enterprise Hadoop distributions, who have performed much of the integration and setup work. We have evolved from simple in-database authentication and crude platform identity management to much better integrated LDAP, Active Directory, Kerberos, and X.509 based authentication options. Leveraging those capabilities, we can use established roles for authorization mapping, and sometimes extend to fine-grained authorization services with Apache Sentry, or custom authorization mapping controlled from within the calling application.
  • Administrative data access: Most organizations have platform administrators and NoSQL database administrators, both with access to the cluster’s files. To provide separation of duties – to ensure administrators cannot view content – a facility is needed to segregate administrative roles and keep unwanted access to a minimum. Direct access to files or data is commonly addressed through a combination of role based-authorization, access control lists, file permissions, and segregation of administrative roles – such as with separate administrative accounts, bearing different roles and credentials. This provides basic protection, but cannot protect archived or snapshotted content. Stronger security requires a combination of data encryption and key management services, with unique keys for each application or cluster. This prevents different tenants (applications) in a shared cluster from viewing each other’s data.
  • Configuration and Patch Management: With a cluster of servers, which may have hundreds of nodes, it is common to run different configurations and patch levels at one time. As nodes are added we see configuration skew. Keeping track of revisions is difficult. Existing configuration management tools can cover the underlying platforms, and HDFS Federation will help with cluster management, but they both leave a lot to be desired – including issuing encryption keys, avoiding ad hoc configuration changes, ensuring file permissions are set correctly, and ensuring TLS is correctly configured. NoSQL systems do not yet have counterparts for the configuration management tools available for relational platforms, and even commercial Hadoop distributions offer scant advice on recommended configurations and pre-deployment checklists. But administrators still need to ensure configuration scripts, patches, and open source code revisions are consistent. So we see NoSQL databases deployed on virtual servers and cloud instances, with home-grown pre-deployment scripts. Alternatively a “golden master” node may embody extensive configuration and validation, propagated automatically to new nodes before they can be added into the cluster.
  • Software Bundles: The application and Hadoop stacks are assembled from many different components. Underlying platforms and file systems also vary – with their own configuration settings, ownership rights, and patch levels. We see organizations increasingly using source code control systems to handle open source version management and application stack management. Container technologies also help developers bundle up consistent application deployments.
  • Authentication of applications and nodes: If an attacker can add a new node they control to the cluster, they can exfiltrate data from the cluster. To authenticate nodes (rather than users) before they can join a cluster, most firms we spoke with either employ X.509 certificates or Kerberos. Both can authenticate users as well, but we draw this distinction to underscore the threat of rogue applications or nodes being added to the cluster. Deployment of these services brings risks as well. For example if a Kerberos keytab file can be accessed or duplicated – perhaps using credentials extracted from virtual image files or snapshots – a node’s identity can be forged. Certificate-based identity options implicitly complicate setup and deployment, but properly deployed they can provide strong authentication and stronger security.
  • Audit and Logging: If you suspect someone has breached your cluster, can you detect it, or trace back to the root cause? You need an activity record, which is usually provided by event logging. A variety of add-on logging capabilities are available, both open source and commercial. Scribe and LogStash are open source tools which integrate into most big data environments, as do a number of commercial products. You can leverage the existing cluster to store logs, build an independent cluster, or even leverage other dedicated platforms like a SIEM or Splunk. That said, some logging options do not provide an auditor sufficient information to determine exactly what actions occurred. You will need to verify that your logs are capturing both the correct event types and user actions. A user ID and IP address are insufficient – you also need to know what queries were issued.
  • Monitoring, filtering, and blocking: There are no built-in monitoring tools to detect misuse or block malicious queries. There isn’t even yet a consensus on what a malicious big data query looks like – aside from crappy MapReduce scripts written by bad programmers. We are just seeing the first viable releases of Hadoop activity monitoring tools. No longer the “after-market speed regulators” they once were, current tools are typically embedded into a service like Hive or Spark to capture queries. Usage of SQL queries has blossomed in the last couple of years, so we can now leverage database activity monitoring technologies to flag misuse or even block it. These tools are still very new, but the approach has proven effective on relational platforms, and implementations should improve with time.
  • API security: Big data cluster APIs need to be protected from code and command injection, buffer overflow attacks, and all the other web service attacks. This responsibility typically rests on the applications using the cluster, but not always. Common security controls include integration with directory services, mapping OAuth tokens to API services, filtering requests, input validation, and managing policies across nodes. Some people leverage API gateways and whitelist allowable application requests. Again a handful of off-the-shelf solutions can help address API security, but most of the options are based on a gateway funneling all users and requests through a single interface (choke-point). Fortunately modern DevOps techniques for application stack patching and pre-deployment validation are proving effective at addressing application and cluster security issues. There are a great many API security issues, but a full consideration is beyond the scope of this paper.

Threat-response models

There are one or more security countermeasures to mitigate each of the threats identified above. This diagram shows some options at your disposal.

Threat response model

Our next post will discuss strategic security models and how some of these security technologies are deployed. You will see how some deployments aim for simplicity and ease of management, rather than attempting to achieve the highest security they can. For example some firms use Kerberos to uniquely identify nodes on the cluster and leverage Kerberos certificates to prove identity. Others issue a TLS certificate to each node before adding it to the cluster – this provides bidirectional identification between nodes and session encryption, but not really node authentication. Kerberos offers much stronger identity security and enforcement, with a greater setup and management cost.

—Adrian Lane

Friday, February 05, 2016

Summary: Die Blah, Die!!

By Rich

Rich here.

I was a little burnt out when the start of this year rolled around. Not “security burnout” – just one of the regular downs that hit everyone in life from time to time. Some of it was due to our weird year with the company, a bunch of it was due to travel and impending deadlines, plus there was all the extra stress of trying to train for a marathon while injured (and working a ton).

Oh yeah, and I have kids. Two of whom are in school. With homework. And I thought being a paramedic or infosec professional was stressful?!?

Even finishing the marathon (did I mention that enough?) didn’t pull me out of my funk. Even starting the planning for Securosis 2.0 only mildly engaged my enthusiasm. I wasn’t depressed by any means – my life is too awesome for that – but I think many of you know what I mean. Just a… temporary lack of motivation.

But last week it all faded away. All it took was a break from airplanes, putting some new tech skills into practice, and rebuilding the entire company.

A break from work travel is kind of like the reverse of a vacation. The best vacations are a month long – a week to clear the head, two weeks to enjoy the vacation, a week to let the real world back in. A gap in work travel does the same thing, except instead of enjoying vacation you get to enjoy hitting deadlines. It’s kind of the same.

Then I spent time on a pet technical project and built the code to show how event-driven security can work. I had to re-learn Python while learning two new Amazon services. It was a cool challenge, and rewarding to build something that worked like I hoped. At the same time I was picking up other new skills for my other RSA Conference demos.

The best part was starting to rebuild the company itself. We’re pretty serious about calling this our “Securosis 2.0 pivot”. The past couple weeks we have been planning the structure and products, building out initial collateral, and redesigning the website (don’t worry – with our design firm). I’ve been working with our contractors to build new infrastructure, evaluating new products and platforms, and firming up some partnerships. Not alone – Mike and Adrian are also hard at work – but I think my pieces are a lot more fun because I get the technical parts.

It’s one thing to build a demo or write a technical blog post, but it’s totally different to be building your future. And that was the final nail in the blah’s coffin.

A month home. Learning new technical skills to build new things. Rebuilding the company to redefine my future. It turns out all that is a pretty motivating combination, especially with some good beer and workouts in the mix, and another trip to see Star Wars (3D IMAX with the kids this time).

Now the real challenge: seeing if it can survive the homeowner’s association meeting I need to attend tonight. If I can make it through that, I can survive anything.

Photo credit: Blah from pinterest

And now on to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Andy, in response to Event-Driven AWS Security: A Practical Example.

Cool post. We could consider the above as a solution to an out of band modification of a security group. If the creation and modification of all security groups is via Cloudformation scripts, a DevOps SDLC could be implemented to ensure only approved changes are pushed through in the first place. Another question is how does the above trigger know the modification is unwanted?! It’s a wee bugbear I have with AWS that there’s not currently a mechanism to reference rule functions or change controls.

My response:

I actually have some techniques to handle out of band approvals, but it gets more advanced pretty quickly (plan is to throw some of them into Trinity once we start letting anyone use it).

One quick example… build a workflow that kicks off a notification for approval, then the approval modifies something in Dynamo or S3, then that is one of the conditionals to check. E.g. have your change management system save down a token in S3 in a different account, then the Lambda function checks that.

You get to use cross-account access for separation of duties. Gets complicated quickly, which is why we figure we need a platform to manage it all.
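
For the curious, here is a hedged sketch of that pattern: a Lambda function (Python) that checks for an approval token in S3 before reverting a security group change. The bucket, key scheme, event fields, and the specific rule being revoked are all hypothetical:

```python
# Hedged sketch of the approval-token idea above: a Lambda function triggered
# by a security group change looks for an approval object in an S3 bucket
# (ideally in a separate account) before deciding to revert. Bucket, key
# scheme, and event fields are hypothetical.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

APPROVAL_BUCKET = "change-approvals-other-account"

def handler(event, context):
    detail = event["detail"]
    group_id = detail["requestParameters"]["groupId"]

    try:
        # Change management drops a token named after the security group
        s3.head_object(Bucket=APPROVAL_BUCKET, Key=f"approved/{group_id}")
        return "change approved, leaving it alone"
    except ClientError:
        pass   # no token -> treat the change as unapproved

    # Revert the unapproved rule (in practice, pull the rule details from the event)
    ec2.revoke_security_group_ingress(
        GroupId=group_id,
        IpProtocol="tcp",
        FromPort=22,
        ToPort=22,
        CidrIp="0.0.0.0/0",
    )
    return "unapproved change reverted"
```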

—Rich

Wednesday, February 03, 2016

Incite 2/3/2016: Courage

By Mike Rothman

A few weeks ago I spoke about dealing with the inevitable changes of life and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or decisions could result in very major consequences, like being ostracized from your community, or even death in some areas of the world.

In my culture in the US, it’s expected that a majority of people should meander through their lives with their 2.2 kids, their dog, and their white picket fence, which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.

Courage

I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60’s, a US Air Force helicopter pilot who retired and has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day was very interesting. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests and applied and got into a few programs.

Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path.

Now that is courage. But he said something to me as we were about to unload from the lift for the last run of the day. “Mike, this isn’t work for me. I happened to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes.

Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.

Editors note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. -rich

–Mike

Photo credit: “Courage” from bfick


It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

* The Future of Security

Incite 4 U

  1. Evolution visually: Wade Baker posted a really awesome piece tracking the number of sessions and titles at the RSA Conference over the past 25 years. The growth in sessions is astounding (25% CAGR), up to almost 500 in 2015. Even more interesting is how the titles have changed. It’s the RSA Conference, so it’s not surprising that crypto would be prominent the first 10 years. Over the last 5? Cloud and cyber. Not surprising, but still very interesting facts. RSAC is no longer just a trade show. It’s a whole thing, and I’m looking forward to seeing the next iteration in a few weeks. And come swing by the DRB Thursday morning and say hello. I’m pretty sure the title of the Disaster Recovery Breakfast won’t change. – MR

  2. Embrace and Extend: The SSL/TLS cert market is a multi-billion dollar market – with slow and steady growth in the sale of certificates for websites and devices over the last decade. For the most part, certificate services are undifferentiated. Mid-to-large enterprises often manage thousands of them, which expire on a regular basis, making subscription revenue a compelling story for the handful of firms that provide them. But last week’s announcement that Amazon AWS will provide free certificates must have sent shivers through the market, including the security providers who manage certs or monitor for expired certificates. AWS will include this in their basic service, as long as you run your site in AWS. I expect Microsoft Azure and Google’s cloud to follow suit in order to maintain feature/pricing parity. Certs may not be the best business to be in, longer-term. – AL

  3. Investing in the future: I don’t normally link to vendor blogs, but this post by Chuck Robbins, Cisco’s CEO, is pretty interesting. He echoes a bunch of things we’ve been talking about, including how the security industry is people-constrained, and we need to address that. He also mentions a bunch of security issues, so security is clearly highly visible at the top of Cisco. Even better, Chuck announced a $10MM scholarship program to “educate, train and reskill the job force to be the security professionals needed to fill this vast talent shortage”. This is great to see. We need to continue to invest in humans, and maybe this will kick start some other companies to invest similarly. – MR

  4. Geek Monkey: David Mortman pointed me to a recent post about Automated Failure testing on Netflix’s Tech blog. A particularly difficult to find bug gave the team pause in how they tested protocols. Embracing both the “find failure faster” mentality, and the core Simian Army ideal of reliability testing through injecting chaos, they are looking at intelligent ways to inject small faults within the code execution path. Leveraging a very interesting set of concepts from a tool called Molly (PDF), they inject different results into non-deterministic code paths. That sounds exceedingly geeky, I know, but in simpler terms they are essentially fuzz testing inside code, using intelligently selected values to see how protocols respond under stress. Expect a lot more of this approach in years to come, as we push more code security testing earlier in the process. – AL

—Mike Rothman