Friday, March 20, 2015

New! Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers, & Applications

By Rich

Woo Hoo! It’s New Paper Friday!


Over the past month or so you have seen Adrian and me put together our latest work on encryption. This one is a top-level overview designed to help people decide which approach should work best for data center projects (including servers, storage, applications, cloud infrastructure, and databases). Now we have pieced it together into a full paper.

We’d like to thank Vormetric for licensing this content. As always we wrote it using our Totally Transparent Research process, and the content is independent and objective. Download the full paper.

Here’s an excerpt from the opening:

Today we see encryption growing at an accelerating rate in data centers, for a confluence of reasons. A trite way to summarize them is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even by their own governments).

Thanks to increasing demand we have a growing range of options, as vendors and even free and Open Source tools address this opportunity. We have never had more choice, but with choice comes complexity – and outside your friendly local sales representative, guidance can be hard to come by.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it all meet compliance requirements?
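To make the tokenization option in that example concrete, here is a minimal sketch (names and structure are purely illustrative, not from the paper): the application swaps the account number for a random token and keeps the real value in a separate vault, so the database and everything downstream never see the original.

```python
import secrets

class TokenVault:
    """Toy token vault mapping random tokens to real values.
    In production this would be a hardened, access-controlled service."""
    def __init__(self):
        self._store = {}

    def tokenize(self, account_number: str) -> str:
        # Token is random, so nothing about the account number leaks into it
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = account_number
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
# The application stores `token` in its database; only the vault maps it back
assert token != "4111-1111-1111-1111"
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

The tradeoffs the paper raises still apply: the vault becomes the thing you must protect, and its availability and access controls are where the real work lives.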

This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center: applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered it in depth in another paper. We will also cover tokenization and discuss its relationship with encryption.

We would like to thank Vormetric for licensing this paper, which enables us to release it for free. As always, the content is completely independent and was created in a series of blog posts (and posted on GitHub) for public comment.


Summary: Crunch Time

By Rich

I’ve had one conversation about 8 times this week:

“Ready for RSA?”

“Not even close.”

“Yeah, figured it would be better since they pushed it out an extra month, but not so much.”

For those who don’t know, the RSA conference is the biggest event in our industry. Usually it’s in February or March, but this year it’s in April. A full extra month to prep presentations, or marketing material for vendors (my end-user friends who aren’t presenting don’t worry about any of this). Plus there are all the community things, like the Security Blogger’s Meetup, our Disaster Recovery Breakfast, and so on.

Seems like we all just pushed everything back a month, and if anything are even further behind than usual. Or maybe that’s just me, a pathological procrastinator.

So I don’t have time for the usual Summary this week. Especially because we have a ton of projects going on concurrently, and I’m about to start bouncing around the country again for client projects. The travel itself isn’t exciting, but the projects are. Most of my trips are to help end-user orgs build out their cloud security strategy and tactics. It’s a big change from Gartner, where I never got to roll up my sleeves and dig in deep. The fascinating bit is the kinds of organizations moving to cloud (mostly AWS, because that’s where I’m deepest technically). Instead of being startups these are established companies, some quite large, and a few heavily regulated. I knew we’d get here someday, but I didn’t expect cloud adoption to hit these segments so soon.

Mike and Adrian are just as busy as I am, which is why the blog is so slow, but some new projects are about to hit. We’ve also been working on our annual RSA Guide, which you will start seeing pieces of soon. This year our Contributing Analysts wrote a lot of the content.

But hey, we’ve been around 8+ years and still put up multiple blog posts a week, even when things are ugly. So we have that going for us.

Which is nice.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Tom, in response to My $500 Cloud Security Screwup–UPDATED.

Great writeup – being able to admit you made a mistake is very hard for some, but we all do, bravo for being up front about it.

AWS (Amazon, in general) has always been really super super reasonable about charges with me – I too have had them reverse a charge (in my case, for Amazon prime that I didn’t really use) that was totally on my own shoulders, without me asking – good on them, it makes me feel very, very comfortable with trusting them to do the right thing. I like to think a big part of it was you posting about this and owning the issue – this is an awesome example of how to handle this sort of situation with integrity and competence.

I suggest the VERY first thing you do with a new AWS account is turn on MFA, make an IAM account, and put the master credentials on a thumb drive in a desk drawer (locked, ideally). Then, use that IAM account to make less-privileged ones, and use those in practice. It is a pain, to be sure, but it is important to lay a good foundation. (I actually have gone further and worked out federated access for our team at work, and ALL credentials that could reasonably be exposed have a very short lifespan – accidentally checked-in creds in code are to our internal auth server, unusable to the real world. It was a pain, but it lets me sleep better.)

You inspire me; I should clean up the federation server and put it out there for others to use.


Wednesday, March 18, 2015

Incite 3/18/2015: Pause

By Mike Rothman

It’s been over a month since I wrote an Incite. It’s the longest period of downtime since I joined Securosis. I could talk about my workload, which is bonkers right now. But over the years I’ve written the Incite regardless of workload. I could talk about excessive travel, but I haven’t been traveling nearly as much as last year. I could come up with lots of excuses, but as I tell my kids all the time, “I’m not in the excuses business.”

Here’s the reality: I needed a break. I have plenty to write about, but I found reasons not to write. There is a ton of stuff going on in security, so there were many interesting snippets I let fly right on by. But I didn’t write it, and I didn’t really question it. What I needed was what my Tao teacher calls a pause.

Hit the pause button

You could need a pause for lots of reasons. Sometimes you have been running too hard for too long. Sometimes you need to change things up a bit because the status quo makes you unhappy. Sometimes you need some space to recalibrate and figure out what you want to do and where you want to go. Of course, this could be for very little things, like writing the Incite every week. Or very big things. But without taking a pause, you don’t have the space to make objective decisions.

You are reading this, so obviously I am writing the Incite. So during my pause, it became clear that the Incite is an important part of what I do. But it’s bigger than that. It’s an important part of who I am. I have shared the good and the not so good through the years. I have met people who tell me they have experienced what I write about, and it’s helpful for them to commiserate – even if it’s virtual. Some tell me they learn through my Incites, and there is nothing more flattering. But it’s not why I write the Incite.

I write the Incite for me. I always have. It’s a journal of sorts representing my life, my views, and my situation at any given time. Every so often I go back a couple years and read my old stuff. It reminds me of what things were like back then. It’s useful because I don’t spend much time looking backwards. It’s interesting to see how different I am now. Some people journal in private. I do that too. But I have found my public journal is important to me.

The pause is over. I’m pushing Play. In the coming months there will be really cool stuff to share and some stuff that will be hard to communicate. But that’s life. You take the good and the bad without judgement. You move forward. At least I do. So stay tuned. The next few months are going to be very interesting, for so many reasons.


Photo credit: “Pause? 272/265” originally uploaded by Dennis Skley

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Cracking the Confusion

Applied Threat Intelligence

Network Security Gateway Evolution

Newly Published Papers

Incite 4 U

(Note: Don’t blame Rich or Adrian for the older Incite… They got me stuff on time – it just took me a month to post it. You know, that pause I talked about above.)

  1. There are no perfect candidates… There is no such thing as perfect security, so why would there be perfect security candidates? Our friend Andy Ellis, CISO of Akamai, offers a refreshing perspective on recruiting security professionals. Andy focuses on passion over immediate competence. If a person loves what they do they can learn the rest. I think that’s great, especially given the competition for those with the right certifications and keywords on their CVs. Andy also chooses to pay staffers fairly instead of pushing them to find other jobs as their skills increase. Again, very smart given the competition for security staff. The #1 issue we hear from CISO types, over and over, is the lack of staff / recruiting challenge. So you need to find folks in places others aren’t looking, and invest in them – knowing a few will leave for greener pastures at some point. That’s all part of the game. – MR

  2. No love: Another encryption vendor got rolled up recently, with Voltage Security acquired by HP. But before you lose your train of thought with jokes about how HP is where tech companies go to die – yeah, we heard a lot of that in the last 24 hours – note this is occurring with encryption firms of all sizes. In case you missed it, Porticor was acquired by Intuit the week before the HP/Voltage deal. And before that, SafeNet went to Gemalto, Entrust to Datacard, and Gazzang to Cloudera. You would think selling data encryption in the age of data breaches would be like giving ice cream to kids on a hot day, but the truth is selling is hard because implementing it is hard. Customers view encryption as a commodity, with one AES variant the same as every other, and complain bitterly about cost and key management headaches. Encryption platforms have matured steadily over the last 10 years, continually evolving to include format-preserving encryption, tokenization, transparent encryption, dynamic masking, and key storage and management, all while integrating with storage systems, applications, cloud services, and ‘big data’. The trend is clearly to bake data encryption in, but innovation and growing demand for data security mean this market is far from settled. – AL

  3. Bring Your Own Key: I’m a big fan of the cloud, and of encryption, which is why I’m excited to see Box announce their new Enterprise Key Management product. First a little full disclosure: I have known about this for a while and have done some work with Box (which was not a secret). That said, it isn’t like I get paid more if anyone buys the service from them. I’ve been on record for a few years as not a fan of proxy-based encryption for cloud computing. Shoving an appliance (or service) between your users and the cloud platform so you can encrypt a few fields seems like a kludge prone to breaking application functionality. But almost no providers allow customers to manage their own encryption in a way that can protect against misuse by the provider (or snoops, criminal or government). Box’s EKM enables customers to control their own encryption keys, but all the actual work happens within Box. This reduces the likelihood the application will break. It isn’t necessarily completely subpoena proof, but there is no way for anyone besides you to see your data unless you release the key. Amazon is one of the only other cloud providers supporting customer-managed keys, and I really hope this trend grows. But as Mike says, “Hope is not a strategy”, so vote with your dollars if you want more customer-controlled cloud key management. – RM

  4. Vulnerability management, still kicking…: I have voiced my disappointment that modern product reviews are consistently cursory, and rarely useful for procurement decisions. That doesn’t stop folks like SC Mag from continuing to review products, like their recent Vulnerability Management review. Yes, vulnerability management is still a thing – even if Gartner doesn’t think so anymore. That said, the major players in the market are changing direction, and they all seem to be headed different ways. One is climbing the stack, another is focused on identity, a third is morphing into a services-driven shop, and yet another is preoccupied with executive-level dashboards. And yes, they all still scan your stuff and generate long reports full of findings you’ll never get to. Same old, same old. But as you look to renew your product and/or service, it makes sense to learn about your chosen vendor’s longer-term strategy to ensure it still aligns with what you need. If not, make a change – it’s not like the other vendors can’t scan your stuff too. – MR

  5. Smart cards, disrupted: It’s happening again; the threat of EMV cards. The Smart Card Alliance position is that the liability shift for not using EMV will push adoption within mass merchants, while Visa representatives claim 525 million cards will be in the ‘ecosystem’ by the end of 2015. Bull$#!*. For the sake of round numbers say there are about 300 million US citizens – minus those under 18 – which would require each US adult to get two Chip and PIN cards over the next 10 months. Even if the US government issues an ID for every citizen, that milestone is not going to happen. Nor will merchants move fast enough with new terminals to support the cards. I understand the smart card industry’s angst – EMV needs to move forward in the US or get run over. Apple Pay basically virtualized Chip and PIN for payments, simultaneously showing consumers a model for health and ID cards pushed into mobile devices with less cost and pain. It’s not a new idea by any stretch, but Apple upended a bunch of firms who were positioning for the future. As Apple does from time to time. – AL

  6. Eye of Sauron: Big breaches happen, and no matter what anyone tells you they aren’t going away… ever. The goal of your security program is to minimize the potential damage because it can’t be eliminated. Even with all the high-profile breaches, there’s a lack of motivation for companies, even in regulated industries, to protect their data. Everyone ignored the HIPAA security requirements for years and years, until HITECH put baby teeth in place. But heck, with entirely too many friends still in healthcare, even that threat isn’t enough to be a true catalyst for action. So I’m always interested in events that change the economics of security. Like one of the biggest insurance markets taking a close look at insurer cybersecurity. Nothing may happen here – it isn’t like Eliot Spitzer is back in charge, kicking ass and (er… spanking… no… not going to say it) taking names (no mention of black books either…), but it only takes a couple state regulators in the right markets to move the needle and drive change. – RM

—Mike Rothman

Monday, March 16, 2015

Firestarter: Cyber Cash Cow

By Rich

Last week we saw a security company hit the $2.4B valuation level. Yes, that’s a ‘B’, as in billion. This week we dig into the changing role of money and investment in our industry, and what it might mean. We like to pretend keeping our heads down and focusing on defense and tech is all that matters, but practically speaking we need to keep half an eye on the market around us. It not only affects the tools at our disposal, but influences the entire course of our profession.

Watch or listen:


Tuesday, March 10, 2015

Take Control of Security for Mac Users

By Rich

I spend a lot of time on Apple security, more for personal reasons than anything else. They are the tools I use every day, and where I send most of my friends and family to manage their digital lives, so my investment runs deeper than anything financial. I have been the Security Editor over at TidBITS since about the time I founded Securosis, but I am not the only security expert over there. Joe Kissell has himself written books on the topic, and plenty of articles (mostly at TidBITS and Macworld).

Joe is currently writing a Take Control book on Mac security. The Take Control series of books are my favorite hands-on instructional guides, and I have used a fair few myself (Take Control is distinct from TidBITS, but closely related and run by the same team).

The first two chapters are available free online at TidBITS. The rest of the chapters become available to TidBITS members as Joe writes them. These books run much deeper than the white papers and articles we post on Securosis. The book is a soup-to-nuts hands-on guide to nearly everything you need to know to secure your own Mac.

Joe and I have talked about combining efforts for a Securosis/Take Control cross-branded version of the content if we can line up a licensee/sponsorship. If you are interested drop me a line.


Monday, March 09, 2015

Be Careful What You Wish For, It’s the SEVENTH Annual Disaster Recovery Breakfast

By Mike Rothman

2015 DRB, the be careful what you wish for edition

There seems to be something missing for us Securosis folks now that it’s the beginning of March. After some reflection we realized it’s that dull ache in our livers from surviving yet another RSA Conference. The show organizers had to move the conference to April this year, to ensure a full takeover of San Francisco. Regardless of when the conference is, there is one thing you can definitely count on: the DRB!

That’s right – once again Securosis and friends are hosting our RSA Conference Disaster Recovery Breakfast. This is the seventh year for this event, and we are considering delivering a bloody head to Jillian’s in homage to Se7en. Maybe that wouldn’t be the best idea – it might ruin our appetites. Though given how big the DRB has become, we probably should consider tactics to cut back – we pay for insane amounts of bacon.

Kidding aside, we are grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the glitzy show floor and club scene that is now the RSAC. By Thursday, if you’re anything like us, you will be a disaster and need to kick back, have some conversations at a normal decibel level, and grab a nice breakfast. Did we mention there will be bacon?

With the continued support of MSLGROUP and Kulesa Faul, as well as our new partner LEWIS PR, we are happy to provide an oasis in a morass of hyperbole, booth babes, and tchotchke hunters.

As always, the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted recovery items (non-prescription only) to ease your day. Yes, the bar will be open – Mike gets very grumpy if a mimosa is not waiting for him on arrival (and every 10 minutes thereafter).

Remember what the DR Breakfast is all about. No marketing, no spin, just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, we are confident you will enjoy the DRB as much as we do.

See you there.

To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.

—Mike Rothman

SecDevOps Learning Lab at RSA

By Rich

We were invited to run a two-hour learning lab on a topic of our choice this year at the RSA Conference. I suspect it will surprise… no one… that we chose Pragmatic SecDevOps as our topic.

This is a cool opportunity – it gives us a double-length session to mix in presentation, hands-on labs, demonstrations, and group activities. I realize some people roll their eyes when they see these buzzwords, but everything we will present is being used in the real world, often at leading-edge organizations. DevOps really is a thing, it really does affect security, and you really can use it to your advantage in super interesting ways.

Here is the official description.

Pragmatic SecDevOps

Date & Time: Wednesday, April 22, 2015, 10:20am-12:20pm

Abstract: As cloud and DevOps disrupt traditional approaches to security, new capabilities emerge to automate and enhance security operations. In this hands-on session attendees will learn pragmatic techniques for leveraging cloud computing and DevOps for improving security. Through a combination of demonstrations and exercises we will work through a string of real-world security automations.

We are still finalizing what will make the cut but here are some components we are considering including:

  • An updated (and concise) Pragmatic SecDevOps presentation to start the conversation.
  • A lab to automate embedding host security agents in cloud deployments (e.g., Chef/Puppet) and then use them to enforce security policies.
  • A lab to monitor your cloud security management plane.
  • A group exercise to adapt and embed security architectures to leverage new cloud capabilities. This one is interesting because we will be showing off some leading-edge architectures we are starting to see for DevOps and cloud deployments, which not many security people have been exposed to.
  • A security automation group exercise/hands-on lab where we will give you a library of Ruby methods to mix and match for different security functions.
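To give a flavor of the management-plane monitoring lab, here is a tiny sketch of the kind of check involved. The record format is hypothetical – in practice you would feed in something like your IaaS provider’s credential report – but the logic (flag accounts without MFA, flag stale access keys) is the core of the exercise.

```python
from datetime import date

# Hypothetical user inventory, e.g. exported from your cloud provider's
# credential report. Field names here are illustrative, not a real API.
users = [
    {"name": "root",  "mfa_enabled": False, "key_last_rotated": date(2014, 1, 15)},
    {"name": "alice", "mfa_enabled": True,  "key_last_rotated": date(2015, 3, 1)},
]

def audit_management_plane(users, today=date(2015, 3, 18), max_key_age_days=90):
    """Flag users without MFA and access keys older than the rotation window."""
    findings = []
    for u in users:
        if not u["mfa_enabled"]:
            findings.append(f"{u['name']}: MFA disabled")
        if (today - u["key_last_rotated"]).days > max_key_age_days:
            findings.append(f"{u['name']}: access key older than {max_key_age_days} days")
    return findings

for finding in audit_management_plane(users):
    print(finding)
```

The real lab goes further – pulling live data and wiring alerts into your workflow – but this is the shape of it.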

That is a ton of content, and we may not get to all of it. I will streamline some of the labs I normally have people work through manually in training, because we need to push through more quickly.

You need to pre-register to attend, and we will run a webcast at the beginning of April so people can prepare and be ready to participate in the hands-on sections. One nice thing about the Learning Labs is that they happen during the main conference – not the day before or at the end of the week.

Please feel free to drop us ideas, preferences, or comments below. We already have a lot of the content, but how we piece it together is still very much open to suggestion.


Friday, March 06, 2015

Friday Summary: More Cowbell

By Rich

Rich here.

Not to get too personal, but I had a dream about being back on ski patrol last night.

Of all the rescue things I did, ski patrol was one of the most satisfying. That probably sounds weird, because it means I was more satisfied picking up people who could afford $80 lift tickets than saving people in the inner city. But each activity brings a different kind of satisfaction, and when it comes to ski patrol, it was all about the independence.

I worked patrol part time at Copper Mountain for 5 years. We were pseudo-volunteers who would do everything full-timers did, except drive snowmobiles and throw bombs. Although some of us did get certified to drive (to ferry athletes and photographers at special events) and we could go out on avalanche control – just not light the boom-boom things.

Patrol is a physically demanding job. You don’t turn laps all day; if you aren’t on a work mission (fixing trail markers, setting safety gear, etc.), you hang out in one of the patrol buildings until you hear the dispatcher ring the cowbell. Yes, more cowbell. Someone would then snag the 1050 (injured person), get details, grab a rig (toboggan), and go find the patient.

It’s all solo after that. You ski (or in my case snowboard) to the patient, assess them, treat them, load them, and then take them to the base to either release or send to the clinic. Help is always available via radio if you need it, such as having a second person grab the tow line on the rig in really nasty conditions (usually a cross-slope traverse on ice), or if you hit CPR levels of badness, but otherwise it is a solo deal.

I loved working the back bowls. They were physically much tougher, but the environment was amazing. The main patrol building was called Motel 6, at around 12,000 feet. Just getting to it usually involved a hike. It wasn’t very large, but held a table, couch, and small kitchenette area. If you worked there, you wore an avalanche beacon and carried a shovel. Directly across the bowl from 6 was The Dumpster: two lift shack halves welded together with some crash pads on the floor and walls to sit on. Getting to The Dumpster took about 45 minutes and involved hiking the entire ridge around, topping out over 12,500’. The year I lived in Phoenix and flew back to work weekends… that hurt.

One of my most memorable calls was my first solo mission out of 6. Some guy injured his leg down near the bottom. Getting to him with the rig was easy, but getting out was more complex. It involved multiple “Doo pulls”. Our snowmobiles were all Ski-Doos, and for a Doo pull the driver would throw you a tow rope. You cannot safely tie it onto the rig, so you get in between the horns (handlebars) and wrap the end of the rope around one grip in such a way that it will only stay while you keep a firm hold on it. Then you handle steering. Fall, and the rope drops off – and you will probably get run over before momentum (or your head) stops the rig.

So I got towed out of the bowl, boarded the patient to my next pickup point, towed up to a better spot to reach the mountain base, and then followed the runs all the way down. It took well over an hour, on a hill I could ride top to bottom in under 10 minutes.

I don’t completely understand why this was so much more satisfying than working the ambulance or even a complex, multi-day mountain rescue. Perhaps because there are few cases in emergency services where you can honestly say you were responsible for saving someone. It is almost always a team effort, and real saves are rare. But on patrol I remember the time we were sweeping the hill at the end of the day and I found a girl who had just crashed on one of the big jumps. She wasn’t just unconscious – she wasn’t breathing. I repositioned her head, opened her airway, and she was fine with a mild concussion.

My call. My patient. My strength and skills tested, with an expectation that I wouldn’t need help beyond the occasional tow if gravity wasn’t there to help. Teamwork is deeply satisfying, but it is also nice to know you can handle things yourself.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts


Monday, March 02, 2015

Firestarter: Cyber vs. Terror (yeah, we went there)

By Rich

Last week the US Director of National Intelligence said cyberattacks are a greater risk than terrorism. This week we debate what that means, and whether terminology is getting so muddled that it becomes meaningless. Plus we rip into Rich’s post claiming security people need to stop thinking of themselves as warriors, and start thinking like spies.

Watch or listen:


Friday, February 27, 2015

Summary: You’re a Spy, not a Warrior

By Rich

Rich here.

These days it is hard to swing a cyberstick without hearing a cybergasp of cyberstration at the inevitable cyberbuse of the word “cyber”.

To be clear, I think ‘cybersecurity’ is not only an acceptable term, but a particularly suitable one. It is easy to understand and covers aspects of IT security that the term “IT security” doesn’t quite capture. There are entire verticals which think of IT security as “the stuff in the office” and use other terms for all the other technology that powers their operations.

But snapping cyber onto the front of another word can be misleading. Take, for example, cyberwar and cyberwarrior.

We are, very clearly, engaged in an ongoing long-term conflict with a myriad of threat actors. And I think there is something that qualifies as cyberwar, and even cyberwarriors. Believe it or not, some people with that skill set work in-theater, under arms, and at risk.

But when you dig in this is more a spy’s game than a warrior’s battlefield. Defensive security professionals are engaged more in counterintelligence and espionage than violent conflict, especially because we can rarely definitively attribute attacks or strike back.

Personally, as Han Solo once said, “Bring ‘em on, I’d prefer a straight fight to all this sneaking around”, but it isn’t actually up to me. So I find I need to think as much in terms of counterintelligence as straight-up defense. That’s why I love some of the concepts in active defense, such as intrusion deception – because we can design traps and misdirection for attackers, giving ourselves a better chance to detect and contain them.

Admit it – you love spy movies. And while you probably won’t get the girl in the end (that’s a joke for whoever saw Kingsman), and you aren’t saving the world, you also probably don’t have to worry about someone sticking bamboo under your fingernails.

Until audit season.

I have some family in town and ran out of time to do a proper summary, so I shortened things this week.

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts


Wednesday, February 25, 2015

Cracking the Confusion: Encryption Decision Tree

By Rich, Adrian Lane

This is the final post in this series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post, and find the other posts under “related posts” in full article view.

Choosing the Best Option

There is no way to fully cover all the myriad factors in picking a specific encryption option in a (relatively) short paper like this, so we compiled a visual decision tree to at least get you into the right bucket.

Here are a few notes on the decision tree.

  • This isn’t exhaustive but should get you looking at the right set of technologies.
  • In all cases you will want secure external key management.
  • In general, for discrete data you want to encrypt as high in the stack as possible. When you don’t need as much separation of duties, encrypting lower may be easier and more cost-effective.
  • For both database and cloud encryption, in a few cases we recommend you encrypt in the application instead.
  • When we list multiple options the order of preference is top to bottom.
  • As you use this tree keep the Three Laws in mind, since they help guide the security value of your decision.
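The rules of thumb above can be sketched as code. This is purely illustrative; the function and parameter names are made up here, and a real decision involves many more factors than the two captured below.

```python
# Illustrative only: encodes the guidance above, not any product's logic.
def suggest_encryption_layer(separation_of_duties_needed: bool,
                             legacy_app: bool) -> str:
    """Rough mapping of the decision-tree guidance to a suggestion."""
    if separation_of_duties_needed and not legacy_app:
        # Encrypt as high in the stack as possible.
        return "application-layer field encryption (or tokenization)"
    # Encrypting lower may be easier and more cost effective.
    return "transparent encryption (TDE, file, or volume layer)"
```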

Encryption Decision Tree

Once you understand how encryption systems work, the different layers where you can encrypt, and how they combine to improve security (or not), it’s usually relatively easy to pick the right approach.

The hard part is to then architect and implement the encryption technology and integrate it into your data center, application, or cloud service. That’s where our other encryption research can be valuable, and the following reports should help:

Rich, Adrian Lane

Ticker Symbol: Hack - *Updated*

By Gunnar

There is a ticker symbol, HACK, that tracks a group of publicly traded “Cyber Security” firms. Given how hot everything ‘Cyber’ is, HACK may do just fine – who knows? But perhaps one for breached companies (BRCH?) would be better. For you security geeks out there who love to talk about the cost of breaches, let’s take a look at the stock performance of several big-name firms that have been breached:

Company      Breach Date   Stock Gain Since Breach
Sony         11/24/14      +28.3%   (S&P 500: +2.2%)
Home Depot   9/9/14        +31.3%   (S&P 500: +6.4%)
Target       12/19/13      +23.8%   (S&P 500: +16.9%)
Heartland    1/20/09       +250.1%  (S&P 500: +162.7%)
Apple        9/2/14        +28%     (S&P 500: +6%)

This is a small sample of companies, but each of their stock prices has substantially outperformed the S&P 500 (which has been on a tear over the last year or so) from the time of its breach through now. “How long until activist investors like Icahn pound the table demanding more dividends, stock buybacks – and would it kill you to have a breach?” Food for thought.


Friday, February 20, 2015

Summary: Three Mini Gadget Reviews… and a Big Week for Security Fails

By Rich

Rich here,

Before I get into the cold open for this week, the past few days have been pretty nasty for privacy, security, and the digital supply chain. I will have a post on that up soon, but you can skip to the Top News section to catch the main stories. They are essential reading this week, and we don’t say that often.

I am a ridiculous techno-addict, and have been my entire life. I suspect I inherited it from my father, who brought home an early microwave (likely responsible for my hair loss), a videotape deck (where I watched Star Wars before VHS was on the market, the year the movie came out), and even a reel-to-reel videotape camera (black and white) I used for my own directorial debuts… often featuring my Star Wars figures.

Gadgets have always been one of my vices, but as I have grown older they have not only gotten cheaper, but remain far cheaper than what many of my 40+-year-old peers spend money on (cars, extra houses, extramarital partners for said houses, etc.). That said, over time I have become a bit more discerning about where I drop money, as I have come to better understand my own tastes and needs… and as my kids killed any semblance of hobby time.

For this week’s Summary I thought I’d highlight a few of my current favorite gadgets. This isn’t even close to exhaustive – just a few current favorites.

Logitech Harmony Ultimate Home + Hub – I don’t actually have all that crazy a TV setup, but it’s just complex enough that I wanted a universal remote. We switch a ton between our Apple TV and TiVo Roamio, and our kids are so young that regular remotes are a mess.

The Harmony Ultimate is exactly what the name says. The remote itself is relatively small and has an adaptive touch screen that configures itself to the activity you are in. While it has an infrared transmitter like all remotes, it really uses RF to communicate to the Hub, which is located in our AV cabinet under the TV, and includes an IR blaster to hit all the components.

This setup brings three key advantages. First, you don’t need to worry about where to point the remote. My kids would always lose aim in the middle of a multi-component command (something as simple as turning things on or off) and get frustrated. That’s no longer an issue. Second, the touch screen makes for a cleaner remote with fewer buttons. You can prioritize the ones you use on the display, but still access all the obscure ones. Finally, the Hub is network enabled, and pairs with an iOS app. If I can’t find the remote I use my phone, and everything looks and works the same. Because children.

I have used earlier Logitech remotes and this is the first one that really delivers on all the promises. It is pricey, but futureproof, and even integrates with home automation products. I also got $80 off during a random Amazon sale. There isn’t anything else like this on the market, and I don’t regret it. We used our last Harmony remote for 7 years with our main TV, and it’s now in another room, so we got our money’s worth.

Garmin Forerunner 920XT – I’m a triathlete. Not a great one by any means, but that’s my sport of choice these days. The Garmin 920XT was my holiday present this year, and it changed how I think about smartwatches.

First, as a fitness tool, it is ridiculous. Aside from the GPS (and GLONASS – thank you, Russian friends), it connects with a ton of sensors, works as a basic smartwatch, and even includes an accelerometer – not only for step tracking, but also run tracking on treadmills and swim stroke tracking in pools.

I didn’t expect to wear it every day but I do. Even getting simple notifications on my wrist means less pulling my phone out of my pocket, and I don’t worry about missing calls when I chase the kids during the work day and leave my phone on my desk. Yes, I’ll switch to an Apple Watch day-to-day when it comes out, but I went on a 17-mile run during working hours this week, and knowing I didn’t miss anything important was liberating.

The 920XT is insane as a fitness tool. It will estimate your VO2 Max and predict race performance based on heart rate variability. It pulls in more metrics than you knew existed (or can use, but it makes us geeks happy). You can expand it with Garmin’s new ConnectIQ app platform. I added a half-marathon race predictor for my last race, and it helped me set a new PR – I am not great at math in the middle of a race. It walks me through structured workouts, then automatically uploads everything via my phone or home WiFi when I’m done, which then syncs to Strava and TrainingPeaks.

If you aren’t a multisport athlete I’d check out the Fenix 3 or Vivoactive. They both support ConnectIQ.

Neato XV-11 Robotic Vacuum – With multiple cats and allergies I was an early Roomba user. It worked well but had some key annoyances. It nearly never found its base to recharge, I’d have to remember to use the “virtual wall” infrared barriers to keep it in a room, and it was a royal pain to clean.

Then I switched to the Neato XV-11 (an older model). It uses a stronger vacuum than the Roomba, is much easier to clean, maps rooms with LIDAR (laser radar), and nearly always finds its base to recharge. It is also much easier to schedule.

The Neato will scan a room, clean until the battery gets low, go back to base, recharge, and then start out again up to 3 times (when it’s running on a schedule). It detects doorways automatically, stays in the room you put it in, and will only hit the next room when it is done.

On the downside I cannot use it on a schedule any more because my cats vomit too much and I don’t want to gum it up. But I still vacuum several more times a week than I would by hand – I pull it out, scan the room for cat puke, move a few dirty socks, and let ‘er rip.

That’s it for this week. Three items I use nearly every day that have nothing to do with Securosis or Apple.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Adrian Lane: Reverse Engineering Apple’s Lightning Connector. To me hacking is about understanding how stuff really works, and modifying it to suit your needs. For good usually, but I understand there are two sides to that coin. And that’s one of the reasons I love Hack-a-day and articles like this – figuring out how the Lightning connector works.
  • Mortman: HTTP/2 is out!
  • Mike: Emerging Products: Threat Intelligence Group Test. This is why we can’t have nice things. I’m old, but I remember when product reviews were actually helpful. At least they provide a short list of products to look at. So there’s that…
  • Dave Lewis: Superfish. Let us know if any of your corporate Lenovos came with this, but we assume all corporate laptops are wiped and get a standard image installed.
  • Rich: How Spies Stole the Keys to the Encryption Castle. As I keep hinting, I need to write this all up tomorrow.

Research Reports and Presentations

Top News and Posts

As I mentioned in the opening, there are some major privacy and security stories this week. Dave Lewis highlighted Superfish, and here are the other main stories you need to read:

And some other stories:

Blog Comment of the Week

This week’s best comment goes to will, in response to Some days, I think we are screwed.

People tend to be stupid, so the smart ones must protect them from themselves :)


Cracking the Confusion: Top Encryption Use Cases

By Rich

This is the sixth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Top Encryption Use Cases

Encryption, like most security, is only adopted in response to a business need. The need may be to keep corporate data secret, protect customer privacy, ensure data integrity, or satisfy a compliance mandate that requires data protection – but there is always a motivating factor driving companies to encrypt. The principal use cases have shifted over the years, but the following remain the most common.


Databases

Protecting data stored in databases is a top use case across mainframes, relational, and NoSQL databases. The motivation may be to combat data breaches, keep administrators honest, support multi-tenancy, satisfy contractual obligations, or even comply with state privacy laws. Surprisingly, database encryption is a relatively new phenomenon. Database administrators historically viewed encryption as carrying unacceptable performance overhead, and data security professionals viewed it as a redundant control – only effective if firewalls, identity management, and other security measures all failed. Only recently has the steady stream of data breaches shattered this false impression. Combined with continued performance advancements, multiple deployment options, and general platform maturity, database encryption no longer carries a stigma. Today data sprawls across hundreds of internal databases, test systems, and third-party service providers; so organizations use a mixture of encryption, tokenization, and data masking to tailor protection to each potential threat – regardless of where data is moved and used.

The two best options for encrypting a database are encrypting data fields in the application before they are sent to the database, and Transparent Database Encryption (TDE). Some databases support field-level encryption natively, but because the primary driver for database encryption is usually to keep database administrators from seeing specific data, organizations often cannot rely on the database’s own encryption capabilities.

TDE (via the database feature or an external tool) is best for protecting this data in storage. It is especially useful when you need to encrypt a lot of data, and for legacy applications where adding field encryption isn’t practical.
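The first option, encrypting fields in the application before they reach the database, can be sketched as follows. This is a toy illustration only: the HMAC-based keystream stands in for a vetted algorithm such as AES-GCM, and all names are hypothetical. A real deployment would use an established crypto library with keys fetched from an external key manager.

```python
# Toy sketch of application-layer field encryption. NOT for production use:
# a real system would use a vetted cipher (e.g. AES-GCM) from a crypto
# library, with keys held in an external key manager.
import hashlib, hmac, os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a per-field keystream (HMAC-SHA256 in counter mode)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_field(key: bytes, value: str) -> bytes:
    """Encrypt one sensitive field in the application, before the INSERT."""
    nonce = os.urandom(16)
    data = value.encode("utf-8")
    return nonce + bytes(a ^ b for a, b in
                         zip(data, _keystream(key, nonce, len(data))))

def decrypt_field(key: bytes, blob: bytes) -> str:
    nonce, data = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in
                 zip(data, _keystream(key, nonce, len(data)))).decode("utf-8")

key = os.urandom(32)   # in practice: fetched from an external key manager
ct = encrypt_field(key, "4111111111111111")
assert decrypt_field(key, ct) == "4111111111111111"
```

Because encryption happens before the value reaches the database, DBAs only ever see ciphertext, which is exactly the separation-of-duties property discussed above.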

For more information see Understanding and Selecting a Database Encryption or Tokenization Solution.

Cloud Storage

Encryption is the main data security control for cloud computing. It enables organizations to maintain control over data security, even in multitenant environments. If you encrypt data, and control the key, even your cloud provider cannot access it.

Unfortunately cloud encryption is generally messy for SaaS, but there are decent options to integrate encryption into PaaS, and excellent ones for IaaS. The most common use cases are encrypting storage volumes associated with applications, encrypting application data, and encrypting data in object storage. Some cloud providers are even adding options for customers to manage their own encryption keys, while the provider encrypts and decrypts the data within the platform (we call this Bring Your Own Key).
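The Bring Your Own Key pattern described above is usually built on envelope encryption: a per-object data key encrypts the data, and the customer-held master key wraps the data key. The sketch below shows only the key flow; the XOR-based `toy_cipher` is a stand-in for a real algorithm such as AES, and the names are illustrative.

```python
# Sketch of the "Bring Your Own Key" envelope pattern: the customer holds
# the master key; the provider only ever stores a wrapped (encrypted) data
# key. toy_cipher is a stand-in for a real cipher, illustration only.
import hashlib, os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric stand-in (XOR keystream): applying it twice decrypts."""
    stream, block = b"", key
    while len(stream) < len(data):
        block = hashlib.sha256(block).digest()
        stream += block
    return bytes(a ^ b for a, b in zip(data, stream))

master_key = os.urandom(32)                      # held only by the customer
data_key = os.urandom(32)                        # per-object encryption key
wrapped_key = toy_cipher(master_key, data_key)   # provider stores only this

ciphertext = toy_cipher(data_key, b"customer record")

# Later: unwrap the data key with the master key, then decrypt the object.
recovered = toy_cipher(toy_cipher(master_key, wrapped_key), ciphertext)
assert recovered == b"customer record"
```

Since the provider never holds the unwrapped data key and the master key, it cannot decrypt the object on its own, which is the point of BYOK.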

For details see our paper on Defending Cloud Data with Infrastructure Encryption.


Compliance

Compliance is a principal driver of encryption and tokenization sales. Some obligations, such as PCI, explicitly require it, while others provide a “safe harbor” provision in case encrypted data is lost. Typical policies cover IT administrators accessing data, users issuing ad hoc queries, retrieval of “too much” information, or examination of restricted data elements such as credit card numbers. So compliance controls typically focus on privileged user entitlements (what users can access), segregation of duties (so admins cannot read sensitive data), and the security of data as it moves between application and database instances.

These policies are typically enforced by the applications which process user requests, limiting access (decryption) according to policy. Policies can be as simple as allowing only certain users to see certain types of data. More complicated policies build in fraud deterrence, limit how many records specific users are allowed to see, and shut off access entirely in response to suspicious user behavior. In other cases, where companies move sensitive data to third-party systems they do not control, data masking and tokenization have become popular choices for ensuring sensitive data does not leave the company at all.


Payments

The payments use case deserves special mention; although commonly viewed as an offshoot of compliance, it is more of a backlash – an attempt to avoid compliance requirements altogether. Before data breaches became routine, it was common to copy payment data (account numbers and credit card numbers) anywhere it could possibly be used, but now each copy carries the burden of security and oversight, which costs money. Lots of it. In most cases the payment data was not actually required, but the usage patterns built around it became so entrenched that removal would break applications. For example, merchants do not need to store – or even see – customer credit card numbers to accept payment, but many of their IT systems were designed around credit card numbers.

In the payment use case, the idea is to remove payment data wherever possible – and with it the threat of a data breach – reducing audit responsibility and cost. Here tokenization, format-preserving encryption, and masking have come into their own: removing sensitive payment data, and along with it most of the need for security and compliance. Industry organizations like PCI and regulatory bodies have only recently embraced these technical approaches for compliance scope reduction, and more recent variants (including Apple Pay merchant tokens) also improve user data privacy.


Applications

Every company depends on applications to one degree or another, and these applications process data critical to the business. Most applications, whether ‘web’ or ‘enterprise’, leverage encryption. Encryption capabilities may be embedded in the application or bundled with the underlying file system, storage array, or relational database system.

Application encryption is selected when fine-grained control is needed, to encrypt select data elements, and to only decrypt information as appropriate for the application – not merely because recognized credentials were provided. This granularity of control comes at a price – it is more difficult to implement, and changes in usage policies may require application code changes, followed by extensive validation and testing.

The operational costs can be steep, but this level of security is essential for some applications – particularly financial and payment applications. For other types of applications, simply protecting data “at rest” (typically in files or databases) with transparent encryption at the file or database layer is generally sufficient.


Wednesday, February 18, 2015

Cracking the Confusion: Additional Platform Features and Options

By Rich

This is the fifth post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post and find the other posts under “related posts” in full article view.

Additional Platform Features and Options

The encryption engine and the key store are the major functional pieces of any encryption platform, but any data center encryption solution also includes supporting systems, which are important both for overall management and for tailoring the solution to fit your application infrastructure. We frequently see the following major features and options to help support customer needs:

Central Management

For enterprise-class data center encryption you need a central location to define what data to secure and to set key management policies. Management tools provide a window onto what data is encrypted and a place to set usage policies for cryptographic keys. You can think of this as governance of the entire crypto ecosystem – including key rotation policies, integration with identity management, and IT administrator authorization. Some products even provide the ability to manage remote cryptographic engines and automatically apply encryption as data is discovered. Management interfaces have evolved to enable both security and IT management to set policy without needing cryptographic expertise. The larger and more complex your environment, the more critical central management becomes to controlling your environment without making it a full-time job.

Format Preserving Encryption

Encryption protects data by scrambling it into an unreadable state. Format Preserving Encryption (FPE) also scrambles data into an unreadable state, but retains the format of the original data. For example, if you use FPE to encrypt a 9-digit Social Security Number, the encrypted result is also 9 digits. All commercially available FPE tools use variations of AES encryption, which remains nearly impossible to break, so the original data cannot be recovered without the key. The principal reason to use FPE is to avoid recoding applications and restructuring databases to accommodate encrypted (binary) data. Both tokenization and FPE offer this advantage. But encryption obfuscates sensitive information, while tokenization removes it entirely to another location. Should you need to propagate copies of sensitive data while still controlling occasional access, FPE is a good option. Keep in mind that FPE is still encryption, so the sensitive data is still present.
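The format-preserving idea can be illustrated with a simplified keyed Feistel network over decimal digits. This toy is not the standardized construction (commercial products implement NIST-specified modes such as FF1), and every name below is invented; it only shows why the ciphertext keeps the plaintext’s length and character set.

```python
# Simplified, non-standard illustration of format-preserving encryption:
# a keyed Feistel network over decimal digits. Real products use vetted
# constructions (e.g. NIST FF1); this toy only demonstrates the concept.
import hashlib

def _round_value(key: bytes, half: str, i: int, width: int) -> int:
    """Pseudorandom round function derived from the key and one half."""
    digest = hashlib.sha256(key + bytes([i]) + half.encode()).digest()
    return int.from_bytes(digest[:8], "big") % (10 ** width)

def fpe_encrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for i in range(rounds):
        if i % 2 == 0:   # even rounds modify the left half
            left = str((int(left) + _round_value(key, right, i, len(left)))
                       % 10 ** len(left)).zfill(len(left))
        else:            # odd rounds modify the right half
            right = str((int(right) + _round_value(key, left, i, len(right)))
                        % 10 ** len(right)).zfill(len(right))
    return left + right

def fpe_decrypt(key: bytes, digits: str, rounds: int = 8) -> str:
    mid = len(digits) // 2
    left, right = digits[:mid], digits[mid:]
    for i in reversed(range(rounds)):   # undo the rounds in reverse order
        if i % 2 == 0:
            left = str((int(left) - _round_value(key, right, i, len(left)))
                       % 10 ** len(left)).zfill(len(left))
        else:
            right = str((int(right) - _round_value(key, left, i, len(right)))
                        % 10 ** len(right)).zfill(len(right))
    return left + right

key = b"demo key"
ct = fpe_encrypt(key, "123456789")   # a 9-digit SSN-shaped value
assert ct.isdigit() and len(ct) == 9
assert fpe_decrypt(key, ct) == "123456789"
```

Because each round only adds a key-derived value modulo a power of ten, the output is always digits of the same length, which is exactly the property that lets FPE slot into existing schemas.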


Tokenization

Tokenization is a method of replacing sensitive data with non-sensitive placeholders: tokens. Tokens are created to look exactly like the values they replace, retaining both format and data type. Tokens are typically ‘random’ values that look like the original data but lack any intrinsic value. For example, a token that looks like a credit card number cannot be used as a credit card to submit financial transactions. Its only value is as a reference to the original value, stored in the token server that created and issued the token. Tokens are usually swapped in for sensitive data stored in relational databases and files, allowing applications to continue to function without changes, while removing the risk of a data breach. Tokens may even include elements of the original value to facilitate processing. Tokens may be created from ‘codebooks’ or one-time pads; these tokens are still random but retain a mathematical relationship to the original, blurring the line between random numbers and FPE. Tokenization has become a very popular, and effective, means of reducing the exposure of sensitive data.
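A minimal token server can be sketched as below. The class and method names are invented for illustration; a production token vault would be a hardened, access-controlled service, not an in-memory dictionary.

```python
# Toy sketch of a token vault: random same-format tokens, with the real
# value kept only in the vault's lookup table. Names are hypothetical.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}   # token -> original value

    def tokenize(self, pan: str) -> str:
        """Replace a card number with a random token of the same format."""
        while True:
            token = "".join(secrets.choice("0123456789")
                            for _ in range(len(pan)))
            if token not in self._vault and token != pan:
                self._vault[token] = pan
                return token

    def detokenize(self, token: str) -> str:
        """Only the token server can map a token back to the original."""
        return self._vault[token]

vault = TokenVault()
t = vault.tokenize("4111111111111111")
assert len(t) == 16 and t.isdigit() and t != "4111111111111111"
assert vault.detokenize(t) == "4111111111111111"
```

Note the contrast with FPE: the token has no mathematical relationship to the original value, so compromising every system holding tokens reveals nothing without the vault.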


Masking

Like tokenization, masking replaces sensitive data with similar non-sensitive values. And like tokenization, masking produces data that looks and acts like the original, but which doesn’t pose a risk of exposure. Masking solutions go one step further, though, protecting sensitive data elements while maintaining the value of the aggregate data set. For example, we might replace real user names in a file with names randomly selected from a phone directory, skew each person’s date of birth by some number of days, or randomly shuffle employee salaries between employees in a database column. This means reports and analytics can continue to run and produce meaningful results, while the database as a whole is protected. Masking platforms commonly take a copy of production data, mask it, and then move the copy to another server. This is called static masking, or “Extract, Transform, Load” (ETL for short).
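The salary-shuffling example above can be sketched in a few lines. The data and function names are invented; the point is that no row keeps its real value, yet column-level aggregates are preserved exactly.

```python
# Sketch of static masking (the ETL pattern): copy the production rows,
# then shuffle salaries between employees so no row keeps its real value
# while aggregate statistics over the column are preserved. Illustrative
# data and names only.
import random

production = [
    {"name": "Alice", "salary": 95000},
    {"name": "Bob",   "salary": 72000},
    {"name": "Carol", "salary": 81000},
]

def mask_salaries(rows, seed=42):
    masked = [dict(r) for r in rows]       # work on a copy (the "extract")
    salaries = [r["salary"] for r in masked]
    random.Random(seed).shuffle(salaries)  # the "transform"
    for row, s in zip(masked, salaries):
        row["salary"] = s
    return masked                          # ship this copy (the "load")

test_copy = mask_salaries(production)
assert (sum(r["salary"] for r in test_copy)
        == sum(r["salary"] for r in production))
```

Because only the assignment of values to rows changes, reports over the masked copy (sums, averages, distributions) still come out right.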

A recent variation is called “dynamic masking”: masks are applied in real time, as data is read from a database or file. With dynamic masking the original files and databases remain untouched; only delivered results are changed, on the fly. For example, depending on the requester’s credentials, a request might return the original (real, sensitive) data, or a masked copy. In the latter case data is dynamically replaced with a non-sensitive surrogate. Most dynamic masking platforms function as a ‘proxy’, something like a firewall, using redaction to quickly return information without exposing sensitive data to unauthorized requesters. Select systems offer more intelligent randomization, tokenization, or even FPE.
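The credential-dependent behavior can be sketched as follows. The roles, record, and redaction rule are all invented for illustration; a real dynamic masking proxy sits in front of the database and applies policies far richer than one `if`.

```python
# Sketch of dynamic masking: the stored record is untouched; results are
# redacted on the fly based on the requester's role. Roles and the
# redaction rule are hypothetical.
RECORD = {"name": "Alice", "card": "4111111111111111"}

def query(requester_role: str) -> dict:
    if requester_role == "fraud_analyst":
        return dict(RECORD)                  # authorized: real data
    masked = dict(RECORD)
    masked["card"] = "*" * 12 + RECORD["card"][-4:]   # all but last 4
    return masked

assert query("fraud_analyst")["card"] == "4111111111111111"
assert query("support")["card"] == "************1111"
```

The key property: `RECORD` itself is never modified, so there is no masked copy to maintain; only the delivered result changes per request.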

Again, the lines between FPE, tokenization, and masking are blurring as new variants emerge. But tokenization and masking variants offer superior value when you don’t want sensitive data exposed but cannot risk application changes.