More than 25 years into the modern IT security industry, breaches still happen at an alarming rate. Yes, that’s fairly obvious, but it’s still disappointing given the billions spent every year trying to remedy the situation. Over the past decade the mainstays of security controls have undergone the “next generation” treatment, first firewalls and more recently endpoint security. New analytical techniques have been brought to bear to examine infrastructure logs in more sophisticated ways.

But the industry seems to keep missing the point. The objective of nearly every hacking campaign is (still) to steal data. So why does the focus remain on better infrastructure security controls and better analytics of said infrastructure? Mostly because data security is hard. The harder the task, the less likely overwhelmed organizations are to find the fortitude to make the necessary changes.

To be clear, we totally understand the need to live to fight another day. That’s the security person’s ethos, as it must be. There are devices to clean up, incidents to respond to, reports to write, and new architectures to figure out. The idea of tackling something nebulous like data security, with no obvious solution, can remain a bridge too far.

Or is it? The time has come to revisit data security, and to apply many of the new techniques pioneered for infrastructure to the insider threat where it actually plays out: attacks on data. So our new series, Protecting What Matters: Introducing Data Guardrails and Behavioral Analytics, will introduce some new practices and highlight new approaches to protecting data.

Before we get started, let’s send a shout-out to Box for agreeing to license this content when we finish up this series. Without clients like Box, who understand the need for forward-looking research to tell you where things are going, not reports telling you where they’ve been, we wouldn’t be able to produce research like this.

Understanding Insider Risk

While security professionals like to throw around the term “insider threat”, it’s often nebulously defined. In reality it includes multiple categories, including external threats which leverage insider access. We believe that to truly address a risk, you first need to understand it (call us crazy). To break down the first level of the insider threat, let’s consider its typical risk categories:

  1. Accidental Misuse: In this scenario the insider doesn’t do anything malicious, but makes a mistake which results in data loss. For example, a customer service rep might reply to a customer email which includes private account information. The rep isn’t trying to violate policy; they simply didn’t take the time to look at the message and clear out the private data.
  2. Tricked into Unwanted Actions: Employees are human, and can be duped into doing the wrong thing. Phishing is a great example. Or providing access to a folder based on a call from someone impersonating an employee. Again, this isn’t malicious, but it can still cause a breach.
  3. Malicious Misuse: Sometimes you need to deal with the reality of a malicious insider intentionally stealing data. In the first two categories the person isn’t trying to mask their behavior. In this scenario they are deliberately obfuscating, which means you need different tactics to detect and prevent the activity.
  4. Account Takeover: This category reflects the fact that once an external adversary has presence on a device, they become an ‘insider’; with a compromised device and account, they have access to critical data.

We need to consider these categories in the context of adversaries, so you can properly align your security architecture. So who are the main adversaries trying to access your stuff? Some coarse-grained categories follow: unsophisticated attackers (using widely available tools), organized crime, competitors, state-sponsored actors, and finally actual insiders. Once you have figured out your most likely adversaries and their typical tactics, you can design a set of controls to effectively protect your data.

For example, an organized crime faction looks for banking data or personal information it can use for identity theft, while a competitor is more likely after product plans or pricing strategies. You can (and should) design your data protection strategy with these likely adversaries in mind, to help prioritize what to protect and how.
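To make that prioritization concrete, here is a minimal sketch in Python. The adversary names and target data types are hypothetical illustrations drawn from the examples above, not a definitive threat model.

```python
# Hypothetical mapping of likely adversaries to the data they typically target,
# used only to illustrate how adversary analysis can drive protection priorities.
adversary_targets = {
    "organized_crime": {"banking_data", "personal_info"},      # identity theft
    "competitor":      {"product_plans", "pricing_strategy"},  # business intelligence
}

# If these are your most likely adversaries, the union of their targets is
# where data protection effort should be focused first.
likely_adversaries = ["organized_crime", "competitor"]
priorities = set().union(*(adversary_targets[a] for a in likely_adversaries))
print(priorities)  # e.g. {'banking_data', 'personal_info', 'product_plans', 'pricing_strategy'}
```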

Once you understand your adversaries and can infer their primary tactics, you have a clearer picture of their mission. Then you can select a data security architecture to minimize risk and, optimally, prevent data loss altogether. But that requires tactics different from what is normally considered data security.

A New Way to Look at Data Security

If you surveyed security professionals and asked what data security means to them, they’d likely say either encryption or Data Loss Prevention (DLP). When all you have is a hammer, everything looks like a nail, and for a long time those two have been the only hammers available to us. Wanting to expand our perspective a bit doesn’t mean DLP and encryption no longer have roles to play in data protection. Of course they do. But we can supplement them with some new tactics.

  • Data Guardrails: We have defined guardrails as a means to enforce best practices without slowing down or impacting typical operations. Typically used within the context of cloud security (like, er, DisruptOps), a data guardrail enables data to be used in approved ways while blocking unauthorized usage. To bust out an old network security term, you can think of guardrails as “default-deny” for data: you define the set of acceptable practices and don’t allow anything else (see the first sketch after this list).
  • Data Behavioral Analytics: Many of you have heard of UBA (User Behavioral Analytics), where all user activity is profiled and you look for anomalous activities which could indicate one of the insider risk categories above. What if you turned UBA inside out and focused on the data? Using similar analytics you could profile how all the data in your environment is used, then look for abnormal patterns which warrant investigation. We’ll call this DataBA, because your database administrators might be a little peeved if we horned in on their job title (a sketch of the idea also follows below).
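To illustrate the “default-deny” idea behind data guardrails, here is a minimal sketch of a policy check in Python. The roles, data classifications, and rules are assumptions for illustration, not any product’s actual policy model.

```python
# A minimal sketch of a "default-deny" data guardrail. The policy format,
# field names, and example rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    principal_role: str   # e.g. "customer_service"
    data_class: str       # e.g. "pii", "pricing", "public"
    action: str           # e.g. "read", "share_external", "download"

# The guardrail policy: only these (role, data class, action) combinations
# are permitted. Anything not listed is denied by default.
ALLOWED = {
    ("customer_service", "pii", "read"),
    ("finance", "pricing", "read"),
    ("marketing", "public", "share_external"),
}

def evaluate(request: AccessRequest) -> bool:
    """Return True only if the request matches an explicitly allowed rule."""
    return (request.principal_role, request.data_class, request.action) in ALLOWED

# Example: an attempt to share PII externally is blocked, because no rule allows it.
req = AccessRequest("customer_service", "pii", "share_external")
print("allowed" if evaluate(req) else "denied")  # -> denied
```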
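And here is a similarly minimal sketch of the DataBA idea: baseline how each data set is normally accessed, then flag significant deviations. The data sets, counts, and the three-standard-deviation threshold are all assumptions for illustration; a real implementation would profile far richer features than raw access counts.

```python
# A minimal sketch of data-centric behavioral analytics (DataBA):
# learn a baseline of access volume per data set, then flag abnormal days.
from statistics import mean, stdev

# Historical daily access counts per data set (the learned baseline).
baseline = {
    "customer_records": [120, 135, 110, 128, 140, 125, 131],
    "pricing_sheets":   [8, 5, 7, 6, 9, 4, 7],
}

def is_anomalous(dataset: str, todays_count: int, k: float = 3.0) -> bool:
    """Flag a data set whose access volume today is more than k standard
    deviations above its historical mean."""
    history = baseline[dataset]
    return todays_count > mean(history) + k * stdev(history)

# Example: a sudden spike in pricing-sheet access warrants investigation.
print(is_anomalous("customer_records", 133))  # False - within normal range
print(is_anomalous("pricing_sheets", 60))     # True  - abnormal spike
```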

Our next post will dig further into these new concepts of Data Guardrails and DataBA, to illuminate both the approaches and their pitfalls.
