Monday, February 16, 2015

Firestarter: Cyber!!!

By Rich

Last week President Obama held a cybersecurity summit out in the Bay Area. He issued a new executive order and is standing up a new threat sharing center. This is in response to ongoing massive attacks such as the Anthem breach and (as we heard this weekend) hundreds of millions stolen in bank thefts. But what does it all mean to security pros and the industry? The truth is, not much in our day-to-day (yet), but you certainly had better pay attention.

Watch or listen:


—Rich

Friday, February 13, 2015

Friday Summary: February 13, 2015

By Adrian Lane

Welcome to the Friday the 13th edition of the Friday Summary! It has been a while since I wrote the summary so there is lots to cover …

My favorite external post this week is a research paper, Mongo Databases At Risk, outlining a very common trend among MongoDB users: not using basic user authentication to secure their databases. Well, that, and putting them on the Internet. On the default port. Does this sound like SQL Server circa 2003 to anyone else?

One angle I found important was the number of instances discovered: nearly 40k databases. That is a freakin’ lot! Remember, this is MongoDB – and just those running on the Internet at the default port. Yes, it’s one of the top NoSQL platforms, but during our inquiries we spoke with 4 Hadoop users for every MongoDB user, and Cassandra came up more often as well. I don’t know if anyone publishes download or usage numbers for the various platforms, but extrapolating from those numbers, there are a lot of NoSQL databases in use. Someone with more time on their hands might decide to scan the Internet for instances of the other platforms on their default ports (Redis listens on 6379, CouchDB on 5984, and Riak on 8087, for example). I would love to know what you find.
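For the curious, a reachability check like that scan is only a few lines of stdlib Python. This is a minimal sketch – it tests whether a TCP port answers (MongoDB’s default is 27017) and nothing more; the hostnames in the demo are placeholders, and you should only probe systems you own.

```python
# Sketch: check whether a host answers on a database's default TCP port
# (MongoDB listens on 27017 unless configured otherwise). This only tests
# reachability -- it does not authenticate or query the service.
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder hostnames -- scan your OWN systems only.
    for host in ["db1.example.internal", "db2.example.internal"]:
        exposed = is_port_open(host, 27017)
        print(f"{host}: {'EXPOSED' if exposed else 'closed/filtered'}")
```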

Back to security… I have had conversations with several firms trying to figure out how to monitor NoSQL usage; we know how to apply DAM principles to SQL, but MapReduce and other types of queries are much more difficult to parse. I expect several vendors to introduce basic monitoring for Hadoop in the next year, but it will take time to mature, and even more to cover other platforms. What I haven’t heard discussed is the easier – and no less pressing – issue of configuration and vulnerability assessment. The enterprise distributions are providing best practices but automated scans are scarce – and usually custom. That is a free hint for any security vendors looking for a quick way to help big data customers get secure.
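To illustrate the kind of automated configuration scan that is missing, here is a toy Python checker for two of the MongoDB misconfigurations discussed above. The line-based parsing and finding strings are my own simplification, not any vendor’s tool:

```python
# Sketch: a minimal configuration check for a MongoDB-style YAML config
# file (mongod.conf). Flags two common problems: access control disabled
# and the server bound to all interfaces. This is a naive line parser for
# illustration, not a full YAML implementation.
def audit_mongod_conf(text: str) -> list[str]:
    findings = []
    lines = [ln.strip() for ln in text.splitlines()]
    if "authorization: enabled" not in lines:
        findings.append("access control is not enabled (security.authorization)")
    if any(ln.startswith("bindIp:") and "0.0.0.0" in ln for ln in lines):
        findings.append("server is bound to all interfaces (net.bindIp: 0.0.0.0)")
    return findings

sample = """\
net:
  port: 27017
  bindIp: 0.0.0.0
security:
  authorization: disabled
"""
print(audit_mongod_conf(sample))
```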


Mobile security consumes much more of my time than it should. I geek out on it, often engaging Gunnar in conversation on everything from the inner workings of secure elements to the apps that make payments happen. And I read everything I can find. This week I ran across Why Banks Will Win the Battle for the Mobile Wallet, by John Gunn – the guy who runs the wonderfully helpful twitter feed @AuthNews. But on this I think he has missed the point. Banks are not battling to win mobile wallets. In fact those I have spoken with don’t care about wallets. They care about transactions. And moving more transactions from cash to credit means a growing stream of revenue for merchant banks and payment processors, which makes them very happy.

Wallets in and of themselves don’t foster adoption – as Google is well aware – and in fact many users don’t really trust wallets at all. What gets people to move from a plastic card or cash, at least in the US, is a combination of convenience and trust. Starbucks leveraged their brand affinity into seven million subscribers for their app and an impressive 2.1 million transactions per week. Banks benefit directly when more transactions move away from cash, and they are happy to let others own the user experience.

But things get really interesting in overseas markets, which make US adoption of mobile payments look like a backwater. Nations without traditional banking or payment infrastructure can now move money in ways they previously could not, so adoption rates have been huge. Leveraging cellular infrastructure makes it faster and safer to move money, with fewer worries about carrying cash. Kenya – not often considered on the cutting edge of technology – had 25 million mobile payment users and moved $26 billion in 2014 via mobile payments and mobile money subscriptions. Sometimes technology really does make the world a better place. The banks don’t care which wallets, apps, technologies, or carriers win – they just want someone to make progress.


In January I normally publish my research calendar for the coming year. But Rich has been hogging the Friday Summary for weeks now, so I finally get a chance to talk about what I am seeing and doing.

  1. Tokenization: I am – finally – going to publish some thoughts on the latest trends in tokenization. I want to talk about changes in the technology, adoption on mobile platforms, how the latest PCI specification is changing compliance, and some customer use cases.
  2. Risk-Based Authentication and Authorization: We see many more organizations looking at risk-based approaches to augment the security of web-based applications. Rather than rewrite applications they use metadata, behavioral information, business context, and… wait for it… big data analytics to better determine the acceptability of a request. And it is often cheaper and easier to bolt this on externally than to change applications. Gunnar and I have wanted to write this paper for a year, and now we finally have the time.
  3. Building a Security Analytics Platform: I have been briefed by many security analytics startups, each putting together some basic security analysis capabilities, usually built on big data databases. I have, in that same period, also spoken with many large enterprises who decided not to wait for the industry to innovate, and are building their own in-house systems. The last couple even asked me what I thought of certain architectural choices, and which data elements they should use as hash keys! So there is considerable demand for consumer education; I will cover off-the-shelf and DIY options.
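The risk-based approach in item 2 boils down to scoring a request from context and picking an action. A deliberately simplistic Python sketch, with invented signal names and weights:

```python
# Sketch of risk-based authorization: score a request from metadata and
# context, then decide whether to allow it, step up authentication, or
# deny. Signal names, weights, and thresholds are invented for
# illustration and would be tuned per application.
def risk_score(request: dict) -> int:
    score = 0
    if request.get("new_device"):
        score += 30
    if request.get("geo_velocity_kmh", 0) > 900:   # "impossible travel"
        score += 40
    if request.get("off_hours"):
        score += 10
    if request.get("sensitive_operation"):
        score += 20
    return score

def decide(score: int) -> str:
    if score >= 60:
        return "deny"
    if score >= 30:
        return "step-up-auth"   # e.g. prompt for a second factor
    return "allow"

req = {"new_device": True, "geo_velocity_kmh": 1200, "off_hours": False}
print(decide(risk_score(req)))  # new device plus impossible travel
```

The appeal, as noted above, is that this logic can sit in a gateway or proxy in front of the application, so no application code changes are required.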

I am still on the fence about some secure code development ideas, so if you have an idea, let’s talk. Even the security vendors I have visited in the last year have pulled me aside to ask about transitioning to Agile, or how to fix a failed transition to Agile. Most want to know what this whole DevOps thing is about. I have got a few ideas, and there is broad interest from end users and software vendors alike, so this is on the docket but not yet fully defined. Let me know if you have ideas… on any of these.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Mortman: Let’s Discuss Zero-Knowledge Data. Craptacular!
  • Adrian Lane: MongoDB database at Risk (PDF). Nice writeup by Jens Heyens, Kai Greshake, and Eric Petryka on misconfigured MongoDB databases sitting on default ports with mongo shell set not to require user credentials. They provide one workaround but you can also enable access controls, change the port temporarily, or disable external Internet access. Another interesting note: they found ~40k instances of MongoDB – not counting Hadoop or Cassandra. Who said big data was a fad?
  • Mike Rothman: Find Improvements That Lie Clearly At Hand. Our own GP argues that it’s better to find quick, dirty, and cheap ways to improve security than to try for perfection. A perfect sentiment. Few fields need to embrace concepts as abstract as infosec does. Software is abstract to begin with; layer humans’ difficulty grasping risk on top of that, and information security has to climb two mountains. Believe it or not, infosec people can learn some things from developers. For better and worse, Agile projects ship code. Developers have clearly embraced Thomas Carlyle: “Our main business is not to see what lies dimly at a distance, but to…”

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Kamal Govindaswamy, in response to Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped.

Rich,

Nice article. Thank you!

A question on your statement around DAM and 2FA not being effective as well. I am curious as to your thoughts on how they could be ineffective against a persistent actor. I can think of a scenario or two but am interested in your thoughts, whether/how they would be bypassed, compromised, etc.

Thanks!

—Adrian Lane

Thursday, February 12, 2015

Cracking the Confusion: Encryption Layers

By Adrian Lane, Rich

Picture enterprise applications as a layer cake: applications sit on databases, databases on files, and files are mapped onto storage volumes. You can use encryption at each of these layers in your application stack: within the application, in the database, on files, or on storage volumes. Where the encryption engine sits in that stack largely determines both security and performance. Encrypting higher up the stack can offer more security, at the cost of added complexity and performance overhead.

There is a similar tradeoff with encryption engine and key manager deployments: more tightly coupled systems offer less complexity, but less security and reliability. Building an encryption system requires a balance between security, complexity, and performance. Let’s take a closer look at each layer and their tradeoffs.

Application Encryption

One of the more secure ways to encrypt application data is to collect it in the application, send it to an encryption server or appliance (or use an encryption library embedded in the application), and then store the encrypted data in a separate database. The application has full control over who sees what, and can secure data without depending on the security of the underlying database, file system, or storage volumes. The keys themselves might be on the encryption server, or could even be stored in yet another system. A separate key store increases security, simplifies management of multiple encryption appliances, and helps keep keys safe for data movement – backup, restore, and migration/synchronization to other data centers.
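As a rough illustration of that architecture, the sketch below keeps the key in a mock key manager and stores only ciphertext plus a key ID alongside the record. The XOR one-time pad is a stdlib-only stand-in so the example runs anywhere; a real system would use a vetted cipher such as AES-GCM from an established library, and the key manager would be an external service.

```python
# Architectural sketch of application-layer encryption: the application
# fetches a key from a (mocked) external key manager, encrypts the field
# itself, and stores only ciphertext in the database. The one-time-pad
# XOR is a stand-in cipher for illustration -- do NOT use it in practice.
import secrets

class KeyManager:
    """Stands in for an external key management service."""
    def __init__(self):
        self._keys = {}
    def create_key(self, length: int) -> str:
        key_id = secrets.token_hex(8)
        self._keys[key_id] = secrets.token_bytes(length)
        return key_id
    def get_key(self, key_id: str) -> bytes:
        return self._keys[key_id]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, key))

km = KeyManager()
plaintext = b"4111-1111-1111-1111"
key_id = km.create_key(len(plaintext))           # key lives in the KMS
ciphertext = xor(plaintext, km.get_key(key_id))  # encrypt in the app
record = {"ccn": ciphertext, "key_id": key_id}   # only ciphertext is stored
```

Note that a compromised database administrator sees `record["ccn"]` but has no path to the key, which is the separation-of-duties benefit discussed below.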

Database Encryption

Relational database management systems (RDBMS) typically have two encryption options: transparent and column. In our layer cake above, columnar encryption occurs as applications insert data into the database, whereas transparent encryption occurs as the database writes data out. Transparent encryption is applied automatically to data before it is stored at the file or disk layer. In this model encryption and key management happen behind the scenes, without the user’s knowledge and without application programming changes. The database management system handles encryption and decryption operations as data is read (or written), ensuring all data is secured, and offering very good performance. When you need finer control over data access, you can encrypt single columns, or tables, within the database. This approach offers the advantage that only authenticated users of encrypted data are able to gain access, but requires changes to database or application code to manage encryption operations. With either approach there is less burden on application developers to build a crypto system, but slightly less control over who can access sensitive data.

Some third-party tools also offer transparent database encryption by automatically encrypting data as it is stored in files. These tools aren’t part of the database management system itself, so they can work with databases that don’t support TDE directly, and provide greater separation of duties for database administrators.

File Encryption

Some applications, such as payment systems and web applications, do not use databases and instead store sensitive data in files. Encryption is applied transparently as data is written to files. This type of encryption is offered as a third-party add-on to the file system, or embedded within the operating system. Encryption and decryption are transparent to both users and applications. Data is decrypted when a user requests a file, after they have authenticated to the system. If the user does not have permission to read the file, or has not provided proper credentials, they only get encrypted data. File encryption is commonly used to protect “data at rest” in applications that do not include encryption capabilities, including legacy enterprise applications and many big data platforms.

Disk/Volume Encryption

Many off-the-shelf disk drives and Storage Area Network (SAN) arrays include automatic data encryption. Encryption is applied as data is written to disk, and decrypted by authenticated users/apps when requested. Most enterprise-class systems hold encryption keys locally to support encryption operations, but rely on external key management services to manage keys and provide advanced key services such as key rotation. Volume encryption protects data in case drives are physically stolen. Authenticated users and applications are provided unencrypted copies of files and data.

Tradeoffs

In general, the further “up the stack” you deploy encryption, the more secure your data is. The price of that extra security is more difficult integration, usually in the form of application code changes. Ideally we would encrypt all data at the application layer and fully leverage user authentication, authorization, and business context to determine who can see sensitive data. In the real world the code changes required for this level of precise control often pose insurmountable engineering challenges, or are cost prohibitive. Surprisingly, transparent encryption often performs faster than application-layer encryption, even with larger data sets. The tradeoff is moving high enough “up the stack” to address relevant threats while minimizing the pain of integration and management. Later in this series we will walk you through the selection process in detail.

Next up in this series: key management options.

Adrian Lane, Rich

Wednesday, February 11, 2015

Cracking the Confusion: Building an Encryption System

By Rich, Adrian Lane

This is the second post in a new series. If you want to track it through the entire editing process, you can follow along and contribute on GitHub. You can read the first post here.

Building an Encryption System

In a straightforward application we normally break out the components – such as the encryption engine in an application server, the data in a database, and key management in an external service or appliance.

Or, for a legacy application, we might instead enable Transparent Database Encryption (TDE) for the database, with the encryption engine and data both on the same server, but key management elsewhere.

All data encryption systems are defined by where these pieces are located – which, even assuming everything works perfectly, defines the protection level of the data. We will go into the different layers of data encryption in the next section, but at a high level they are:

  • In the application where you collect the data.
  • In the database that holds the data.
  • In the files where the data is stored.
  • On the storage volume (typically a hard drive, tape, or virtual storage) where the files reside.

All data flows through that stack (sometimes skipping applications and databases for unstructured data). Encrypt at the top and the data is protected all the way down, but this adds complexity to the system and isn’t always possible. Once we start digging into the specifics of different encryption options you will see that defining your requirements almost always naturally leads you to select a particular layer, which then determines where to place the components.

The Three Laws of Data Encryption

Years ago we developed the Three Laws of Data Encryption as a tool to help guide the encryption decisions listed above. When push comes to shove, there are only three reasons to encrypt data:

  1. If the data moves, physically or virtually.
  2. To enforce separation of duties beyond what is possible with access controls. Usually this only means protecting against administrators because access controls can stop everyone else.
  3. Because someone says you have to. We call this “mandated encryption”.

Here is an example of how to use the rules. Let’s say someone tells you to “encrypt all the credit card numbers” in a particular application. Let’s further say the reason is to prevent loss of data if a database administrator account is compromised, which eliminates our third reason.

The data isn’t necessarily moving, but we want separation of duties to protect the database even if someone steals administrator credentials. Encrypting at the storage volume layer wouldn’t help because a compromised administrative account still has access within the database. Encrypting the database files alone wouldn’t help either.

Encrypting within the database is an option, depending on where the keys are stored (they must be outside the database) and some other details we will get to later. Encrypting in the application definitely helps, since that is completely outside the database. But in either case you still need to know when and where an administrator could potentially access decrypted data.

That’s how it all ties together. Know why you are encrypting, then where you can potentially encrypt, then how to position the encryption components to achieve your security objectives.

Tokenization and Data Masking

Two alternatives to encryption are sometimes offered in commercial encryption tools: tokenization and data masking. We will spend more time on them later, but simply define them for now:

  • Tokenization replaces a sensitive piece of data with a random piece of data that can fit the same format (such as by looking like a credit card number without actually being a valid credit card number). The sensitive data and token are then stored together in a highly secure database for retrieval under limited conditions.
  • Data masking replaces sensitive data with random data, but the two aren’t stored together for later retrieval. Masking can be a one-way operation, such as generating a test database, or a repeatable operation such as dynamically masking a specific field for an application user based on permissions.
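The tokenization definition above fits in a few lines of Python. The dict standing in for the token vault, and the purely random digit tokens, are simplifications for illustration:

```python
# Minimal tokenization sketch: replace a card number with a random,
# format-preserving token and keep the mapping in a separate, tightly
# controlled store (a dict stands in for the secure token vault).
import secrets

_vault: dict[str, str] = {}   # token -> original value; the "vault"

def tokenize(pan: str) -> str:
    token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    return _vault[token]      # only callable under limited conditions

pan = "4111111111111111"
tok = tokenize(pan)           # same length and format, but random digits
```

Because the token is random rather than derived from the card number, stealing the application database alone yields nothing usable – the attacker would also need the vault.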

For more information on tokenization vs. encryption you can read our paper.

That covers the basics of encryption systems. Our next section will go into details of the encryption layers above before delving into key management, platform features, use cases, and the decision tree to pick the right option.

Rich, Adrian Lane

Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications

By Rich, Adrian Lane

This is the first post in a new series. If you want to track it through the entire editing process, you can follow it and contribute on GitHub.

The New Age of Encryption

Data encryption has long been part of the information security arsenal. From passwords, to files, to databases, we rely on encryption to protect our data in storage and on the move. It’s a foundational element in any security professional’s education. But despite its long history and deep value, adoption inside data centers and applications has been relatively – even surprisingly – low.

Today we see encryption growing in the data center at an accelerating rate, due to a confluence of reasons. A trite way to describe it is “compliance, cloud, and covert affairs”. Organizations need to keep auditors off their backs; keep control over data in the cloud; and stop the flood of data breaches, state-sponsored espionage, and government snooping (even their own).

And thanks to increasing demand, there is a growing range of options, as vendors and even free and Open Source tools address the opportunity. We have never had more choice, but with choices comes complexity; and outside of your friendly local sales representative, guidance can be hard to come by.

For example, given a single application collecting an account number from each customer, you could encrypt it in any of several different places: the application, the database, or storage – or use tokenization instead. The data is encrypted (or substituted), but each place you might encrypt raises different concerns. What threats are you protecting against? What is the performance overhead? How are keys managed? Does it meet compliance requirements?

This paper cuts through the confusion to help you pick the best encryption options for your projects. In case you couldn’t guess from the title, our focus is on encrypting in the data center – applications, servers, databases, and storage. Heck, we will even cover cloud computing (IaaS: Infrastructure as a Service), although we covered that in depth in another paper. We will also cover tokenization and its relationship with encryption.

We won’t cover encryption algorithms, cipher modes, or product comparisons. We will cover different high-level options and technologies, such as when to encrypt in the database vs. in the application, and what kinds of data are best suited for tokenization. We will also cover key management, some essential platform features, and how to tie it all together.

Understanding Encryption Systems

When most security professionals first learn about encryption the focus is on keys, algorithms, and modes. We learn the difference between symmetric and asymmetric and spend a lot of time talking about Bob and Alice.

Once you start working in the real world your focus needs to change. The fundamentals are still important but now you need to put them into practice as you implement encryption systems – the combination of technologies that actually protects data. Even the strongest crypto algorithm is worthless if the system around it is full of flaws.

Before we go into specific scenarios let’s review the basic concepts behind building encryption systems, because they become the basis for decisions on which encryption options to use.

The Three Components of a Data Encryption System

When encrypting data, especially in applications and data centers, knowing how and where to place these pieces is incredibly important, and mistakes here are one of the most common causes of failure. We use all our data at some point, and understanding where the exposure points are, where the encryption components reside, and how they tie together, all determine how much actual security you end up with.

Three major components define the overall structure of an encryption system.

  • The data: The object or objects to encrypt. It might seem silly to break this out, but the security and complexity of the system depend on the nature of the payload, as well as where it is located or collected.
  • The encryption engine: This component handles actual encryption (and decryption) operations.
  • The key manager: This handles keys and passes them to the encryption engine.

In a basic encryption system all three components are likely located on the same system. As an example take personal full disk encryption (the built-in tools you might use on your home Windows PC or Mac): the encryption key, data, and engine are all stored and used on the same hardware. Lose that hardware and you lose the key and data – and the engine, but that isn’t normally relevant. (Neither is the key, usually, because it is protected with another key, or passphrase, that is not stored on the system – but if the system is lost while running, with the key in memory, that becomes a problem). For data centers these major components are likely to reside on different systems, increasing complexity and security concerns over how the pieces work together.

Rich, Adrian Lane

Monday, February 09, 2015

Firestarter: It’s Not My Fault!

By Rich

Rich, Mike, and Adrian each pick a trend they expect to hammer us in 2015. Then they talk about it, probably too much. From threat intel to tokenization to SaaS security.

And oh, we did have to start with a dig at the Pats. Cheating? Super Bowl? Really? Come on now.

Watch or listen:


—Rich

Sunday, February 08, 2015

Applied Threat Intelligence: Building a TI Program

By Mike Rothman

As we wrap up our Applied Threat Intelligence series, we have already defined TI and worked our way through a number of the key use cases (security monitoring, incident response, and preventative controls) where TI can help improve your security program, processes, and posture. The last piece of the puzzle is building a repeatable process to collect, aggregate, and analyze the threat intelligence. This should include a number of different information sources, as well as various internal and external data analyses to provide context to clarify what the intel means to you.

As with pretty much everything in security, handling TI is not “set and forget”. You need to build a repeatable process to select data providers and continually reassess the value of those investments. You will need to focus on integration; as we described, data isn’t helpful if you can’t use it in practice. And your degree of comfort in automating processes based on threat intelligence will impact day-to-day operational responsibilities.

First you need to decide where the threat intelligence function will fit organizationally. Larger organizations tend to formalize an intelligence group, while smaller entities need to add intelligence gathering and analysis to the task lists of existing staff. Of all the things that could land on a security professional, an intelligence research responsibility isn’t bad. It provides exposure to cutting-edge attacks and makes a difference in your defenses, so that’s how you should sell it to overworked staffers who don’t want yet another thing on their to-do lists.

But every long journey begins with the first step, so let’s turn our focus to collecting intel.

Gather Intelligence

The intelligence gathering process starts with an analysis of your adversaries: who they are, what they are most likely to try to achieve, and what kinds of tactics they use to achieve their missions. With those answers you can focus on the intelligence sources that best address your probable adversaries. Then identify the kinds of data you need. This is where the previous three posts come in handy: depending on which use cases you are trying to address, you will know whether to focus on malware indicators, compromised devices, IP reputation, command and control indicators, or something else.

Then start shopping. Some folks love to shop, others not so much. But it’s a necessary evil; fortunately, given the threat intelligence market’s recent growth, you have plenty of options. Let’s break down a few categories of intel providers, with their particular value:

  • Commercial: These providers employ research teams to perform proprietary research, and tend to attain high visibility by merchandising findings with fancy exploit names and logos, spy-thriller stories of how adversary groups compromise organizations and steal data, and shiny maps of global attacks. They tend to offer particular strength regarding specific adversary classes. Look for solid references from your industry peers.
  • OSINT: Open Source Intelligence (OSINT) providers specialize in mining the huge numbers of information security sources available on the Internet. Their approach is all about categorization and leverage because there is plenty of information available free. These folks know where to find it and how to categorize it. They normalize the data and provide it through a feed or portal to make it useful for your organization. As with commercial sources, the question is how valuable any particular source is to you. You already have too much data – you only need providers who can help you wade through it.
  • ISAC: There are many Information Sharing and Analysis Centers (ISAC), typically built for specific industries, to communicate current attacks and other relevant threat data among peers. As with OSINT, quality can be an issue, but this data tends to be industry specific so its relevance is pretty well assured. Participating in an ISAC obligates you to contribute data back to the collective, which we think is awesome. The system works much better when organizations both contribute and consume intelligence, but we understand there are cultural considerations. So you will need to make sure senior management is okay with it before committing to an ISAC.

Another aspect of choosing intelligence providers is figuring out whether you are looking for generic or company-specific information. OSINT providers are more generic, while commercial offerings can go deeper – various ‘Cadillac’ offerings even include analysts dedicated specifically to your organization, proactively searching grey markets, carder forums, botnets, and other places for intelligence relevant to you.

Managing Overlap

With disparate data sources it is a challenge to ensure you don’t waste time on multiple instances of the same alert. One key to determining overlap is an understanding of how the intelligence vendor gets their data. Do they use honeypots? Do they mine DNS traffic and track new domain registrations? Have they built a cloud-based malware analysis/sandboxing capability? You can categorize vendors by their tactics to help you pick the best for your requirements.

To choose between vendors you need to compare their services for comprehensiveness, timeliness, and accuracy. Sign up for trials of a number of services and monitor their feeds for a week or so. Does one provider consistently identify new threats earlier? Is their information correct? Do they provide more detailed and actionable analysis? How easy will it be to integrate their data into your environment for your use cases?

Don’t fall for marketing hyperbole about proprietary algorithms, Big Data analysis, or staff linguists penetrating hacker dens and other stories straight out of a spy novel. It all comes down to data, and how useful it is to your security program. Buyer beware, and make sure you put each intelligence provider through its paces before you commit.

Our last point to stress is the importance of short agreements, especially up front. You cannot know how these services will work for you until you actually start using them. Many of these intelligence companies are startups, and might not be around in 3 or 4 years. Once you identify a set of core intelligence feeds, longer deals can be cut, but we recommend against doing that before your TI process matures and your intelligence vendor establishes a track record addressing your needs.

To Platform or Not to Platform

Now that you have chosen intelligence feeds, what will you do with the data? Do you need a stand-alone platform to aggregate all your data? Will you need to stand up yet another system in your environment, or can you leverage something in the cloud? Will you actually use your intelligence providers’ shiny portals? Or do you expect alerts to show up in your existing monitoring platforms, or be sent via email or SMS?

There are many questions to answer as part of operationalizing your TI process. First you need to figure out whether you already have a platform in place. Existing security providers (specifically SIEM and network security vendors) now offer threat intelligence ‘supermarkets’ to enable you to easily buy and integrate data into their monitoring and control environments. Even if your vendors don’t offer a way to easily buy TI, many support standards such as STIX and TAXII to facilitate integration.
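As a small example of why those standards help, here is how simple it is to pull indicators out of a STIX bundle. The sample uses the JSON form of STIX 2.x for readability (2015-era STIX 1.x is XML, but the idea is identical), and the indicator itself is invented:

```python
# Sketch: extract indicator patterns from a STIX 2.x JSON bundle so they
# can be loaded into a monitoring platform. The bundle below is a
# hand-written sample, not real feed data.
import json

bundle = json.loads("""
{
  "type": "bundle",
  "objects": [
    {"type": "indicator",
     "name": "C2 beacon address",
     "pattern": "[ipv4-addr:value = '203.0.113.50']"},
    {"type": "malware", "name": "SomeRAT"}
  ]
}
""")

indicators = [obj["pattern"] for obj in bundle["objects"]
              if obj["type"] == "indicator"]
print(indicators)
```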

We are focusing on applied threat intelligence, so the decision hinges on how you will use the threat intelligence. If you have a dedicated team to evaluate and leverage TI, have multiple monitoring and/or enforcement points, or want more flexibility in how broadly you use TI, you should probably consider a separate intelligence platform or ‘clearinghouse’ to manage TI feeds.

Selecting the Platform

If you decide to look at stand-alone threat intelligence platforms there are a couple key selection criteria to consider:

  1. Open: The TI platform’s task is to aggregate information, so it must be easy to get information into it. Intelligence feeds are typically just data (often XML), and increasingly distributed in industry-standard formats such as STIX, which make integration relatively straightforward. But make sure any platform you select supports the data feeds you need, so you can use the data that’s important to you without being restricted by your platform.
  2. Scalable: You will use a lot of data in your threat intelligence process, so scalability is important. But computational scalability is likely more important – you will be intensively searching and mining aggregated data, so you need robust indexing. Unfortunately scalability is hard to test in a lab, so ensure your proof of concept is a close match to your production environment.
  3. Search: Threat intelligence (like the rest of security) doesn’t lend itself to absolute answers. So make TI the start of your process of figuring out what happened in your environment, and leverage the data for your particular use cases as we described earlier. One clear requirement, for all use cases, is search. So make sure your platform makes it easy to search all your TI data sources.
  4. Urgency Scoring: Applied Threat Intelligence is all about betting on which attackers, attacks, and assets are the most important to worry about, so you will find considerable value in a flexible scoring mechanism. Scoring factors should include assets, intelligence sources, and attacks, so you can calculate an urgency score. It might be as simple as red/yellow/green, depending on the sophistication of your security program.
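As a sketch of how such a scoring mechanism might work – the factor names, weights, and red/yellow/green cutoffs below are purely illustrative, not from any particular product:

```python
# Hypothetical TI urgency score: combine three 0-10 factors into a single
# 0-10 score, then bucket it red/yellow/green. Weights are illustrative.

def urgency_score(asset_value, source_reliability, attack_severity):
    """What is attacked matters most, then how much we trust the source."""
    score = 0.5 * asset_value + 0.3 * source_reliability + 0.2 * attack_severity
    if score >= 7:
        return score, "red"     # act now
    if score >= 4:
        return score, "yellow"  # investigate soon
    return score, "green"       # log and move on

# A critical asset plus a reliable feed pushes the score into the red bucket.
score, bucket = urgency_score(asset_value=9, source_reliability=8,
                              attack_severity=6)
```

Even a crude formula like this beats eyeballing feeds, because it forces you to be explicit about which assets and sources you actually trust.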

Determining Relevance in the Heat of Battle

So how can you actually use the threat intelligence you painstakingly collected and aggregated? Relevance to your organization depends on the specifics of the threat, and whether it can be used against you. Focus on real potential exploits – a vulnerability which does not exist in your environment is not a real concern. For example you probably don’t need to worry about financial malware if you don’t hold or have access to credit card data. That doesn’t mean you shouldn’t pay any attention to these attacks – many exploits leverage a variety of interesting tactics, which might become a part of a relevant attack in the future. Relevance encompasses two aspects:

  1. Attack surface: Are you vulnerable to the specific attack vector? Weaponized Windows 2000 exploits aren’t relevant if you don’t have any Windows 2000 systems. Once you have patched all instances of a specific vulnerability on your devices you get a respite from worrying about the exploit. Your asset base and internally collected vulnerability information provide this essential context.
  2. Intelligence Reliability: You need to keep re-evaluating each threat intelligence feed to determine its usefulness. A feed which triggers many false positives is less relevant. On the other hand, if a feed usually nails a certain type of attack, you should take those warnings particularly seriously. Note that attack surface may not be restricted to your own assets and environment. Service providers, business partners, and even customers represent indirect risks – if one of them is compromised, an attacker might have a direct path to you.
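The attack surface check amounts to intersecting each indicator’s target platforms with your asset inventory. A minimal sketch – the inventory and feed record shapes here are invented for illustration:

```python
# Filter raw TI indicators down to those relevant to your attack surface.
# "installed" would come from your asset/vulnerability inventory in practice.

installed = {"windows-2012", "rhel-7", "apache-2.4"}

indicators = [
    {"id": "TI-1", "targets": {"windows-2000"},            "feed": "feed-a"},
    {"id": "TI-2", "targets": {"apache-2.4"},              "feed": "feed-b"},
    {"id": "TI-3", "targets": {"rhel-7", "solaris-10"},    "feed": "feed-a"},
]

# Keep only indicators whose target platforms intersect what we actually run.
relevant = [i for i in indicators if i["targets"] & installed]
# TI-1 drops out: weaponized Windows 2000 exploits don't matter here.
```

The same intersection logic extends naturally to partner and service provider assets if you track them.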

Constantly Evaluating Intelligence

How can you determine the reliability of a TI source? Threat data ages very quickly and TI sources such as IP reputation can change hourly. Any system you use to aggregate threat intelligence should be able to report on the number of alerts generated from each TI source, without hurting your brain building reports. These reports show value from your TI investment – it is a quick win if you can show how TI identified an attack earlier than you would have detected it otherwise. Additionally, if you use multiple TI vendors, these reports enable you to compare them based on actual results.
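A per-source report like this can be as simple as computing each feed’s precision from analyst dispositions. A sketch, assuming a hypothetical alert record format:

```python
# Compare TI feeds by how their alerts resolved. Each alert record carries
# the feed that generated it and the analyst's disposition (invented shape).

from collections import Counter

alerts = [
    {"feed": "feed-a", "disposition": "true_positive"},
    {"feed": "feed-a", "disposition": "false_positive"},
    {"feed": "feed-a", "disposition": "false_positive"},
    {"feed": "feed-b", "disposition": "true_positive"},
    {"feed": "feed-b", "disposition": "true_positive"},
]

def feed_precision(alerts):
    """Fraction of each feed's alerts that turned out to be real attacks."""
    hits, totals = Counter(), Counter()
    for a in alerts:
        totals[a["feed"]] += 1
        if a["disposition"] == "true_positive":
            hits[a["feed"]] += 1
    return {feed: hits[feed] / totals[feed] for feed in totals}

report = feed_precision(alerts)  # feed-a is noisy; feed-b is earning its keep
```

Run against real dispositions over time, numbers like these are exactly the evidence you need at renewal time.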

Marketing Success Internally

Over time, as with any security discipline, you will refine your verification/validation/investigation process. Focus on what worked and what didn’t, and tune your process accordingly. It can be bumpy when you start really using TI sources – typically you start by receiving a large number of alerts, and following them down a bunch of dead ends. It might remind you, not so fondly, of the SIEM tuning process. But security is widely regarded as overhead, so you need a Quick Win with any new security technology.

TI will find stuff you didn’t know about and help you get ahead of attacks you haven’t seen yet. But that success story won’t tell itself, so when the process succeeds – likely early on – you will need to publicize it early and often. A good place to start is discovery of an attack in progress. You can show how you successfully detected and remediated the attack thanks to threat intelligence. This illustrates that you will be compromised (which must be constantly reinforced to senior management), so success is a matter of containing damage and preventing data loss. The value of TI in this context is in shortening the window between exploit and detection.

You can also explain how threat intelligence helped you evolve security tactics based on what is happening to other organizations. For instance, if you see what looks like a denial of service (DoS) attack on a set of web servers, but already know from your intelligence efforts that DoS is a frequent decoy to obscure exfiltration activities, you have better context to be more sensitive to exfiltration attempts. Finally, to whatever degree you quantify the time you spend remediating issues and cleaning up compromises, you can show how much you saved by using threat intelligence to refine efforts and prioritize activities.

As we have discussed through this series, threat intelligence can even the battle between attackers and defenders, to a degree. But to accomplish this you must be able to gather relevant TI and leverage it in your processes. We have shown how to use TI both a) at the front end of your security process (in preventative controls) to disrupt attacks, and b) to more effectively monitor and investigate attacks – both in progress and afterwards. We don’t want to portray any of this as ‘easy’, but nothing worthwhile in security is easy. It is about continuously improving your processes to favorably impact your security posture.

—Mike Rothman

Friday, February 06, 2015

Even if Anthem Had Encrypted, It Probably Wouldn’t Have Helped

By Rich

Earlier today in the Friday Summary I vented frustrations at news articles blaming the victims of crimes, and often guessing at the facts. Having been on the inside of major incidents that made the international news (more physical than digital in my case), I know how little often leaks to the outside world.

I picked on the Wired article because it seemed obsessed with the lack of encryption on Anthem data, without citing any knowledge or sources. Just as we shouldn’t blindly trust our government, we shouldn’t blindly trust reporters who won’t even say, “an anonymous source claims”. But even a broken clock is right twice a day, and the Wall Street Journal does cite an insider who says the database wasn’t encrypted (link to The Verge because the WSJ article is subscription-only).

I won’t even try to address all the issues involved in encrypting a database. If you want to dig in we wrote a (pretty good) paper on it a few years ago. Also, I’m very familiar with the healthcare industry, where encryption is the exception more than the rule. Many of their systems simply can’t handle it due to vendors not supporting it. There are ways around that but they aren’t easy.

So let’s look at the two database encryption options most likely for a system like this:

  1. Column (field) level encryption.
  2. Transparent Database Encryption (TDE).

Field-level encryption is complex and hard, especially in large databases, unless your applications were designed for it from the start. In the work I do with SaaS providers I almost always recommend it, but implementation isn’t necessarily easy even on new systems. Retrofitting it usually isn’t possible, which is why people look at things like Format Preserving Encryption or tokenization. Neither of which is a slam dunk to retrofit.
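To illustrate the tokenization option mentioned above, here is a minimal in-memory token vault – purely a sketch: the `TokenVault` class is invented, and a real vault would be a separate hardened service with its own access controls:

```python
# Tokenization sketch: the application database stores only a random token,
# and the real value lives in a separate vault. In-memory dict for illustration.

import secrets

class TokenVault:
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        """Swap a sensitive value for a random, meaningless token."""
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Authorized callers exchange the token for the original value."""
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("123-45-6789")  # what the application database stores
```

The appeal for retrofits is that the token can preserve the original field’s length and type, so downstream systems keep working without ever touching the sensitive value.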

TDE is much cleaner, and even if your database doesn’t support it, there are third party options that won’t break your systems.

But would either have helped? Probably not in the slightest, based on a memo obtained by Steve Ragan at CSO Online.

The attacker had proficient understanding of the data platforms and successfully utilized valid database administrator logon information

They discovered a weird query siphoning off data, using valid credentials. Now I can tell you how to defend against that. We have written multiple papers on it, and it uses a combination of controls and techniques, but it certainly isn’t easy. It also breaks many common operational processes, and may not even be possible depending on system requirements. In other words, I can always design a new system to make attacks like this extremely hard, but the cost to retrofit an existing system could be prohibitive.

Back to Anthem. Of the most common database encryption implementations, the odds are that neither would have even been much of a speed bump to an attack like this. Once you get the right admin credentials, it’s game over.

Now if you combined that with multi-factor authentication and Database Activity Monitoring, it would likely have helped. But not necessarily against a persistent attacker with time to learn your systems and hijack legitimate credentials. Or perhaps encryption that limited access based on account and process, assuming your DBAs never need to run big direct queries.

There are no guarantees in security, and no silver bullets. Maybe encrypting the database would have helped, but probably not the way most people do it. But it sure makes a nice headline.

I am starting a new series on datacenter encryption and tokenization Monday, which will cover some of these issues. Not because of the breach – I am actually already 2 weeks late.

—Rich

Summary: Analyze, Don’t Guess

By Rich

Rich here,

Another week, another massive data breach.

This morning I woke up to a couple interview requests over this. I am always wary of speaking on incidents based on nothing more than press reports, so I try to make clear that all I can do is provide some analysis. Maybe I shouldn’t even do that, but I find I can often defuse hyperbole and inject context, even without speaking to the details of the incident.

That’s a fine line any of us on press lists walk. To be honest, more often than not I see people fall into the fail bucket by making assumptions or projecting their own bias.

Take this Anthem situation. I kept my comments along the lines of potential long-term issues for people now suffering exposed personal information (for example a year of credit monitoring is worthless when someone loses your Social Security Number). I was able to talk about who suffers the consequences of these breaches, trends in long-term impacts on breached companies, and the weaknesses in our financial and identity systems that make this data valuable.

I did all of that without blaming Anthem, guessing as to attribution, or discussing potential means and motivations. Those are paths you can consider if you have inside information (verified, of course), but even then you need to be cautious.

It was disappointing to read some of the articles on this breach. One in particular stood out because it was from a major tech publication, and the reporter seemed more interested in blaming Anthem and looking smarter than anything else. This is the same person who seriously blew it on another story recently due to the same hubris (but no apologies, of course).

There is a difference between analyzing and guessing, and the difference is often hubris.

Analysis means admitting what you don’t know, and challenging and doubting your own assumptions. Constantly.

I have a huge fracking ego, and I hate being wrong, but I care more about the truth and facts than being right or wrong. To me, it’s like science. Present the facts and the path to your conclusions, making any assumptions clear. Don’t present assumptions as facts, and always assume you don’t know everything, and that what you do know will change. Most of the time.

And for crap’s sake, enough with blaming the victim and thinking you know how the breach occurred when you don’t have a single verified source (if you have one, put it in the article). Go read Dennis Fisher’s piece for how to play it straight and still make a point.

Unless you are Ranum. We all need to bow down to Ranum, who totally gets it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

We know, slow week. We blame random acts of sleep deprivation.

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

—Rich

Wednesday, February 04, 2015

New Paper: Security and Privacy on the Encrypted Network

By Mike Rothman

Our Security and Privacy on the Encrypted Network paper tackles setting security policies to ensure that data doesn’t leak out over encrypted tunnels, and that employees adhere to corporate acceptable use policies, by decrypting traffic as needed. It also addresses key use cases and strategies for decrypting network traffic, including security monitoring and forensics, to ensure you can properly alert on security events and investigate incidents. We include guidance on how to handle human resources and compliance issues because an increasing fraction of network traffic is encrypted.

Check out this excerpt to get a feel for why you will encrypt and decrypt more on networks in the near future:

Trends (including cloud computing and mobility) mean organizations have no choice but to encrypt more traffic on their networks. Encrypting the network prevents adversaries from sniffing traffic to steal credentials and ensures data moving outside the organization is protected from man-in-the-middle attacks. So we expect a much greater percentage of both internal and external network traffic to be encrypted over the next 2-3 years.

We would like to thank Blue Coat for licensing the content in this paper. Without our licensees you’d be paying Big Research big money to get a fraction of the stuff we publish, free.

Check out the landing page for Security and Privacy on the Encrypted Network or download it directly (PDF).

—Mike Rothman

Incite 2/4/2015: 30x32

By Mike Rothman

It was a pretty typical day. I was settled into my seat at Starbucks writing something or other. Then I saw the AmEx notification pop up on my phone. $240.45, Ben Sherman, on the card I use for Securosis expenses. Huh? Who’s Ben Sherman? Pretty sure my bookie’s name isn’t Ben. So using my trusty Google fu I saw they are a highbrow mens clothier (nice stuff, BTW). But I didn’t buy anything from that store.

My well-worn, “Crap. My card number got pwned again.” process kicked in. Though I was far ahead of the game this time. I found the support number for Ben Sherman and left a message with the magic words, “blah blah blah fraudulent transaction blah blah,” and amazingly, I got a call back within 10 minutes. They kindly canceled the order (which saved them money) and gave me some details on the transaction.

The merchandise was evidently ordered by a “Scott Rothman,” and it was to be shipped to my address. That’s why the transaction didn’t trigger any fraud alerts – the name was close enough and the billing and shipping addresses were legit. So was I getting punked? Then I asked what was ordered.

She said a pair of jeans and a shirt. For $250? Damn, highbrow indeed. When I inquired about the size, that was the kicker. 30 waist and 32 length on the jeans. 30x32. Now I’ve dropped some weight, but I think the last time I was in size 30 pants was third grade or so. And the shirt was a Small. I think I outgrew small shirts in second grade. Clearly the clothes weren’t for me. The IP address of the order was in Cumming, GA – about 10 miles north of where I live – and they provided a bogus email address.

I am still a bit perplexed by the transaction – it’s not like the perpetrator would benefit from the fraud. Unless they were going to swing by my house to pick up the package when it was delivered by UPS. But they’ll never get the chance, thanks to AmEx, whose notification allowed me to cancel the order before it shipped. So I called up AmEx and asked for a replacement card. No problem – my new card will be in my hands by the time you read this.

The kicker was an email I got yesterday morning from AmEx. Turns out they already updated my card number in Apple Pay, even though I didn’t have the new card yet. So I could use my new card on my fancy phone and get a notification when I used it.

And maybe I will even buy some pants from Ben Sherman to celebrate my new card. On second thought, probably not – I’m not really a highbrow type…

–Mike


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube – take an hour and watch it. Your emails, alerts and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Applied Threat Intelligence

Network Security Gateway Evolution

Security and Privacy on the Encrypted Network

Newly Published Papers


Incite 4 U

  1. It’s about applying the threat intel: This post on the ThreatConnect blog highlights an important aspect that may get lost in the rush to bring shiny threat intelligence data to market. As lots of folks, notably Rick Holland and yours truly, have been saying for a while: it’s not about having the data – it’s about using it. The post points out that data is data. Without understanding how it can be applied to your security program, it’s just bits. That’s why my current series focuses on using threat intel within security monitoring, incident response, and preventative controls. Rick’s written a bunch of stuff making similar points, including this classic about how vendors always try to one-up each other. I’m not saying you need (yet another) ‘platform’ to aggregate threat intel, but you definitely need a strategy to make the best use of data within your key use cases. – MR

  2. Good enough: I enjoyed Gilad Parann-Nissany’s post on 10 Things You Need To Know about HIPAA Compliance in the Cloud as generic guidance for PHI security in the cloud. But his 10th point really hits the mark: HIPAA is not feared at all. The vast majority of HIPAA fines have been for physical disclosure of PHI, not electronic. While a handful of firms go out of their way to ensure their cloud infrastructure is secure (which we applaud), they aren’t doing security because of HIPAA. Few cloud providers go beyond encrypting data stores (whatever that means) and securing public network connections, because that’s good enough to avoid major fines. Sometimes “good enough” is just that. – AL

  3. 20 Questions: Over the years I have been in management or, at Gartner, part of a hiring committee at various times. I have not, however, had to really interview for most of my jobs (at least not normal interviews). The most interesting situation was the hiring process at the FBI. That interview was so structured that the agents had to go through special training just to give it. They tested me not only on answering the questions, but on answering them in the proper way, as instructed at the beginning, in the proper time window. (I passed, but was cut later either due to budget reductions at the time, or some weirdness in my background. Even though I eliminated all witnesses, I swear!). But I have always struggled a bit at getting technical hires right, especially in security. The best security pros I know have broad knowledge and an ability to assimilate and correlate multiple kinds of information. I really like Richard Bejtlich’s hiring suggestion. Show them a con video, and have them explain the ins and outs and interpret it. That sure beats the programming tests I used when running dev shops because it gives you great insight into their thought process and what they think is important. – RM

  4. Mixed results: IBM is touting a technology called Identity Mixer as a way for users to both conceal sensitive attributes of their identity, and as a secure content delivery mechanism. But this approach is really Digital Rights Management – which essentially means encryption. This approach has been tried many times for both content delivery and user data protection. The issue is that when allowing a third party to decrypt or access any protected data, the data must be decrypted and removed from its protection. If you use this technology to deliver videos or music it is only as secure as the users who access the data. This approach works well enough for DirecTV because they control the hardware and software ecosystem, but falls apart in conventional cases where the user controls the endpoint. Similarly, sharing encrypted data and keys with a third party defeats the point. – AL

  5. Follow the money: I thought about calling this one “Protection racket”, but even the CryptoLocker guys actually unlock your stuff when you pay them, as promised. It turns out the AdBlock Plus folks take money from Microsoft, Google, and Amazon to allow their ads through. The company’s business model is built on whitelisting ‘good’ ads that comply with their policies (which often includes payment to the AdBlock Plus developers). And they do acknowledge this on their site. That change was made around the end of January 2014 (thank you, Internet Archive). I get it, everyone needs to make money, and not all ads are bad. Many good sites rely on them, although that’s a rough business. I would actually stop blocking most ads if they would stop tracking me even when I don’t click on them. But a business model like this is dangerous. A company becomes beholden to financial interests which don’t necessarily align with its users’. That’s one reason I have been so excited by Apple seeing privacy of customer data as a competitive advantage – as much as companies commit to grand ideals (such as “Don’t be evil.”), it sure is easier to stick to them when they help you make piles of money. – RM

  6. Hack your apps (before the other guys do): This has been out there for a while, but it’s disturbing nonetheless. Marriott collected lots of private information about customers, which isn’t a problem. Unless that information is accessible via a porous mobile app – as it was. I know many organizations take their mobile apps seriously, treating them just like other Internet-facing assets in terms of security. It may be a generalization but that last statement cuts both ways. Organizations that take security seriously do so on any customer-facing technology – with security assessments and penetration tests. And those that don’t… probably don’t. Just understand that mobile apps are a different attack vector, and we will see different ways to steal information. So hack your own apps – otherwise an adversary will. – MR

—Mike Rothman

Applied Threat Intelligence: Use Case #3, Preventative Controls

By Mike Rothman

So far, as we have looked to apply threat intelligence to your security processes, we have focused on detection/security monitoring and investigation/incident response functions. Let’s jump backwards in the attack chain to take a look at how threat intelligence can be used in preventative controls within your environment.

By ‘preventative’ we mean any control that is in the flow, and can therefore prevent attacks. These include:

  1. Network Security Devices: These are typically firewalls (including next-generation models), and intrusion prevention systems. But you can also include devices such as web application firewalls, which operate at different levels in the stack but are inline and can thus block attacks.
  2. Content Security Devices/Services: Web and email filters can also function as preventative controls because they inspect traffic as it passes through and can enforce policies/block attacks.
  3. Endpoint Security Technologies: Protecting an endpoint is a broad category, and can include traditional endpoint protection (anti-malware) and new-fangled advanced endpoint protection technologies such as isolation and advanced heuristics. We described the current state of endpoint security in our Advanced Endpoint Protection paper, so check that out for detail on the technologies.

TI + Preventative Controls

Once again we consider how to apply TI through a process map. So we dust off the very complicated Network Security Operations process map from NSO Quant, simplify a bit, and add threat intelligence.

Rule Management

The process starts with managing the rules that underlie the preventative controls. This includes attack signatures and the policies & rules that control attack response. The process trigger will probably be a service request (open this port for that customer, etc.), signature update, policy update, or threat intelligence alert (drop traffic from this set of botnet IPs). We will talk more about threat intel sources a bit later.

  1. Policy Review: Given the infinite variety of potential monitoring and blocking policies available on preventative controls, keeping the rules current is critical. Keep the severe performance hit (and false positive implications) of deploying too many policies in mind as you decide what policies to deploy.
  2. Define/Update/Document Rules: This next step involves defining the depth and breadth of the security policies, including the actions (block, alert, log, etc.) to take if an attack is detected – whether via rule violation, signature trigger, threat intelligence, or another method. Initial policy deployment should include a Q/A process to ensure no rules impair critical applications’ ability to communicate either internally or externally.
  3. Write/Acquire New Rules: Locate the signature, acquire it, and validate the integrity of the signature file(s). These days most signatures are downloaded, so this ensures the download completed properly. Perform an initial evaluation of each signature to determine whether it applies within your organization, what type of attack it detects, and whether it is relevant in your environment. This initial prioritization phase determines the nature of each new/updated signature, its relevance and general priority for your organization, and any possible workarounds.
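The integrity check in step 3 can be as simple as comparing the downloaded file against a published digest. A sketch using SHA-256 – the sample rule text and helper name are invented:

```python
# Verify that a downloaded signature file arrived intact by comparing its
# SHA-256 digest against the one published by the signature provider.

import hashlib

def verify_signature_file(data: bytes, expected_sha256: str) -> bool:
    """True only if the file's digest matches the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b'alert tcp any any -> any 445 (msg:"example";)'
published = hashlib.sha256(payload).hexdigest()  # stands in for vendor's digest

clean = verify_signature_file(payload, published)          # download intact
tampered = verify_signature_file(payload + b"x", published)  # corrupted/modified
```

In practice the vendor’s digest (or a detached cryptographic signature) comes over a separate trusted channel; a digest shipped alongside the file only detects corruption, not tampering.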

Change Management

In this phase rule additions, changes, updates, and deletions are handled.

  1. Process Change Request: Based on the trigger within the Content Management process, a change to the preventative control(s) is requested. The change’s priority is based on the nature of the rule update and risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if downtime or changes to application use models are anticipated.
  2. Test and Approve: This step includes development of test criteria, performance of any required testing, analysis of results, and release approval of the signature/rule change once it meets your requirements. This is critical if you are looking to automate rules based on threat intelligence, as we will discuss later in the post. Changes may be implemented in log-only mode to observe their impact before committing to blocking mode in production (critical for threat intelligence-based rules). With an understanding of the impact of the change(s), the request is either approved or denied.
  3. Deploy: Prepare the target devices for deployment, deliver the change, and return them to normal operation. Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure no disruption to production systems.
  4. Audit/Validate: Part of the full process of making the change is not only having the Operations team confirm it during the Deploy step, but also having another entity (internal or external, but not part of Ops) audit it to provide separation of duties. This involves validating the change to ensure the policies were properly updated and matching it against a specific request. This closes the loop and ensures there is a documentation trail for every change. Depending on how automated you want this process to be this step may not apply.
  5. Monitor Issues/Tune: The final step of the change management process involves a burn-in period when each rule change is scrutinized for unintended consequences such as unacceptable performance impact, false positives, security exposures, or undesirable application impact. For threat intelligence-based dynamic rules false positives are the issue of most concern. The testing process in the Test and Approve step is intended to minimize these issues, but there are variances between test environments and production networks so we recommend a probationary period for each new or updated rule, just in case.
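The probationary period in step 5 can be expressed as a simple promotion gate: a new TI-based rule stays in log-only mode until it has survived a burn-in window with an acceptable false positive rate. The function name and thresholds below are hypothetical, not from any product:

```python
# Decide whether a TI-based rule is ready to move from log-only to blocking.
# min_days and max_fp_rate are illustrative policy knobs.

def promote_rule(days_in_log_only, alerts, false_positives,
                 min_days=7, max_fp_rate=0.05):
    if days_in_log_only < min_days:
        return "log_only"          # still inside the burn-in window
    if alerts and false_positives / alerts > max_fp_rate:
        return "needs_tuning"      # too noisy to trust inline
    return "blocking"              # safe to enforce in production

# After a full week with 4 false positives out of 200 alerts (2%),
# the rule clears the bar and can be promoted to blocking mode.
state = promote_rule(days_in_log_only=10, alerts=200, false_positives=4)
```

Encoding the gate this way also documents, for auditors, exactly when and why a rule was allowed to start blocking.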

Automatic Deployment

The promise of applied threat intelligence is to have rules updated dynamically per intelligence gleaned from outside your organization. It adds a bit of credibility to “getting ahead of the threat”. You can never really get ‘ahead’ of the threat, but certainly can prepare before it hits you. But security professionals need to accustom themselves to updating rules from data.

We joke in conference talks about how security folks hate the idea of Skynet tweaking their defenses. There is still substantial resistance to updating access control rules on firewalls or IPS blocking actions without human intervention. But we expect this resistance to ebb as cloud computing continues to gain traction, including in enterprise environments. The only way to manage an environment at cloud speed and scale is with automation. So automation is the reality in pretty much every virtualized environment, and making inroads in non-virtual security as well.

So what can you do to get comfortable with automation? Automate things! No, we aren’t being cheeky. You need to start simple – perhaps by implementing blocking rules based on very bad IP reputation scores. Monitor your environment closely to ensure minimal false positives. Tune your rules if necessary, and then move on to another use case.
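A minimal sketch of that first step, assuming a hypothetical reputation feed scored 0-100 (higher is worse) and illustrative thresholds – only the very worst scores trigger an automatic block, with a log-only gray zone while you build confidence:

```python
# Auto-block only IPs with very bad reputation scores; log-only the gray zone.
# Thresholds and the feed format are hypothetical.

BLOCK_THRESHOLD = 90   # 0-100, higher = worse reputation
LOG_THRESHOLD = 70

def action_for(ip, reputation):
    if reputation >= BLOCK_THRESHOLD:
        return ("block", ip)   # push a deny rule to the firewall
    if reputation >= LOG_THRESHOLD:
        return ("log", ip)     # alert-only while tuning for false positives
    return ("allow", ip)

feed = {"203.0.113.7": 95, "198.51.100.2": 75, "192.0.2.1": 10}
decisions = [action_for(ip, score) for ip, score in feed.items()]
```

Start with the block threshold very high, watch the false positive count, and only then lower it or add new automated use cases.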

Not all situations where automated responses make sense are related to threat intelligence. In the case of a data breach, lockdown, or zero-day attack (either imminent or in progress), you might want to implement (temporary) blocks or workarounds automatically based on predefined policies. For example if you detect a device or cloud instance acting strangely, you can automatically move it to a quarantine network (or security zone) for investigation. This doesn’t (and shouldn’t) require human intervention, so long as you are comfortable with your trigger criteria.

Useful TI

Now let’s consider collecting external data useful for preventing attacks. This includes the following types of threat intelligence:

  • File Reputation: Reputation can be thought of as just a fancy term for traditional AV, but whatever you call it, malware proliferates via file transmission. Polymorphic malware does make signature matching much harder, but not impossible. The ability to block known-bad files close to the edge of the network is valuable – the closer to the edge, the better.
  • Adversary Networks: Some networks are just no good. They are run by disreputable hosting companies who provide a safe haven for spammers, bot masters, and other online crime factions. There is no reason your networks should communicate with these kinds of networks. You can use a dynamic list of known bad and/or suspicious addresses to block ingress and egress traffic. As with any intelligence feed, you should monitor effectiveness, because known-bad networks change every second of every day, and keeping current is critical.
  • Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices, especially endpoints. The shiny new term for an attack signature is Indicator of Compromise. But whatever you call it, an IoC is a handy machine-readable way to identify the registry, configuration, and system file changes malicious code makes – which is why we call it a malware indicator. This kind of detailed telemetry from endpoints and networks enables you to prevent attacks on the way in, and take advantage of others’ misfortune.
  • Command and Control Patterns: One specialized type of adversary network detection is intelligence on Command and Control (C&C) networks. These feeds track global C&C traffic to pinpoint malware originators, botnet controllers, and other IP addresses and sites to watch for as you monitor your environment.
  • Phishing Sites: Current advanced attacks tend to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically find email the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and their tactics, and give you something to block on your web filters and email security services.
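
To make the file reputation idea above concrete, here is a minimal sketch of matching inbound files against a feed of known-bad SHA-256 hashes. The feed itself, and where you hook this in (mail gateway, proxy, endpoint agent), are left as assumptions:

```python
# Sketch of file reputation checking near the network edge. The set of
# known-bad SHA-256 hashes would come from your TI feed; exact-hash
# matching will not catch polymorphic variants, only known samples.
import hashlib

def sha256_of(data: bytes) -> str:
    """Compute the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def is_known_bad(data: bytes, bad_hashes: set) -> bool:
    """Match an inbound file against the known-bad hash feed."""
    return sha256_of(data) in bad_hashes
```

In practice you would also record feed hit rates, since a feed that never matches anything is not earning its keep.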

Ensuring your job security is usually job #1, so iterate and tune processes aggressively. Do add new data sources and use cases, but not too fast. Make sure you don’t automate a bad process, which causes false positives and system downtime. Slow and steady wins this race.

We will wrap up this series with our next post, on building out your threat intelligence gathering program. With all these cool use cases for leveraging TI to improve your security program, you should make sure it is reliable, actionable, and relevant.

—Mike Rothman

Friday, January 30, 2015

Summary: Heads up

By Rich

Rich here.

Last week I talked about learning to grind it out. Whether it’s a new race distance, or plowing through a paper or code that isn’t really flowing, sometimes you need to just put your head down, set a pace, and keep moving.

And sometimes that’s the absolute worst thing to do.

I have always been a natural sprinter; attracted both to sports and other achievements I could rocket through with sheer speed and power. I was horrible at endurance endeavors (physical and mental) as a kid and into my early 20’s. I mean, not “pretending to be humble horrible” but “never got the Presidential Physical Fitness thing because I couldn’t run a mile worth a crap” horrible.

And procrastinating? Oh my. I had, I shit you not, a note in my file at the University of Colorado not to “cut him any breaks” because I so thoroughly manipulated the system for so long. (8 years of continuous undergrad… you make a few enemies on the way). It was handwritten on a Post-It, right on my official folder.

It was in my mid-20’s that I gained the mental capacity for endurance. Mountain rescue was the biggest motivator because only a small percentage of patients fell near roads. I learned to carry extremely heavy loads over long distances, and then take care of a patient at the end. You can’t rely on endurance – we used to joke that our patients were stable or dead, since it isn’t like we could just scoop them off the road (mostly).

Grinding is essential, but can be incredibly unproductive if you don’t pop your head up every now and then. Like the time we were on a physically grueling rescue, at about 11,000’, at night, in freezing rain, over rough terrain. Those of us hauling the patient out were turning into zombies, but someone realized we were hitting the kind of zone where mistakes are made, people get hurt, and it was time to stop.

Like I said before: “stable or dead”, and this guy was relatively stable. So we stopped, a couple team members bunkered in with him for the night, and we managed to get a military helicopter for him in the morning. (It may have almost crashed, but we won’t talk about that.)

It hadn’t occurred to me to stop; I was too deep in my inner grind, but it was the right decision. Just like the problem I was having with some code last year. It wouldn’t work, no matter what I did, and I kept trying variation after variation. I hit help forums, chat rooms, you name it.

Then I realized it wasn’t me, it was a bug (this time) in the SDK I was using. Only when I tried to solve the problem from an entirely new angle, instead of trying to fix the syntax, did I figure it out. The cloud, especially, is funny that way. Function diverges from documentation (if there is any) much more than you’d think. Just ask Adrian about AWS SNS and undocumented, mandatory, account IDs.

In security we can be particularly prone to grinding it out. Force those logs into the SIEM, update all the vulnerable servers before the auditor comes back, clear all the IDS alerts.

But I think we are at the early edge of a massive transition, where popping our heads up to look for alternatives might be the best approach. ArcSight doesn’t have an AWS CloudTrail connector? Check out a hybrid ELK stack or cloud-native SIEM. Tired of crash patching for the next insert pseudo-cool name here vulnerability? Talk to your developers about autoscaling and continuous deployment.

Every year I try to block out a week, or at least a few half-days, to sit back, focus on research, and see which of my current assumptions and work patterns are wrong or no longer productive. Call it “active resting”. I think I have come up with some cool stuff for this year, both in my work habits and security controls. Now I just need time to play with the code and configurations, to see if any of it actually works.

But unlike my old patients, my code and writing seem to be both unstable and dead, so I won’t get my hopes too high.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

—Rich

Wednesday, January 28, 2015

Incite 1/28/2015: Shedding Your Skin

By Mike Rothman

You are constantly changing. We all are. You live, you learn, you adapt, you change. It seems that if you pay attention, every 7-9 years or so you realize you hardly recognize the person looking back at you from the mirror. Sometimes the changes are very positive. Other times a cycle is not as favorable. That’s part of the experience. Yet many people don’t think anything changes. They expect the same person year after year.

I am a case in point. I have owned my anger issues from growing up and my early adulthood. They resulted in a number of failed jobs and relationships. It wasn’t until I had to face the reality that my kids would grow up in fear of me that I decided to change. It wasn’t easy, but I have been working at it diligently for the past 8 years, and at this point I really don’t get angry very often.

Done with this skin says the snake

But lots of folks still see my grumpy persona, even though I’m not grumpy. For example I was briefing a new company a few weeks ago. We went through their pitch, and I provided some feedback. Some of it was hard for them to hear because their story needed a lot of work. At some point during the discussion, the CEO said, “You’re not so mean.” Uh, what? It turns out the PR handlers had prepared them for some kind of troll under the bridge waiting to chew their heads off.

At one point I probably was that troll. I would say inflammatory things and be disagreeable because I didn’t understand my own anger. Belittling others made me feel better. I was not about helping the other person, I was about my own issues. I convinced myself that being a douche was a better way to get my message across. That approach was definitely more memorable, but not in a positive way. So as I changed, my approach to business changed as well. Most folks appreciate the kinder Incite I provide. Others miss crankypants, but that’s probably because they are pretty cranky themselves and they wanted someone to commiserate over their miserable existence.

What’s funny is that when I meet new people, they have no idea about my old curmudgeon persona. So they are very surprised when someone tells a story about me being a prick back in the day. That kind of story is inconsistent with what they see. Some folks would get offended by hearing those stories, but I like them. It just underscores how years of work have yielded results.

Some folks have a hard time letting go of who they thought you were, even as you change. You shed your skin and took a different shape, but all they can see is the old persona. But when you don’t want to wear that persona anymore, those folks tend to move out of your life. They need to go, because they don’t support your growth. They hold on to the old.

But don’t fret. New people come in. Ones who aren’t bound by who you used to be – who can appreciate who you are now. And those are the kinds of folks you should be spending time with.

–Mike

Photo credit: “Snake Skin” originally uploaded by James Lee


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Applied Threat Intelligence

Network Security Gateway Evolution

Security and Privacy on the Encrypted Network

Newly Published Papers


Incite 4 U

  1. Click. Click. Boom! I did an interview last week where I said the greatest security risk of the Internet of Things is letting it distract you from all of the other more immediate security risks you face. But the only reason that is even remotely accurate is because I don’t include industrial control systems, multifunction printers, or other more traditional ‘things’ in the IoT. But if you do count everything connected to the Internet, some real problems pop up. Take the fuel gauge vulnerability just released by H D Moore/Rapid 7. Scan the Internet, find hundreds of vulnerable gas stations, all of which could cause real-world kinetic-style problems. The answer always comes back to security basics: know the risk, compartmentalize, update devices, etc. Some manufacturers are responsible, others not so much, and as a security pro it is worth factoring this reality into your risk profile. You know, like, “lightbulb risk: low… tank with tons of explosive liquid: high”. – RM

  2. How fast is a fast enough response? Richard Bejtlich asks an age-old question: how quickly should incidents be responded to? When he ran a response team the mandate was detection and mitigation in less than an hour. And this was a huge company, staffed to meet that service level. They had processes and tools to provide that kind of response. The fact is you want to be able to respond as quickly as you are staffed. If you have 2 people and a lot of attack surface, it may not be realistic to respond in an hour. If senior management is okay with that, who are you to argue? But that’s not my pet peeve. It’s the folks who think they need to buy real-time alerts when they aren’t staffed to investigate and remediate. If you have a queue of stuff to validate from your security monitors, then getting more alerts faster doesn’t solve any problems. It only exacerbates them. So make sure your tools are aligned with your processes, which are aligned with your staffing level and expertise. Or see your alerts fall on the floor, whether you are a target or not. – MR

  3. Positive reviews: What do you do if you think the software you’re using might have been compromised by hostile third parties? You could review the source code to see if it’s clean. It’s openness that encouraged enterprises to trust non-commercial products, right? But what if it’s a huge commercial distribution, and not open source? If you are talking about Microsoft’s or Apple’s OS code, not only is it extremely tough (like, impossible) to get access, but any effort to review the code would be monstrous and not feasible. In what I believe is unprecedented access, China has gotten the okay to search Apple’s software for back doors to give them confidence that no foreign power has manipulated the code. But this won’t be limited to code – it includes an investigation of build and delivery processes as well, to ensure that substitutions don’t occur along the way. A likely – and very good – outcome for Apple (given the amount of business they do in China), and the resulting decreased pressure from various governments to insert backdoors into the software. – AL

  4. Sec your aaS: One weird part of our business that has cropped up in the past year is working more with SaaS companies who actually care about security. Some big names, many smaller ones, all realizing they are a giant target for every attacker. But I’d have to say these SaaS providers are the minority. Most just don’t have money in the early stages (when it’s most important to build in security) to drop the cash for someone like me to walk in the door. So I enjoyed seeing Bessemer Venture Partners publish a startup security guide. More VCs and funds should provide this kind of support, because their investment goes poof if their companies suffer a major data loss. Or, you know, hire us to do it. – RM

  5. You fix it: It’s shocking that Chip and PIN cards, a technology proven to drastically reduce fraud rates in dozens of other countries, have not been widely adopted in the US. But it’s really sad when the US government beats the banks to market: The US is rolling out Chip and PIN cards for all federal employees this year to promote EMV compliant cards and usage in the US. Chips alleviate card cloning attacks and PINs thwart use of stolen cards. In the EU adoption of Chip and PIN has virtually eliminated card-present fraud. But the people who would benefit the most – banks – don’t bear the costs of deploying and servicing Chip and PIN; issuers and merchants do. So each party acts in its own best interest. Leading by example is great, but if the US government wanted to really promote Chip and PIN, they would help broker (or mandate) a deal among these stakeholders to fix the systemic problem. – AL

  6. Same problem. Different technology… During his day job as a Gartner analyst, Anton gets the same questions over and over again. Both Rich and I know that situation very well. He posted about folks now asking for security analytics, but really wonders whether they just want a SIEM that works. That is actually the wrong question. What customers want are security alerts that help them do their jobs. If their SIEM provided it they wouldn’t be looking at shiny new technologies like big data security analytics and other buzzword-friendly new products. Customers don’t care what you call it, they care about outcomes – which is that they have no idea which alerts matter. But that’s Vendor 101: if the existing technology doesn’t solve the problem, rename the category and sell hope to customers all over again. And the beat goes on. Now back on my anti-cynicism meds. – MR

—Mike Rothman

Tuesday, January 27, 2015

Applied Threat Intelligence: Use Case #2, Incident Response/Management

By Mike Rothman

As we continue with our Applied Threat Intelligence series, let us now look at the next use case: incident response/management. Similar to the way threat intelligence helps with security monitoring, you can use TI to focus investigations on the devices most likely to be impacted, and help to identify adversaries and their tactics to streamline response.

TI + IR/M

As in our last post, we will revisit the incident response and management process, and then figure out which types of TI data can be most useful and where.

Threat Intelligence and Incident Response

You can get a full description of all the process steps in our full Leveraging TI in Incident Response/Management paper.

Trigger and escalate

The incident management process starts with a trigger kicking off a response, and the basic information you need to figure out what’s going on depends on what triggered the alert. You may get alerts from all over the place, including monitoring systems and the help desk. But not all alerts require a full incident response – much of what you deal with on a day-to-day basis is handled by existing security processes.

Where do you draw the line between a full response and a cursory look? That depends entirely on your organization. Regardless of the criteria you choose, all parties (including management, ops, security, etc.) must be clear on which situations require a full investigation and which do not, before you can decide whether to pull the trigger. Once you escalate, an appropriate resource is assigned and triage begins.

Triage

Before you do anything, you need to define accountabilities within the team. That means specifying the incident handler and lining up resources based on the expertise needed. Perhaps you need some Windows gurus to isolate a specific vulnerability in XP. Or a Linux jockey to understand how the system configurations were changed. Every response varies a bit, and you want to make sure you have the right team in place.

As you narrow down the scope of data needing analysis, you might filter on the segments attacked or logs of the application in question. You might collect forensics from all endpoints at a certain office, if you believe the incident was contained. Data reduction is necessary to keep the data set to investigate manageable.

Analyze

You may have an initial idea of who is attacking you, how they are doing it, and their mission based on the alert that triggered the response, but now you need to prove that hypothesis. This is where threat intelligence plays a huge role in accelerating your response. Based on indicators you found you can use a TI service to help identify a potentially responsible party, or more likely a handful of candidates. You don’t need legal attribution, but this information can help you understand the attacker and their tactics.

Then you need to size up and scope out the damage. The goal here is to take the initial information provided and supplement it quickly to determine the extent and scope of the incident. To determine scope dig into the collected data to establish the systems, networks, and data involved. Don’t worry about pinpointing every affected device at this point – your goal is to size the incident and generate ideas for how best to mitigate it. Finally, based on the initial assessment, use your predefined criteria to decide whether a formal investigation is in order. If yes, start thinking about chain of custody and using some kind of case management system to track the evidence.

Quarantine and image

Once you have a handle (however tenuous) on the situation you need to figure out how to contain the damage. This usually involves taking the device offline and starting the investigation. You could move it onto a separate network with access to nothing real, or disconnect it from the network altogether. You could turn the device off. Regardless of what you decide, do not act rashly – you need to make sure things do not get worse, and avoid destroying evidence. Many malware kits (and attackers) will wipe a device if it is powered down or disconnected from the network, so be careful.

Next you take a forensic image of the affected devices. You need to make sure your responders understand how the law works in case of prosecution, especially what provides a basis for reasonable doubt in court.

Investigate

All this work is really a precursor to the full investigation, when you dig deep into the attack to understand what exactly happened. We like timelines to structure your investigation, as they help you understand what happened and when. Start with the initial attack vector and follow the adversary as they systematically moved to achieve their mission. To ensure a complete cleanup, the investigation must include pinpointing exactly which devices were affected and reviewing exfiltrated data via full packet capture from perimeter networks.

It turns out investigation is more art than science, and you will never actually know everything, so focus on what you do know. At some point a device was compromised. At another subsequent point data was exfiltrated. Systematically fill in gaps to understand what the attacker did and how. Focus on completeness of the investigation – a missed compromised device is sure to mean reinfection somewhere down the line. Then perform a damage assessment to determine (to the degree possible) what was lost.

Mitigation

There are many ways to ensure the attack doesn’t happen again. Some temporary measures include shutting down access to certain devices via specific protocols or locking down traffic in and out of critical servers. Or possibly blocking outbound communication to certain regions based on adversary intelligence. Also consider more ‘permanent’ mitigations, such as putting in place a service or product to block denial of service attacks.

Once you have a list of mitigation activities you marshal operational resources to work through it. We favor remediating affected devices in one fell swoop (big bang), rather than incremental cleaning/reimaging. We have found it more effective to eradicate the adversary from your environment as quickly as possible because a slow cleanup provides opportunity for them to dig deeper.

The mitigation is complete once you have halted the damage and regained the ability to continue operations. Your environment may not be pretty as you finish the mitigation, with a bunch of temporary workarounds to protect information and make sure devices are no longer affected. Make sure to always favor speed over style because time is of the essence.

Clean up

Now take a step back and clean up any disruptions to normal business operations, making sure you are confident that particular attack will never happen again. Incident managers focus on completing the investigation and cleaning out temporary controls, while Operations handles updating software and restoring normal operations. This could mean updating patches on all systems, checking for and cleaning up malware, restoring systems from backup and bringing them back up to date, etc.

Postmortem

Your last step is to analyze the response process itself. What can you identify as opportunities for improvement? Should you change the team or your response technology (tools)? Don’t make the same mistakes again, and be honest with yourselves about what needs to improve.

You cannot completely prevent attacks, so the key is to optimize your response process to detect and manage problems as quickly and efficiently as possible, which brings us full circle back to threat intelligence. You also need to learn about your adversary during this process. You were attacked once and will likely be attacked again. Use threat intelligence to drive the feedback loop and make sure your controls change as often as needed to be ready for adversaries.

Useful TI

Now let’s delve into collecting the external data that will be useful to streamline investigation. This involves gathering threat intelligence, including the following types:

  • Compromised devices: The most actionable intelligence you can get is a clear indication of compromised devices. This provides an excellent place to begin investigation and manage your response. There are many ways you might conclude a device is compromised. The first is clear indicators of command and control traffic coming from the device, such as DNS requests whose frequency and content indicate a domain generating algorithm (DGA) to locate botnet controllers. Monitoring network traffic from the device can also catch files or other sensitive data being transmitted, indicating exfiltration or a remote access trojan.
  • Malware indicators: You can build a lab and perform both static and dynamic analysis of malware samples to identify specific indications of how malware compromises devices. This is a major commitment (as we described in Malware Analysis Quant) – thorough and useful analysis requires significant investment, resources, and expertise. The good news is that numerous commercial services now offer those indicators in formats you can use to easily search through collected security data.
  • Adversary networks: IP reputation data can help you determine the extent of compromise, especially if it is broken up into groups of adversaries. If during your initial investigation you find malware typically associated with Adversary A, you can look for traffic going to networks associated with that adversary. Effective and efficient response requires focus, and knowing which devices may have been compromised in a single attack helps isolate and dig deeper into that attack.
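
The DGA detection mentioned under compromised devices often boils down to simple heuristics. Here is a rough sketch that flags DNS labels with high character entropy — a common (if imperfect) signal of algorithmically generated domains. The length and entropy thresholds are assumptions that would need tuning against real traffic, and a production detector would also weigh query frequency and NXDOMAIN rates:

```python
# Rough DGA heuristic: long DNS labels with high character entropy look
# machine-generated. Thresholds here are illustrative assumptions only.
import math
from collections import Counter

def entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag domains whose first label is long and high-entropy."""
    label = domain.split(".")[0].lower()
    return len(label) >= 10 and entropy(label) >= threshold
```

A heuristic like this is a triage aid, not proof of compromise — it points the responder at devices worth a closer look.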

Given the reality of scarce resources on the security team, many organizations select a commercial provider to develop and provide this threat intelligence, or leverage data provided as part of a product or service. Stand-alone threat intelligence is typically packaged as a feed for direct integration into incident response/monitoring platforms. Wrapping it all together produces the process map above. This map encompasses profiling the adversary, collecting intelligence, analyzing threats, and then integrating threat intelligence into incident response.

Action requires automation

The key to making this entire process run is automation. We talk about automation a lot these days, with good reason. Things happen too quickly in technology infrastructure to do much of anything manually, especially in the heat of an investigation. You need to pull threat intelligence in a machine-readable format, and pump it into an analysis platform without human intervention.
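
As a sketch of what “machine-readable without human intervention” might look like, here is a minimal normalizer for a hypothetical JSON indicator feed (real-world feeds are often delivered as STIX over TAXII). The field names and indicator types are assumptions:

```python
# Minimal sketch of ingesting a machine-readable indicator feed. The
# JSON schema (type/value fields) is a hypothetical stand-in for
# whatever your provider actually delivers (often STIX/TAXII).
import json

SUPPORTED_TYPES = {"ip", "domain", "sha256"}

def load_indicators(feed_json: str):
    """Normalize feed entries into (type, value) pairs for the analysis
    platform, dropping indicator types we cannot act on."""
    entries = json.loads(feed_json)
    return [(e["type"], e["value"]) for e in entries
            if e.get("type") in SUPPORTED_TYPES]
```

The normalized pairs can then be pushed straight into the response platform’s search index, so an investigator can sweep collected data for matches without anyone hand-copying indicators.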

But that’s not all. In our next post we will discuss how to use TI within active controls to proactively block attacks. What? That was pretty difficult to write, given our general skepticism about really preventing attacks, but new technology is beginning to make a difference.

—Mike Rothman