

Tuesday, January 14, 2014

Firestarter: Crisis Communications

By Rich

Okay, we have content in this thing. We promise. But we can’t stop staring at our new title video sequence. I mean, just look at it!

This week Rich, Mike, and Adrian discuss Target, Snapchat, RSA, and why no one can get crisis communications correct.

Sorry we hit technical difficulties with the live Q&A Friday, but we think we have the kinks worked out (I’d blame Mike if I were inclined to point fingers). Our plan is to record Friday again – keep an eye on our Google+ page for the details.


Monday, July 22, 2013

Apple Developer Site Breached

By Rich

From CNet (and my inbox, as a member of the developer program):

Last Thursday, an intruder attempted to secure personal information of our registered developers from our developer website. Sensitive personal information was encrypted and cannot be accessed, however, we have not been able to rule out the possibility that some developers’ names, mailing addresses, and/or email addresses may have been accessed. In the spirit of transparency, we want to inform you of the issue. We took the site down immediately on Thursday and have been working around the clock since then.

One of my fellow TidBITS writers noted the disruption on our staff list after the site had been down for over a day with no word. I suspected a security issue (and said so), in large part due to Apple’s complete silence – even more than usual. But until they sent out this notification, there were no facts and I don’t believe in speculating publicly on breaches without real information.

Three key questions remain:

  1. Were passwords exposed?
  2. If so, how were they encrypted/protected? A proper password hash, or something insecure for this purpose, such as plain SHA-256?
  3. Were any Apple Developer ID certificates exposed?

Those are the answers that will let developers assess their risk. At this point assume names, emails, and addresses are in the hands of attackers, and could be used for fraud, phishing, and other attacks.
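To illustrate question 2: a fast general-purpose hash like SHA-256 lets an attacker test billions of guesses per second against stolen hashes, while a purpose-built password scheme makes every guess expensive. Here is a minimal sketch using Python's standard library — PBKDF2 stands in for whatever Apple actually uses, which we don't know:

```python
import hashlib
import hmac
import os

def fast_hash(password):
    # Plain, unsalted SHA-256: identical passwords produce identical
    # hashes, and each guess costs the attacker one cheap operation
    return hashlib.sha256(password.encode()).hexdigest()

def slow_hash(password, salt=None, iterations=200_000):
    # PBKDF2 with a random per-user salt: each guess now costs the
    # attacker ~200,000 hash operations, and rainbow tables are useless
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, expected, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking match length via timing
    return hmac.compare_digest(candidate, expected)
```

If Apple stored passwords the `fast_hash` way, assume they will be cracked; if they used something like the second approach, most users have time to rotate.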


Wednesday, March 13, 2013

A Brief Privacy Breach History Lesson

By Rich

The big ChoicePoint breach of 2004 was the result of criminals creating false business accounts and running credit reports on hundreds of thousands of customers (probably). Every major credit/background company has experienced this kind of abuse-of-service breach going back decades – just look at the Dataloss DB.

Doxxing isn’t necessarily hacking, even when technology is involved. All this has happened before, and all this will happen again. Even very smart humans are fooled every day.


Wednesday, February 27, 2013

Bit9 Details Breach

By Rich

Bit9 released more details of how they were hacked.

The level of detail is excellent, and there seems to be minimal or no spin. There are a couple of additional details it might be valuable to see (specifics of the SQL injection and how user accounts were compromised), but overall the post is clear, with a ton of specifics on some of what they are finding.

More security vendors should be this open and disclose at least this level of detail – especially since we know many of you cover up incidents. When we are eventually breached, I will strive to disclose all the technical details.

I gave Bit9 some crap when the breach first happened (due to some of their earlier marketing), but I can’t fault how they are now opening up.


Tuesday, February 26, 2013

When is a Hack a Breach?

By Rich

As the hubbub over Apple, Twitter, and Facebook being hacked with the Java flaw slowly ebbs, word hit late last week that Microsoft was also hit in the attack. Considering the nature of the watering hole attack, odds are that many, many other companies have been affected.

This raises the question: does it matter? The headlines screamed “Apple and Facebook Hacked”, and technically that’s true. But as I wrote in the Data Breach Triangle, it isn’t really a breach unless the attacker gets in, steals or damages something, and gets out. Lockheed uses the same principle with its much-sexier-named Kill Chain.

Indications are that Apple and Microsoft, and possibly Facebook, all escaped unscathed. Some developers’ computers were exploited, the bad guys got in, they were detected, and nothing bad happened. I do not know if that was the full scope of the exploits, but it isn’t unrealistic, and successful hacks that aren’t full-on breaches happen all the time.

I care about outcomes. And someone bypassing some controls but being stopped is what defense in depth is all about. But you rarely see that in the headlines, or even in many of our discussions in the security world.

This is exactly why I didn’t write about the hacks here earlier – from what I could tell, some of the vendors disclosed only because they knew it would probably come out once the first disclosure happened, because their use of the site was public.


Friday, February 15, 2013

Facebook Hacked with Java Flaw

By Rich

It’s Friday, so here is a quick link to The Verge’s latest. Developers infected via Java in the browser from a developer info site.

You get the hint? Do we need to say anything else?

Didn’t think so.


Friday, February 08, 2013

Karma is a Bit9h

By Rich

First reported by Brian Krebs (as usual), security vendor Bit9 was compromised and used to infect their customers.

But earlier today, Bit9 told a source for KrebsOnSecurity that their corporate networks had been breached by a cyberattack. According to the source, Bit9 said they’d received reports that some customers had discovered malware inside of their own Bit9-protected networks, malware that was digitally signed by Bit9’s own encryption keys.

They posted more details on their site after notifying customers:

In brief, here is what happened. Due to an operational oversight within Bit9, we failed to install our own product on a handful of computers within our network. As a result, a malicious third party was able to illegally gain temporary access to one of our digital code-signing certificates that they then used to illegitimately sign malware. There is no indication that this was the result of an issue with our product. Our investigation also shows that our product was not compromised.

We simply did not follow the best practices we recommend to our customers by making certain our product was on all physical and virtual machines within Bit9.

Our investigation indicates that only three customers were affected by the illegitimately signed malware. We are continuing to monitor the situation. While this is an incredibly small portion of our overall customer base, even a single customer being affected is clearly too many.

No sh**.

Bit9 is a whitelisting product. This sure is one way to get around it, especially since customers cannot block Bit9-signed binaries even if they want to (well, not using Bit9, at least). This could mean the attackers had good knowledge of the Bit9 product and then used the signed malware to attack only Bit9 customers. The scary part? Attackers were able to enumerate who was using Bit9 and target them. But this kind of tool should be hard to discover running in the first place, unless you are already in the front door. This enumeration could have happened either before or after the attack on Bit9, and that’s a heck of an interesting question we probably won’t ever get an answer to.

This smells very similar to the Adobe code signing compromise back in September, except that was clearly far less targeted.

Every security product adds to the attack surface. Every security vendor is now an extended attack surface for all their clients. This has happened before, and I suspect will only grow, as Jeremiah Grossman explained so well.

All the security vendors now relishing the fall of a rival should instead poop their pants and check their own networks.

Oh, and courtesy our very own Gattaca, let’s not forget this.


Saturday, February 02, 2013

Twitter Hacked

By Adrian Lane and Rich

Twitter announced this evening that some 250k user accounts were compromised.

This week, we detected unusual access patterns that led to us identifying unauthorized access attempts to Twitter user data. We discovered one live attack and were able to shut it down in process moments later. However, our investigation has thus far indicated that the attackers may have had access to limited user information – usernames, email addresses, session tokens and encrypted/salted versions of passwords – for approximately 250,000 users.

Passwords and session tokens were reset to contain the problem. It is likely that personal information, including direct messages, was exposed. The post asks users to use strong passwords of at least 10 characters, and requests that they disable Java in the browser, which together provide a pretty fair indication of how the attacks were conducted. Disable Java in the browser – where have you heard that before? We will update this post as we learn more.

Update by Rich: Adrian and I both posted this within minutes. Here is my comment:

Also from the post:

This attack was not the work of amateurs, and we do not believe it was an isolated incident. The attackers were extremely sophisticated, and we believe other companies and organizations have also been recently similarly attacked. For that reason we felt that it was important to publicize this attack while we still gather information, and we are helping government and federal law enforcement in their effort to find and prosecute these attackers to make the Internet safer for all users.

Twitter has a hell of a good security team with some serious firepower, including Charlie Miller.

–Adrian Lane and Rich

Monday, January 21, 2013

Don’t respond to a breach like this

By Rich

A student who legitimately reported a security breach was expelled from college for checking to see whether the hole was fixed.

(From the original article):

Ahmed Al-Khabaz, a 20-year-old computer science student at Dawson and a member of the school’s software development club, was working on a mobile app to allow students easier access to their college account when he and a colleague discovered what he describes as “sloppy coding” in the widely used Omnivox software which would allow “anyone with a basic knowledge of computers to gain access to the personal information of any student in the system, including social insurance number, home address and phone number, class schedule, basically all the information the college has on a student.”

Two days later, Mr. Al-Khabaz decided to run a software program called Acunetix, designed to test for vulnerabilities in websites, to ensure that the issues he and Mija had identified had been corrected. A few minutes later, the phone rang in the home he shares with his parents.

It was the president of the SaaS company, who forced him to sign an NDA under threat of reporting him to law enforcement. He was then expelled.

Reactions like this have a chilling effect. They motivate researchers not to report flaws at all, to release them publicly, or to sell or give them to someone who will use them maliciously. None of those outcomes is good. Even if it pisses you off, even if you think a line was crossed, if someone finds a flaw and tries to work with you to protect customers and users rather than exploiting it, you need to engage with them positively. No matter how much it hurts.

Because you sure as heck don’t want to end up on the pointy end of an article like this.


Friday, January 27, 2012

Implementing DLP: Picking Priorities and a Deployment Process

By Rich

At this point you should be in the process of cleaning your directory servers, with your incident handling process outlined in case you find any bad stuff early in your deployment. Now it’s time to determine your initial priorities to figure out whether you want to start with the Quick Wins process or jump right into full deployment.

Most organizations have at least a vague sense of their DLP priorities, but translating them into deployment priorities can be a bit tricky. It’s one thing to know you want to use DLP to comply with PCI, but quite another to know exactly how to accomplish that.

On the right is an example of how to map out high-level requirements into a prioritized deployment strategy. It isn’t meant to be canonical, but should provide a good overview for most of you. Here’s the reasoning behind it:

DLP Priorities

  • Compliance priorities depend on the regulation involved. For PCI your best bet is to use DLP to scan storage for Primary Account Numbers. You can automate this process and use it to define your PCI scope and reduce assessment costs. For HIPAA the focus often starts with email to ensure no one is sending out unencrypted patient data. The next step is often to find where that data is stored – both in departments and on workstations. If we were to add a third item it would probably be web/webmail, because that is a common leak vector.
  • Intellectual Property Leaks tend to be either document based (engineering plans) or application/database based (customer lists). For documents – assuming your laptops are already encrypted – USB devices are usually one of the top concerns, followed by webmail. You probably also want to scan storage repositories, and maybe endpoints, depending on your corporate culture and the kind of data you are concerned about. Email turns out to be a less common source of leaks than the other channels, so it’s lower on the list. If the data comes out of an application or database then we tend to worry more about network leaks (an insider or an attacker), webmail, and then storage (to figure out all the places it’s stored and at risk). We also toss in USB above email, because all sorts of big leaks have shown USB is a very easy way to move large amounts of data.
  • Customer PII is frequently exposed by being stored where it shouldn’t be, so we start with discovery again. Then, from sources such as the Verizon Data Breach Investigations Report and the Open Security Foundation DataLossDB we know to look at webmail, endpoints and portable storage, and lastly email.
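As a concrete example of the PCI item above, a storage scan for Primary Account Numbers typically combines a pattern match with a Luhn checksum to cut false positives – a random 16-digit string is usually not a card number. A rough sketch (real DLP engines use far more refined detection, including issuer prefixes and proximity keywords):

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces or dashes
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number):
    # Luhn checksum: double every second digit from the right,
    # subtract 9 from results over 9, and the total must end in 0
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text):
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Point a crawler at file shares and run `find_pans` over the contents, and you have the crude core of the discovery scan that defines your PCI scope.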

You will need to mix and match these based on your own circumstances – and we highly recommend using data-derived reports like the ones listed above to help align your priorities with evidence, rather than operating solely on gut feel. Then adapt based on what you know about your own organization – which may include things like “the CIO said we have to watch email”.

If you followed our guidance in Understanding and Selecting a DLP Solution you can feed the information from that worksheet into these priorities.

Now you should have a sense of what data to focus on and where to start. The next step is to pick a deployment process.

Here are some suggestions for deciding which to start with. The easy answer is to almost always start with the Quick Wins process…

  • Only start with the full deployment process if you have already prioritized what to protect, have a good sense of where you need to protect it, and believe you understand the scope you are dealing with. This usually means a specific compliance or IP protection initiative with well-defined data and a well-defined scope (e.g., where to look for the data, or where to monitor and/or block it).
  • For everyone else we suggest starting with the Quick Wins process. It will highlight your hot spots and help you figure out where to focus your full deployment.

We’ll discuss each of those processes in more depth later.


Thursday, June 02, 2011

A Different Take on the Defense Contractor/RSA Breach Miasma

By Rich

I have been debating writing anything on the spate of publicly reported defense contractor breaches. It’s always risky to talk about breaches when you don’t have any direct knowledge about what’s going on. And, to be honest, unless your job is reporting the news it smells a bit like chasing a hearse.

But I have been reading the stories, and even talking to some reporters (to give them background info – not pretending I have direct knowledge). The more I read, and the more I research, the more I think the generally accepted take on the story is a little off.

The storyline appears to be that RSA was breached, seed tokens for SecurID likely lost, and those were successfully used to attack three major defense contractors. Also, the generic term “hackers” is used instead of directly naming any particular attacker.

I read the situation somewhat differently:

  • I do believe RSA was breached and seeds lost, which could allow that attacker to compromise SecurID if they also know the customer, serial number of the token, PIN, username, and time sync of the server. Hard, but not impossible. This is based on the information RSA has released to their customers (the public pieces – again, I don’t have access to NDA info).
  • In the initial release RSA stated this was an APT attack. Some people believe that simply means the attacker was sophisticated, but the stricter definition refers to one particular country. I believe Art Coviello was using the strict definition of APT, as that’s the definition used by the defense and intelligence industries which constitute a large part of RSA’s customer base.
  • By all reports, SecurIDs were involved in the defense contractor attacks, but Lockheed in particular stated the attack wasn’t successful and no information was lost. If we tie this back to RSA’s advice to customers (update PINs, monitor SecurID logs for specific activity, and watch for phishing) it is entirely reasonable to surmise that Lockheed detected the attack and stopped it before it got far, or even anywhere at all. Several pieces need to come together to compromise SecurID, even if you have the customer seeds.
  • The reports of remote access being cut off seem accurate, and are consistent with detecting an attack and shutting down that vector. I’d do the same thing – if I saw a concerted attack against my remote access by a sophisticated attacker I would immediately shut it down until I could eliminate that as a possible entry point.
  • Only the party which breached RSA could initiate these attacks. Countries aren’t in the habit of sharing that kind of intel with random hackers, criminals, or even allies.
  • These breach disclosures have a political component, especially in combination with Google revealing that they stopped additional attacks emanating from China. These cyberattacks are a complex geopolitical issue we have discussed before. The US administration just released an international strategy for cybersecurity. I don’t think these breaches would have been public 3 years ago, and we can’t ignore the political side when reading the reports. Billions – many billions – are in play.

In summary: I do believe SecurID is involved, I don’t think the attacks were successful, and it’s only prudent to yank remote access and swap out tokens. Politics are also heavily in play and the US government is deeply involved, which affects everything we are hearing, from everybody.

If you are an RSA customer you need to ask yourself whether you are a target for international espionage. All SecurID customers should change out PINs, inform employees to never give out information about their tokens, and start looking hard at logs. If you think you’re on the target list, look harder. And call your RSA rep.

But the macro point to me is whether we just crossed a line. As I wrote a couple months ago, I believe security is a self-correcting system. We are never too secure because that’s more friction than people will accept. But we are never too insecure (for long at least) because society stops functioning. If we look at these incidents in the context of the recent Mac Defender hype, financial attacks, and Anonymous/Lulz events, it’s time to ask whether the pain is exceeding our thresholds.

I don’t know the answer, and I don’t think any of us can fully predict either the timing or what happens next. But I can promise you that it doesn’t translate directly into increased security budgets and freedom for us security folks to do whatever we want. Life is never so simple.


Thursday, March 24, 2011

Crisis Communications

By Rich

I realize that I have a tendency to overplay my emergency services background, but it does provide me with some perspective not common among infosec professionals. One example is crisis communications. While I haven’t gone through all the Public Information Officer (PIO) training, basic crisis communications is part of several incident management classes I have completed. I have also been involved in enough major meatspace and IT-related incidents to understand how the process goes.

In light of everything from HBGary, to TEPCO, to RSA, to Comodo, it’s worth taking a moment to outline how these things work.

And I don’t mean how they should go, but how they really play out. Mostly this is because those making the decisions at the executive level a) have absolutely no background in crisis communications, b) think they know better than their own internal experts, and c) for some strange reason tend to think they’re different and special and not bound by history or human nature.

You know – typical CEOs.

These people don’t understand that the goal of crisis communications is to control the conversation through honesty and openness, while minimizing damage first to the public, then second to your organization. Reversing those priorities almost always results in far worse impact to your organization – because, of course, the public eventually figures out you put them second and will make you pay for it later.

Here’s how incidents play out:

  1. Something bad happens. The folks in charge first ask, “who knows” to figure out whether they can keep it secret.
  2. They realize it’s going to leak, or already has, so they try to contain the information as much as possible. Maybe they do want to protect the public or their customers, but they still think they should keep at least some of it secret.
  3. They issue some sort of vague notification that includes phrases like, “we take the privacy/safety/security of our customers very seriously”, and “to keep our customers safe we will not be releasing further details until…”, and so on. Depending on the nature of the incident, by this point either things are under control and more information would not increase risk to the public, or the attack was extremely sophisticated.
  4. The press beats the crap out of them for not releasing complete information.
  5. Competitors beat the crap out of them because they can, even though they are often in worse shape and really just lucky it didn’t happen to them.
  6. Customers wait and see. They want to know more to make a risk decision and are too busy dealing with day to day stuff to worry about anything except the most serious of incidents. They start asking questions.
  7. Pundits create more FUD so they can get on TV or in the press. They don’t know more than anyone else, but they focus on worst-case scenarios so it’s easier to get headlines.
  8. The next day (or within a few hours, depending on the severity) customers start asking their account reps questions.
  9. The folks in charge realize they are getting the crap beaten out of them. They issue the second round of information, which is nearly as vague as the first, in the absurd belief that it will shut people up. This is usually when the problem gets worse.
  10. Now everyone beats the crap out of the company. They’ve lost control of the news cycle, and are rapidly losing trust thanks to being so tight-lipped.
  11. The company trickles out dribs of essentially worthless information under the mistaken belief that they are protecting themselves or their customers, forgetting that there are smart people out there. This is usually where they use the phrase (in the security world) “we don’t want to publish a roadmap for hackers/insider threats” or (in the rest of the world), “we don’t want to create a panic”.
  12. Independent folks start investigating on their own and releasing information that may or may not be accurate, but everyone gloms onto it because there is no longer any trust in the “official” source.
  13. The folks in charge triple down and decide not to say anything else, and to quietly remediate. This never works – all their customers tell their friends and news sources what’s going on.
  14. Next year’s conference presentations or news summaries all dissect how badly the company screwed up.

The problem is that too much of ‘communications’ becomes a forlorn attempt to control information. If you don’t share enough information you lose control, because the rest of the world a) needs to know what’s going on and b) will fill in the gaps as best they can. And the “trusted” independent sources are press and pundits who thrive on hyperbole and worst-case scenarios.

Here’s what you should really do:

  1. Go public as early as possible with the most accurate information possible. On rare occasion there are pieces that should be kept private, but treat this like packing for a long trip – make a list, cut it in half, then cut it in half again, and that’s what you might hold onto.
  2. Don’t assume your customers, the public, or potential attackers are idiots who can’t figure things out. We all know what’s going on with RSA – they don’t gain anything by staying quiet. The rare exception is when things are so spectacularly fucked that even the collective creativity of the public can’t imagine how bad things are… then you might want them to speculate on a worst case scenario that actually isn’t.
  3. Control the cycle by being the trusted authority. Don’t deny, and be honest when you are holding details back. Don’t dribble out information and hope it will end there – the more you can release earlier, the better, since you then cut speculation off at the knees.
  4. Update constantly. Even if you are repeating yourself. Again, don’t leave a blank canvas for others to fill in.
  5. Understand that everything leaks. Again, better for you to provide the information than an anonymous insider.
  6. Always, always put your customers and the public first. If you don’t, they’ll know – either during the incident or later.
  7. Don’t lie or blame others, and don’t try to pretend you didn’t make mistakes. Don’t be like Comodo, trying to blame Iran and lumping yourself into the same breach as RSA.

I don’t care if it’s the Tylenol scare, a security breach, or a nuclear meltdown – your job is to aggressively communicate so you don’t lose control of the cycle. Give people the information they need to make appropriate risk decisions, and it’s okay to keep certain details private… but only if they really protect someone other than yourself (e.g., sometimes you can’t say anything for legal reasons, but be honest when you can’t). Acting like a patronizing parent never seems to work out as well as you hope.

RSA could have controlled the cycle… especially since they were disclosing on their own terms, rather than responding to external discovery. But while they probably thought they were being responsible and releasing the right amount of information, they released just enough to kick the spin into overdrive; create doubt among their customers and the public; and allow pundits, press, and competitors to take control of the cycle… even though none of them has all the information.

There are exceptions to these rules. But not for you.


Monday, March 21, 2011

RSA Releases (Almost) More Information

By Rich

As this is posting, RSA is releasing a new SecureCare note and FAQ for their clients (Login required). This provides more specific prioritized information on what mitigations they recommend SecurID clients take.

To be honest they really should just come clean at this point. With the level of detail in the support documents it’s fairly obvious what’s going on. These notes are equivalent to saying, “we can’t tell you it’s an elephant, but we can confirm that it is large, grey, and capable of crushing your skull if you lay down in front of it. Oh yeah, and it has a trunk and hates mice.”

So let’s update what we know, what we don’t, what you should do, and the open questions from our first post:

What we know

Based on the updated information… not much we didn’t before.

But I believe RSA understands the strict definition of APT and isn’t using the term to indicate a random, sophisticated attack. So we can infer who the actor is – China – but RSA isn’t saying and we don’t have confirmation.

In terms of what was lost, the answer is, “an elephant” even if they don’t want to say so. This means either customer token records or something similar, and I can’t think of what else it could be. Here’s a quote from them that makes it almost obvious:

To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful attack, someone would need to have possession of all this information.

If it were a compromise of the authentication server software itself, that statement wouldn’t be accurate. Also, one of their top recommendations is to use long, complex PINs. They wouldn’t say that if the server was compromised, which means it pretty much has to be related to customer token records.

This also leads us to understand the nature of a potential attack. The attacker would need to know the username, password/PIN, and probably the individual assigned token. Plus they need some time and luck. While extremely serious for high-value targets, this does limit potential exposure. This also explains their recommendations on social engineering, hardening the authentication server, setting PIN lockouts, and checking logs for ongoing bad token/authentication requests.

I think his name is Babar.
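RSA has never published the SecurID algorithm, but a TOTP-style construction (in the spirit of RFC 6238, which is not what SecurID actually uses) illustrates why an attacker needs the per-token seed plus server time sync, on top of the username and PIN. Purely illustrative:

```python
import hashlib
import hmac
import struct
import time

def token_code(seed, timestamp=None, step=60, digits=6):
    # The displayed code is derived from a shared secret (the "seed")
    # and the current time window; without the seed, codes are
    # unpredictable, and without time sync, guesses expire in one step
    counter = int((timestamp if timestamp is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(seed, msg, hashlib.sha1).digest()
    # Standard HOTP dynamic truncation (RFC 4226)
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

An attacker holding the stolen seeds could compute `token_code` for any window, which is exactly why RSA’s mitigations pile on the factors they don’t have: long PINs, lockouts, and log monitoring for failed token guesses.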

What we don’t know

We don’t have any confirmation of anything at this point, which is frankly silly unless we are missing some major piece of the puzzle.

Until then it’s reasonable to assume a single sophisticated attacker (with a very tasty national cuisine), and compromise of token seeds/records. This reduces the target pool and means most people should be in good shape with the practices we previously recommended (updated below).

One big unknown is when this happened. That’s important, especially for high-value targets, as it could mean they have been under attack for a while, and opponents might have harvested some credentials via social engineering or other means already.

We also don’t know why RSA isn’t simply telling us what they lost. With all these recommendations it’s clear that the attacker still needs to be sophisticated to pull off more attacks with the SecurID data, and needs to have that data, which means customer risk is unlikely to increase if they reveal more.

This isn’t like a 0-day vulnerability, where merely knowing it’s out there is a path to exploitation. More information now will only reduce customer risk.

What you need to do

Here are our updated recommendations:

Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:

  1. Don’t panic. Although we don’t know a lot more, we have a strong sense of the attacker and the vulnerability. Most of you aren’t at risk if you follow RSA’s recommendations. Many of you aren’t on the target list at all.
  2. Talk to your RSA representative and pressure them for increased disclosure.
  3. Read the RSA SecureCare documentation. Among other things, it provides the specific things to look for in your logs.
  4. Let your users with SecurIDs know something is up and not to reveal any information about their tokens.
  5. Assume SecurID is no longer effective. Review passwords/PINs tied to SecurID accounts and make sure they are strong (if possible). If you change settings to use long PINs, you need to get an update script from RSA (depending on your product version) so the update pushes out properly.
  6. If you are a high-value target, force a password change for any accounts with privileges that could be seriously damaging (e.g., admins).
  7. Consider disabling accounts that don’t use a password or PIN.
  8. Set authentication attempt lockouts (3 tries to lock an account, or similar).
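The lockout recommendation in item 8 amounts to a simple failure counter. A minimal sketch, with an assumed three-strike threshold (the class and names are illustrative, not an RSA feature):

```python
# Minimal three-strike lockout counter; the threshold and storage are
# illustrative assumptions, not RSA's recommended settings.
class LockoutTracker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = {}      # username -> consecutive failed attempts
        self.locked = set()     # usernames requiring manual unlock

    def record_attempt(self, username, success):
        """Record an authentication attempt; return True if the account is locked."""
        if username in self.locked:
            return True
        if success:
            self.failures[username] = 0     # a good login resets the counter
            return False
        self.failures[username] = self.failures.get(username, 0) + 1
        if self.failures[username] >= self.max_failures:
            self.locked.add(username)
        return username in self.locked
```

In practice you would use your authentication server’s own lockout settings rather than custom code; the point is that repeated bad tokencodes lock the account before an attacker can grind through the remaining factor.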

The biggest change is a little more detail on what to look for, which supports our previous assumptions. It also reinforces my belief that their use of the term APT is accurate.

Open questions

I will add in my own answers where we have them:

  1. While we don’t need all the details, we do need to know something about the attacker to evaluate our risk. Can you (RSA) reveal more details? Not answered, but reading between the lines this looks like true APT.
  2. How is SecurID affected and will you be making mitigations public? Partially answered. More specific mitigations are now published, but we still don’t have full information.
  3. Are all customers affected or only certain product versions and/or configurations? Answered – see the SecurCare documentation, but it seems all current versions are affected.
  4. What is the potential vector of attack? Unknown, so we are still assuming it’s lost token records/seeds, which means the attacker needs to gather other information to successfully make an improper authentication request.
  5. Will you, after any investigation is complete, release details so the rest of us can learn from your victimization? Answered. An RSA contact told me they have every intention of getting as detailed as possible once this is all over.

And one remaining question:

  1. Will RSA need to reissue tokens, and if so what is the timing and process?

Hopefully this all helps. For most of you there isn’t a whole lot to do at this point, but some of you most definitely need to make changes and keep your eyes open.


Friday, March 18, 2011

How Enterprises Can Respond to the RSA/SecurID Breach

By Rich

We have gotten a bunch of questions about what people should do, so I thought I would expand more on the advice in our last post, linked below.

Since we don’t know for sure who compromised RSA, nor exactly what was taken, nor how it could be used, we can’t make an informed risk decision. If you are in a high-security/highly-targeted industry you probably need to make changes right away. If not, some basic precautions are your best bet.

Remember that SecurID is the second factor in a two-factor system… you aren’t stripped naked (unless you’re going through airport security). Assuming it’s completely useless now, here is what you can do:

  1. Don’t panic. We know almost nothing at this point, and thus all we can do is speculate. Until we know the attacker, what was lost, how SecurID was compromised (assuming it was), and the potential attack vector we can’t make an informed risk assessment.
  2. Talk to your RSA representative and pressure them for this information.
  3. Assume SecurID is no longer effective. Review passwords tied to SecurID accounts and make sure they are strong (if possible).
  4. If you are a high-value target, force a password change for any accounts with privileges that could be seriously damaging (e.g., admins).
  5. Consider disabling accounts that don’t use a password or PIN.
  6. Set password attempt lockouts (3 tries to lock an account, or similar).

I hope we’re wrong, but that’s the safe bet until we hear more. And remember, it isn’t like Skynet is out there compromising every SecurID-‘protected’ account in the world.


Thursday, March 17, 2011

**Updated** RSA Breached: SecurID Affected

By Rich

You will see this all over the headlines during the next days, weeks, and maybe even months. RSA, the security division of EMC, announced they were breached and suffered data loss.

Before the hype gets out of hand, here’s what we know, what we don’t, what you need to do, and some questions we hope are answered:

What we know

According to the announcement, RSA was breached in an APT attack (we don’t know if they mean China, but that’s well within the realm of possibility) and material related to the SecurID product was stolen.

The exact risk to customers isn’t clear, but there does appear to be some risk that the assurance of your two-factor authentication has been reduced.

RSA states they are communicating directly with customers with hardening advice. We suspect those details are likely to leak or become public, considering how many people use SecurID. I can also pretty much guarantee the US government is involved at this point.

From the announcement:

Our investigation has led us to believe that the attack is in the category of an Advanced Persistent Threat (APT). Our investigation also revealed that the attack resulted in certain information being extracted from RSA’s systems. Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack. We are very actively communicating this situation to RSA customers and providing immediate steps for them to take to strengthen their SecurID implementations.

What we don’t know

We don’t know the nature of the attack. They specifically referenced APT, which means it’s probably related to custom malware, which could have been introduced in a few different ways – a web application attack (SQL injection), email/web phishing, or physical access (e.g., an infected USB device – deliberate or accidental). Everyone will have their favorite pet theory, but right now none of us know cr** about what really happened. Speculation is one of our favorite pastimes, but largely meaningless other than as entertainment, until details are released (or leak).

We don’t know how SecurID is affected. This is a big deal, and the odds are just about 100% that this will leak… probably soon. For customers this is the most important question.

What you need to do

If you aren’t a SecurID customer… enjoy the speculation.

If you are, make sure you contact your RSA representative and find out if you are at risk, and what you need to do to mitigate that risk. How high a priority this is depends on how big a target you are – the Big Bad APT isn’t interested in all of you.

The letter’s wording might mean the attackers have a means to generate certain valid token values (probably only in certain cases). They would also need to compromise the password associated with that user. I’m speculating here, which is always risky, but that’s what I think we can focus on until we hear otherwise. So reviewing the passwords tied to your SecurID users might be reasonable.
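To see why lost seed records would matter, consider a TOTP-style sketch (again an assumption; SecurID’s actual algorithm is proprietary and differs): anyone holding the seed can compute the same code the token displays, for any time window.

```python
import hashlib
import hmac

# Illustrative TOTP-style derivation; not RSA's actual SecurID algorithm.
def code_from_seed(seed, at, interval=60):
    """Anyone holding the seed can derive the code for any time window."""
    step = int(at) // interval
    mac = hmac.new(seed, step.to_bytes(8, "big"), hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 1_000_000).zfill(6)
```

With a stolen seed record, the attacker computes the code the token would show, so only the user’s password/PIN stands between them and the account. Hence the advice to review those passwords.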

Open questions

  1. While we don’t need all the details, we do need to know something about the attacker to evaluate our risk. Can you (RSA) reveal more details?
  2. How is SecurID affected and will you be making mitigations public?
  3. Are all customers affected or only certain product versions and/or configurations?
  4. What is the potential vector of attack?
  5. Will you, after any investigation is complete, release details so the rest of us can learn from your victimization?

Finally – if you have a token from a bank or other provider, make sure you give them a few days and then ask them for an update.

If we get more information we’ll update this post. And sorry to you RSA folks… this isn’t fun, and I’m not looking forward to the day it’s our turn to disclose.

Update 19:20 PT: RSA let us know they filed an 8-K. The SecurCare document is linked here and the recommendations are a laundry list of security practices… nothing specific to SecurID. This is under active investigation and the government is involved, so they are limited in what they can say at this time. Based on the advice provided, I won’t be surprised if the breach turns out to be email/phishing/malware related.