Wednesday, February 04, 2015

Incite 2/4/2015: 30x32

By Mike Rothman

It was a pretty typical day. I was settled into my seat at Starbucks writing something or other. Then I saw the AmEx notification pop up on my phone. $240.45, Ben Sherman, on the card I use for Securosis expenses. Huh? Who’s Ben Sherman? Pretty sure my bookie’s name isn’t Ben. So using my trusty Google fu I saw they are a highbrow men’s clothier (nice stuff, BTW). But I didn’t buy anything from that store.

My well-worn, “Crap. My card number got pwned again.” process kicked in. Though I was far ahead of the game this time. I found the support number for Ben Sherman and left a message with the magic words, “blah blah blah fraudulent transaction blah blah,” and amazingly, I got a call back within 10 minutes. They kindly canceled the order (which saved them money) and gave me some details on the transaction.

AmEx on my phone

The merchandise was evidently ordered by a “Scott Rothman,” and it was to be shipped to my address. That’s why the transaction didn’t trigger any fraud alerts – the name was close enough and the billing and shipping addresses were legit. So was I getting punked? Then I asked what was ordered.

She said a pair of jeans and a shirt. For $250? Damn, highbrow indeed. When I inquired about the size, that was the kicker: 30 waist and 32 length on the jeans. 30x32. Now I’ve dropped some weight, but I think the last time I was in size 30 pants was third grade or so. And the shirt was a Small. I think I outgrew small shirts in second grade. Clearly the clothes weren’t for me. The IP address on the order traced to Cumming, GA – about 10 miles north of where I live – and they provided a bogus email address.

I am still a bit perplexed by the transaction – it’s not like the perpetrator would benefit from the fraud. Unless they were going to swing by my house to pick up the package when it was delivered by UPS. But they’ll never get the chance, thanks to AmEx, whose notification allowed me to cancel the order before it shipped. So I called up AmEx and asked for a replacement card. No problem – my new card will be in my hands by the time you read this.

The kicker was an email I got yesterday morning from AmEx. Turns out they already updated my card number in Apple Pay, even though I didn’t have the new card yet. So I could use my new card on my fancy phone and get a notification when I used it.

And maybe I will even buy some pants from Ben Sherman to celebrate my new card. On second thought, probably not – I’m not really a highbrow type…


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Applied Threat Intelligence

Network Security Gateway Evolution

Security and Privacy on the Encrypted Network

Newly Published Papers

Incite 4 U

  1. It’s about applying the threat intel: This post on the ThreatConnect blog highlights an important aspect that may get lost in the rush to bring shiny threat intelligence data to market. As lots of folks – notably Rick Holland and yours truly – have been saying for a while, it’s not about having the data; it’s about using it. The post points out that data is data. Without understanding how it can be applied to your security program, it’s just bits. That’s why my current series focuses on using threat intel within security monitoring, incident response, and preventative controls. Rick’s written a bunch of stuff making similar points, including this classic about how vendors always try to one-up each other. I’m not saying you need (yet another) ‘platform’ to aggregate threat intel, but you definitely need a strategy to make the best use of data within your key use cases. – MR

  2. Good enough: I enjoyed Gilad Parann-Nissany’s post on 10 Things You Need To Know about HIPAA Compliance in the Cloud as generic guidance for PHI security in the cloud. But his 10th point really hits the mark: HIPAA is not feared at all. The vast majority of HIPAA fines have been for physical disclosure of PHI, not electronic. While a handful of firms go out of their way to ensure their cloud infrastructure is secure (which we applaud), they aren’t doing security because of HIPAA. Few cloud providers go beyond encrypting data stores (whatever that means) and securing public network connections, because that’s good enough to avoid major fines. Sometimes “good enough” is just that. – AL

  3. 20 Questions: Over the years I have been in management, and at Gartner part of a hiring committee, at various times. I have not, however, had to really interview for most of my jobs (at least not normal interviews). The most interesting situation was the hiring process at the FBI. That interview was so structured that the agents had to go through special training just to give it. They tested me not only on answering the questions, but answering them in the proper way, as instructed at the beginning, in the proper time window. (I passed, but was cut later either due to budget reductions at the time, or some weirdness in my background. Even though I eliminated all witnesses, I swear!). But I have always struggled a bit at getting technical hires right, especially in security. The best security pros I know have broad knowledge and an ability to assimilate and correlate multiple kinds of information. I really like Richard Bejtlich’s hiring suggestion: show them a con video, and have them explain the ins and outs and interpret it. That sure beats the programming tests I used when running dev shops, because it gives you great insight into their thought process and what they think is important. – RM

  4. Mixed results: IBM is touting a technology called Identity Mixer as a way for users to both conceal sensitive attributes of their identity, and as a secure content delivery mechanism. But this approach is really Digital Rights Management – which essentially means encryption. This approach has been tried many times for both content delivery and user data protection. The issue is that when allowing a third party to decrypt or access any protected data, the data must be decrypted and removed from its protection. If you use this technology to deliver videos or music it is only as secure as the users who access the data. This approach works well enough for DirecTV because they control the hardware and software ecosystem, but falls apart in conventional cases where the user controls the endpoint. Similarly, sharing encrypted data and keys with a third party defeats the point. – AL

  5. Follow the money: I thought about calling this one “Protection racket”, but even the CryptoLocker guys actually unlock your stuff when you pay them, as promised. It turns out the AdBlock Plus folks take money from Microsoft, Google, and Amazon to allow their ads through. The company’s business model is built on whitelisting ‘good’ ads that comply with their policies (which often includes payment to the AdBlock Plus developers). And they do acknowledge this on their site. That change was made around the end of January 2014 (thank you, Internet Archive). I get it, everyone needs to make money, and not all ads are bad. Many good sites rely on them, although that’s a rough business. I would actually stop blocking most ads if they would stop tracking me even when I don’t click on them. But a business model like this is dangerous. A company becomes beholden to financial interests which don’t necessarily align with its users’. That’s one reason I have been so excited by Apple seeing privacy of customer data as a competitive advantage – as much as companies commit to grand ideals (such as “Don’t be evil.”), it sure is easier to stick to them when they help you make piles of money. – RM

  6. Hack your apps (before the other guys do): This has been out there for a while, but it’s disturbing nonetheless. Marriott collected lots of private information about customers, which isn’t a problem. Unless that information is accessible via a porous mobile app – as it was. I know many organizations take their mobile apps seriously, treating them just like other Internet-facing assets in terms of security. It may be a generalization but that last statement cuts both ways. Organizations that take security seriously do so on any customer-facing technology – with security assessments and penetration tests. And those that don’t… probably don’t. Just understand that mobile apps are a different attack vector, and we will see different ways to steal information. So hack your own apps – otherwise an adversary will. – MR

—Mike Rothman

Applied Threat Intelligence: Use Case #3, Preventative Controls

By Mike Rothman

So far, as we have looked to apply threat intelligence to your security processes, we have focused on detection/security monitoring and investigation/incident response functions. Let’s jump backwards in the attack chain to take a look at how threat intelligence can be used in preventative controls within your environment.

By ‘preventative’ we mean any control that is in the flow, and can therefore prevent attacks. These include:

  1. Network Security Devices: These are typically firewalls (including next-generation models) and intrusion prevention systems. But you can also include devices such as web application firewalls, which operate at different levels in the stack but are inline and can thus block attacks.
  2. Content Security Devices/Services: Web and email filters can also function as preventative controls because they inspect traffic as it passes through and can enforce policies/block attacks.
  3. Endpoint Security Technologies: Protecting an endpoint is a broad category, and can include traditional endpoint protection (anti-malware) and new-fangled advanced endpoint protection technologies such as isolation and advanced heuristics. We described the current state of endpoint security in our Advanced Endpoint Protection paper, so check that out for detail on the technologies.

TI + Preventative Controls

Once again we consider how to apply TI through a process map. So we dust off the very complicated Network Security Operations process map from NSO Quant, simplify a bit, and add threat intelligence.


Rule Management

The process starts with managing the rules that underlie the preventative controls. This includes attack signatures and the policies & rules that control attack response. The process trigger will probably be a service request (open this port for that customer, etc.), signature update, policy update, or threat intelligence alert (drop traffic from this set of botnet IPs). We will talk more about threat intel sources a bit later.

  1. Policy Review: Given the infinite variety of potential monitoring and blocking policies available on preventative controls, keeping the rules current is critical. Keep the severe performance hit (and false positive implications) of deploying too many policies in mind as you decide what policies to deploy.
  2. Define/Update/Document Rules: This next step involves defining the depth and breadth of the security policies, including the actions (block, alert, log, etc.) to take if an attack is detected – whether via rule violation, signature trigger, threat intelligence, or another method. Initial policy deployment should include a Q/A process to ensure no rules impair critical applications’ ability to communicate either internally or externally.
  3. Write/Acquire New Rules: Locate the signature, acquire it, and validate the integrity of the signature file(s). These days most signatures are downloaded, so this ensures the download completed properly. Perform an initial evaluation of each signature to determine whether it applies within your organization, what type of attack it detects, and whether it is relevant in your environment. This initial prioritization phase determines the nature of each new/updated signature, its relevance and general priority for your organization, and any possible workarounds.
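The integrity validation in step 3 usually amounts to comparing the downloaded file against the digest the vendor publishes alongside it. A minimal sketch – the bundle contents and digest here are made up for illustration:

```python
import hashlib

def signature_file_intact(data: bytes, published_sha256: str) -> bool:
    """Verify a downloaded signature bundle against the vendor's published digest."""
    return hashlib.sha256(data).hexdigest() == published_sha256

# Hypothetical signature bundle and the digest published alongside it
bundle = b"sid:2015001; msg:'known C&C beacon'"
published = hashlib.sha256(bundle).hexdigest()

print(signature_file_intact(bundle, published))            # True: clean download
print(signature_file_intact(bundle + b"\x00", published))  # False: corrupted or tampered
```

Only after the download passes this check is it worth spending analyst time on the relevance and prioritization evaluation.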

Change Management

In this phase rule additions, changes, updates, and deletions are handled.

  1. Process Change Request: Based on the trigger within the Rule Management process, a change to the preventative control(s) is requested. The change’s priority is based on the nature of the rule update and risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if downtime or changes to application use models are anticipated.
  2. Test and Approve: This step includes development of test criteria, performance of any required testing, analysis of results, and release approval of the signature/rule change once it meets your requirements. This is critical if you are looking to automate rules based on threat intelligence, as we will discuss later in the post. Changes may be implemented in log-only mode to observe their impact before committing to blocking mode in production (critical for threat intelligence-based rules). With an understanding of the impact of the change(s), the request is either approved or denied.
  3. Deploy: Prepare the target devices for deployment, deliver the change, and return them to normal operation. Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure no disruption to production systems.
  4. Audit/Validate: Part of the full process of making the change is not only having the Operations team confirm it during the Deploy step, but also having another entity (internal or external, but not part of Ops) audit it to provide separation of duties. This involves validating the change to ensure the policies were properly updated, and matching it against a specific request. This closes the loop and ensures there is a documentation trail for every change. Depending on how automated you want this process to be, this step may not apply.
  5. Monitor Issues/Tune: The final step of the change management process involves a burn-in period when each rule change is scrutinized for unintended consequences such as unacceptable performance impact, false positives, security exposures, or undesirable application impact. For threat intelligence-based dynamic rules false positives are the issue of most concern. The testing process in the Test and Approve step is intended to minimize these issues, but there are variances between test environments and production networks so we recommend a probationary period for each new or updated rule, just in case.
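The probationary period in the last step can be reduced to a simple decision rule: run the new rule in log-only mode, then promote, roll back, or keep watching based on its false positive rate. A minimal sketch, with illustrative thresholds (tune `min_alerts` and `max_fp_rate` to your environment):

```python
def probation_verdict(alerts: int, false_positives: int,
                      max_fp_rate: float = 0.05, min_alerts: int = 20) -> str:
    """Decide what to do with a rule running in log-only mode.

    Thresholds are illustrative, not prescriptive.
    """
    if alerts < min_alerts:
        return "extend"  # not enough data to judge yet
    fp_rate = false_positives / alerts
    return "promote" if fp_rate <= max_fp_rate else "rollback"

print(probation_verdict(alerts=100, false_positives=2))   # promote: 2% FP rate
print(probation_verdict(alerts=100, false_positives=30))  # rollback: 30% FP rate
print(probation_verdict(alerts=5, false_positives=0))     # extend: too little data
```

For threat intelligence-based dynamic rules, the rollback path matters as much as the promote path – a noisy feed entry should age out without a human having to hunt it down.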

Automatic Deployment

The promise of applied threat intelligence is to have rules updated dynamically per intelligence gleaned from outside your organization. It adds a bit of credibility to “getting ahead of the threat”. You can never really get ‘ahead’ of the threat, but certainly can prepare before it hits you. But security professionals need to accustom themselves to updating rules from data.

We joke in conference talks about how security folks hate the idea of Skynet tweaking their defenses. There is still substantial resistance to updating access control rules on firewalls or IPS blocking actions without human intervention. But we expect this resistance to ebb as cloud computing continues to gain traction, including in enterprise environments. The only way to manage an environment at cloud speed and scale is with automation. So automation is the reality in pretty much every virtualized environment, and making inroads in non-virtual security as well.

So what can you do to get comfortable with automation? Automate things! No, we aren’t being cheeky. You need to start simple – perhaps by implementing blocking rules based on very bad IP reputation scores. Monitor your environment closely to ensure minimal false positives. Tune your rules if necessary, and then move on to another use case.
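To make the "start simple" advice concrete, here is a minimal sketch of reputation-based rule generation. The feed format, scores, and thresholds are all made up for illustration: only the worst offenders get blocked automatically, while mid-range scores are logged for human review.

```python
def rules_from_reputation(feed, block_at=90, log_at=60):
    """Translate a reputation feed into firewall actions.

    Only the worst scores are blocked automatically; mid-range scores are
    logged so a human can review before the rule is promoted to blocking.
    """
    actions = []
    for entry in feed:
        if entry["score"] >= block_at:
            actions.append(("block", entry["ip"]))
        elif entry["score"] >= log_at:
            actions.append(("log", entry["ip"]))
    return actions

# Hypothetical feed entries (score: 0 = clean, 100 = known bad)
feed = [
    {"ip": "198.51.100.7", "score": 97},   # known botnet node
    {"ip": "203.0.113.40", "score": 71},   # suspicious, watch it
    {"ip": "192.0.2.15",   "score": 12},   # probably fine
]
print(rules_from_reputation(feed))
# [('block', '198.51.100.7'), ('log', '203.0.113.40')]
```

Starting with a high `block_at` threshold keeps the automated blast radius small; as your false positive rate stays low, you can lower it incrementally.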

Not all situations where automated response makes sense are related to threat intelligence. In case of a data breach, lockdown, or zero-day attack (either imminent or in progress), you might want to implement (temporary) blocks or workarounds automatically based on predefined policies. For example, if you detect a device or cloud instance acting strangely, you can automatically move it to a quarantine network (or security zone) for investigation. This doesn’t (and shouldn’t) require human intervention, so long as you are comfortable with your trigger criteria.
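A quarantine workflow like that can be sketched in a few lines. The trigger criteria and instance attributes below are hypothetical – the point is that the criteria are defined ahead of time, not improvised mid-incident:

```python
def apply_triggers(instance, triggers):
    """Move an instance to a quarantine zone if any predefined trigger fires."""
    fired = [name for name, check in triggers.items() if check(instance)]
    if fired:
        instance["zone"] = "quarantine"
        instance["reasons"] = fired
    return instance

# Hypothetical trigger criteria, defined in advance as policy
triggers = {
    "beaconing": lambda i: i["egress_conns_per_min"] > 1000,
    "new_listener": lambda i: 4444 in i["listening_ports"],
}

instance = {"id": "i-0abc", "zone": "prod",
            "egress_conns_per_min": 4200, "listening_ports": [443]}
print(apply_triggers(instance, triggers)["zone"])  # quarantine
```

Recording which trigger fired ("reasons") gives the responder a starting point when they pick up the quarantined instance for investigation.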

Useful TI

Now let’s consider collecting external data useful for preventing attacks. This includes the following types of threat intelligence:

  • File Reputation: Reputation can be thought of as just a fancy term for traditional AV, but whatever you call it, malware proliferates via file transmission. Polymorphic malware does make signature matching much harder, but not impossible. The ability to block known-bad files close to the edge of the network is valuable – the closer to the edge, the better.
  • Adversary Networks: Some networks are just no good. They are run by disreputable hosting companies who provide a safe haven for spammers, bot masters, and other online crime factions. There is no reason your networks should communicate with these kinds of networks. You can use a dynamic list of known bad and/or suspicious addresses to block ingress and egress traffic. As with any intelligence feed, you should monitor effectiveness, because known-bad networks change every second of every day, and keeping current is critical.
  • Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices, especially on endpoints. The shiny new term for an attack signature is Indicator of Compromise. But whatever you call it, an IoC is a handy machine-readable way to identify registry, configuration, or system file changes that indicate what malicious code does to devices – which is why we call it a malware indicator. This kind of detailed telemetry from endpoints and networks enables you to prevent attacks on the way in, and take advantage of others’ misfortune.
  • Command and Control Patterns: One specialized type of adversary network detection is intelligence on Command and Control (C&C) networks. These feeds track global C&C traffic to pinpoint malware originators, botnet controllers, and other IP addresses and sites to watch for as you monitor your environment.
  • Phishing Sites: Current advanced attacks tend to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically find email the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and their tactics, and give you something to block on your web filters and email security services.
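Pulling these feed types together, a monitoring pipeline just checks each observed event against the relevant indicator sets. A minimal sketch – every indicator value below is fabricated for illustration:

```python
def match_indicators(event, indicators):
    """Return which indicator categories an observed event hits."""
    hits = []
    if event.get("file_md5") in indicators["bad_file_hashes"]:
        hits.append("file_reputation")
    if event.get("dst_ip") in indicators["adversary_ips"]:
        hits.append("adversary_network")
    if event.get("url_domain") in indicators["phishing_domains"]:
        hits.append("phishing_site")
    return hits

# All indicator values are fabricated for this example
indicators = {
    "bad_file_hashes": {"d2a84f4b8b650937ec8f73cd8be2c74a"},
    "adversary_ips": {"198.51.100.7"},
    "phishing_domains": {"login-verify.example.net"},
}
event = {"dst_ip": "198.51.100.7",
         "file_md5": "d2a84f4b8b650937ec8f73cd8be2c74a"}
print(match_indicators(event, indicators))
# ['file_reputation', 'adversary_network']
```

In practice the indicator sets would be refreshed continuously from your feeds, and a multi-category hit like this one is exactly the sort of event worth blocking or escalating.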

Ensuring your job security is usually job #1, so iterate and tune processes aggressively. Do add new data sources and use cases, but not too fast. Make sure you don’t automate a bad process, which causes false positives and system downtime. Slow and steady wins this race.

We will wrap up this series with our next post, on building out your threat intelligence gathering program. With all these cool use cases for leveraging TI to improve your security program, you should make sure it is reliable, actionable, and relevant.

—Mike Rothman

Friday, January 30, 2015

Summary: Heads up

By Rich

Rich here.

Last week I talked about learning to grind it out. Whether it’s a new race distance, or plowing through a paper or code that isn’t really flowing, sometimes you need to just put your head down, set a pace, and keep moving.

And sometimes that’s the absolute worst thing to do.

I have always been a natural sprinter; attracted both to sports and other achievements I could rocket through with sheer speed and power. I was horrible at endurance endeavors (physical and mental) as a kid and into my early 20’s. I mean, not “pretending to be humble horrible” but “never got the Presidential Physical Fitness thing because I couldn’t run a mile worth a crap” horrible.

And procrastinating? Oh my. I had, I shit you not, a note in my file at the University of Colorado not to “cut him any breaks” because I so thoroughly manipulated the system for so long. (8 years of continuous undergrad… you make a few enemies on the way). It was handwritten on a Post-It, right on my official folder.

It was in my mid-20’s that I gained the mental capacity for endurance. Mountain rescue was the biggest motivator because only a small percentage of patients fell near roads. I learned to carry extremely heavy loads over long distances, and then take care of a patient at the end. You can’t rely on endurance – we used to joke that our patients were stable or dead, since it isn’t like we could just scoop them off the road (mostly).

Grinding is essential, but can be incredibly unproductive if you don’t pop your head up every now and then. Like the time we were on a physically grueling rescue, at about 11,000’, at night, in freezing rain, over rough terrain. Those of us hauling the patient out were turning into zombies, but someone realized we were hitting the kind of zone where mistakes are made, people get hurt, and it was time to stop.

Like I said before: “stable or dead”, and this guy was relatively stable. So we stopped, a couple team members bunkered in with him for the night, and we managed to get a military helicopter for him in the morning. (It may have almost crashed, but we won’t talk about that.)

It hadn’t occurred to me to stop; I was too deep in my inner grind, but it was the right decision. Just like the problem I was having with some code last year. It wouldn’t work, no matter what I did, and I kept trying variation after variation. I hit help forums, chat rooms, you name it.

Then I realized it wasn’t me, it was a bug (this time) in the SDK I was using. Only when I tried to solve the problem from an entirely new angle, instead of trying to fix the syntax, did I figure it out. The cloud, especially, is funny that way. Function diverges from documentation (if there is any) much more than you’d think. Just ask Adrian about AWS SNS and undocumented, mandatory, account IDs.

In security we can be particularly prone to grinding it out. Force those logs into the SIEM, update all the vulnerable servers before the auditor comes back, clear all the IDS alerts.

But I think we are at the early edge of a massive transition, where popping our heads up to look for alternatives might be the best approach. ArcSight doesn’t have an AWS CloudTrail connector? Check out a hybrid ELK stack or cloud-native SIEM. Tired of crash patching for the next insert pseudo-cool name here vulnerability? Talk to your developers about autoscaling and continuous deployment.

Every year I try to block out a week, or at least a few half-days, to sit back, focus on research, and see which of my current assumptions and work patterns are wrong or no longer productive. Call it “active resting”. I think I have come up with some cool stuff for this year, both in my work habits and security controls. Now I just need time to play with the code and configurations, to see if any of it actually works.

But unlike my old patients, my code and writing seem to be both unstable and dead, so I won’t get my hopes too high.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts


Wednesday, January 28, 2015

Incite 1/28/2015: Shedding Your Skin

By Mike Rothman

You are constantly changing. We all are. You live, you learn, you adapt, you change. It seems that if you pay attention, every 7-9 years or so you realize you hardly recognize the person looking back at you from the mirror. Sometimes the changes are very positive. Other times a cycle is not as favorable. That’s part of the experience. Yet many people don’t think anything changes. They expect the same person year after year.

I am a case in point. I have owned my anger issues from growing up and my early adulthood. They resulted in a number of failed jobs and relationships. It wasn’t until I had to face the reality that my kids would grow up in fear of me that I decided to change. It wasn’t easy, but I have been working at it diligently for the past 8 years, and at this point I really don’t get angry very often.

Done with this skin says the snake

But lots of folks still see my grumpy persona, even though I’m not grumpy. For example I was briefing a new company a few weeks ago. We went through their pitch, and I provided some feedback. Some of it was hard for them to hear because their story needed a lot of work. At some point during the discussion, the CEO said, “You’re not so mean.” Uh, what? It turns out the PR handlers had prepared them for some kind of troll under the bridge waiting to chew their heads off.

At one point I probably was that troll. I would say inflammatory things and be disagreeable because I didn’t understand my own anger. Belittling others made me feel better. I was not about helping the other person; I was about my own issues. I convinced myself that being a douche was a better way to get my message across. That approach was definitely more memorable, but not in a positive way. So as I changed, my approach to business changed as well. Most folks appreciate the kinder Incite I provide. Others miss crankypants, but that’s probably because they are pretty cranky themselves and want someone to commiserate with over their miserable existence.

What’s funny is that when I meet new people, they have no idea about my old curmudgeon persona. So they are very surprised when someone tells a story about me being a prick back in the day. That kind of story is inconsistent with what they see. Some folks would get offended by hearing those stories, but I like them. It just underscores how years of work have yielded results.

Some folks have a hard time letting go of who they thought you were, even as you change. You shed your skin and took a different shape, but all they can see is the old persona. When you don’t want to wear that persona anymore, those folks tend to move out of your life. They need to go because they don’t support your growth. They hold on to the old.

But don’t fret. New people come in. Ones who aren’t bound by who you used to be – who can appreciate who you are now. And those are the kinds of folks you should be spending time with.


Photo credit: “Snake Skin” originally uploaded by James Lee

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Applied Threat Intelligence

Network Security Gateway Evolution

Security and Privacy on the Encrypted Network

Newly Published Papers

Incite 4 U

  1. Click. Click. Boom! I did an interview last week where I said the greatest security risk of the Internet of Things is letting it distract you from all of the other more immediate security risks you face. But the only reason that is even remotely accurate is because I don’t include industrial control systems, multifunction printers, or other more traditional ‘things’ in the IoT. But if you do count everything connected to the Internet, some real problems pop up. Take the fuel gauge vulnerability just released by HD Moore/Rapid7. Scan the Internet, find hundreds of vulnerable gas stations, all of which could cause real-world kinetic-style problems. The answer always comes back to security basics: know the risk, compartmentalize, update devices, etc. Some manufacturers are responsible, others not so much, and as a security pro it is worth factoring this reality into your risk profile. You know, like, “lightbulb risk: low… tank with tons of explosive liquid: high”. – RM

  2. How fast is a fast enough response? Richard Bejtlich asks an age-old question: how quickly should incidents be responded to? When he ran a response team the mandate was detection and mitigation in less than an hour. And this was a huge company, staffed to meet that service level. They had processes and tools to provide that kind of response. The fact is, you want to be able to respond as quickly as you are staffed to. If you have 2 people and a lot of attack surface, it may not be realistic to respond in an hour. If senior management is okay with that, who are you to argue? But that’s not my pet peeve. It’s the folks who think they need to buy real-time alerts when they aren’t staffed to investigate and remediate. If you have a queue of stuff to validate from your security monitors, then getting more alerts faster doesn’t solve any problems. It only exacerbates them. So make sure your tools are aligned with your processes, which are aligned with your staffing level and expertise. Or see your alerts fall on the floor, whether you are a target or not. – MR

  3. Positive reviews: What do you do if you think the software you’re using might have been compromised by hostile third parties? You could review the source code to see if it’s clean. It’s openness that encouraged enterprises to trust non-commercial products, right? But what if it’s a huge commercial distribution, and not open source? If you are talking about Microsoft’s or Apple’s OS code, not only is it extremely tough (like, impossible) to get access, but any effort to review the code would be monstrous and not feasible. In what I believe is unprecedented access, China has gotten the okay to search Apple’s software for back doors to give them confidence that no foreign power has manipulated the code. But this won’t be limited to code – it includes an investigation of build and delivery processes as well, to ensure that substitutions don’t occur along the way. A likely – and very good – outcome for Apple (given the amount of business they do in China), and the resulting decreased pressure from various governments to insert backdoors into the software. – AL

  4. Sec your aaS: One weird part of our business that has cropped up in the past year is working more with SaaS companies who actually care about security. Some big names, many smaller ones, all realizing they are a giant target for every attacker. But I’d have to say these SaaS providers are the minority. Most just don’t have money in the early stages (when it’s most important to build in security) to drop the cash for someone like me to walk in the door. So I enjoyed seeing Bessemer Venture Partners publish a startup security guide. More VCs and funds should provide this kind of support, because their investment goes poof if their companies suffer a major data loss. Or, you know, hire us to do it. – RM

  5. You fix it: It’s shocking that Chip and PIN cards, a technology proven to drastically reduce fraud rates in dozens of other countries, have not been widely adopted in the US. But it’s really sad when the US government beats the banks to market: The US is rolling out Chip and PIN cards for all federal employees this year to promote EMV compliant cards and usage in the US. Chips alleviate card cloning attacks and PINs thwart use of stolen cards. In the EU adoption of Chip and PIN has virtually eliminated card-present fraud. But the people who would benefit the most – banks – don’t bear the costs of deploying and servicing Chip and PIN; issuers and merchants do. So each party acts in its own best interest. Leading by example is great, but if the US government wanted to really promote Chip and PIN, they would help broker (or mandate) a deal among these stakeholders to fix the systemic problem. – AL

  6. Same problem. Different technology… During his day job as a Gartner analyst, Anton gets the same questions over and over again. Both Rich and I know that situation very well. He posted about folks now asking for security analytics, but really wonders whether they just want a SIEM that works. That is actually the wrong question. What customers want are security alerts that help them do their jobs. If their SIEM provided that they wouldn’t be looking at shiny new technologies like big data security analytics and other buzzword-friendly new products. Customers don’t care what you call it, they care about outcomes – and right now they have no idea which alerts matter. But that’s Vendor 101: if the existing technology doesn’t solve the problem, rename the category and sell hope to customers all over again. And the beat goes on. Now back on my anti-cynicism meds. – MR

—Mike Rothman

Tuesday, January 27, 2015

Applied Threat Intelligence: Use Case #2, Incident Response/Management

By Mike Rothman

As we continue with our Applied Threat Intelligence series, let us now look at the next use case: incident response/management. Similar to the way threat intelligence helps with security monitoring, you can use TI to focus investigations on the devices most likely to be impacted, and help to identify adversaries and their tactics to streamline response.


As in our last post, we will revisit the incident response and management process, and then figure out which types of TI data can be most useful and where.

Threat Intelligence and Incident Response

You can get a full description of all the process steps in our full Leveraging TI in Incident Response/Management paper.

Trigger and escalate

The incident management process starts with a trigger kicking off a response, and the basic information you need to figure out what’s going on depends on what triggered the alert. You may get alerts from all over the place, including monitoring systems and the help desk. But not all alerts require a full incident response – much of what you deal with on a day-to-day basis is handled by existing security processes.

Where do you draw the line between a full response and a cursory look? That depends entirely on your organization. Regardless of the criteria you choose, all parties (including management, ops, security, etc.) must be clear on which situations require a full investigation and which do not, before you can decide whether to pull the trigger. Once you escalate, an appropriate resource is assigned and triage begins.


Before you do anything, you need to define accountabilities within the team. That means specifying the incident handler and lining up resources based on the expertise needed. Perhaps you need some Windows gurus to isolate a specific vulnerability in XP. Or a Linux jockey to understand how the system configurations were changed. Every response varies a bit, and you want to make sure you have the right team in place.

As you narrow down the scope of data needing analysis, you might filter on the segments attacked or logs of the application in question. You might collect forensics from all endpoints at a certain office, if you believe the incident was contained. Data reduction is necessary to keep the data set to investigate manageable.


You may have an initial idea of who is attacking you, how they are doing it, and their mission based on the alert that triggered the response, but now you need to prove that hypothesis. This is where threat intelligence plays a huge role in accelerating your response. Based on indicators you found you can use a TI service to help identify a potentially responsible party, or more likely a handful of candidates. You don’t need legal attribution, but this information can help you understand the attacker and their tactics.

Then you need to size up and scope out the damage. The goal here is to take the initial information provided and supplement it quickly to determine the extent and scope of the incident. To determine scope dig into the collected data to establish the systems, networks, and data involved. Don’t worry about pinpointing every affected device at this point – your goal is to size the incident and generate ideas for how best to mitigate it. Finally, based on the initial assessment, use your predefined criteria to decide whether a formal investigation is in order. If yes, start thinking about chain of custody and using some kind of case management system to track the evidence.

Quarantine and image

Once you have a handle (however tenuous) on the situation you need to figure out how to contain the damage. This usually involves taking the device offline and starting the investigation. You could move it onto a separate network with access to nothing real, or disconnect it from the network altogether. You could turn the device off. Regardless of what you decide, do not act rashly – you need to make sure things do not get worse, and avoid destroying evidence. Many malware kits (and attackers) will wipe a device if it is powered down or disconnected from the network, so be careful.

Next you take a forensic image of the affected devices. You need to make sure your responders understand how the law works in case of prosecution, especially what provides a basis for reasonable doubt in court.


All this work is really a precursor to the full investigation, when you dig deep into the attack to understand what exactly happened. We like timelines to structure your investigation, as they help you understand what happened and when. Start with the initial attack vector and follow the adversary as they systematically moved to achieve their mission. To ensure a complete cleanup, the investigation must include pinpointing exactly which devices were affected and reviewing exfiltrated data via full packet capture from perimeter networks.

It turns out investigation is more art than science, and you will never actually know everything, so focus on what you do know. At some point a device was compromised. At another subsequent point data was exfiltrated. Systematically fill in gaps to understand what the attacker did and how. Focus on completeness of the investigation – a missed compromised device is sure to mean reinfection somewhere down the line. Then perform a damage assessment to determine (to the degree possible) what was lost.
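
The timeline approach described above can be sketched in a few lines. The event records here are hypothetical, just enough to show how ordering raw findings chronologically exposes the adversary’s path from initial compromise to exfiltration:

```python
from datetime import datetime

# Hypothetical findings gathered during the investigation, in the
# arbitrary order they were discovered.
events = [
    {"time": "2015-01-12T09:14:00", "device": "db-server-02", "what": "bulk data staged to temp dir"},
    {"time": "2015-01-10T22:03:00", "device": "laptop-17",    "what": "phishing attachment opened"},
    {"time": "2015-01-11T03:47:00", "device": "laptop-17",    "what": "outbound C&C beacon observed"},
    {"time": "2015-01-12T11:30:00", "device": "db-server-02", "what": "large transfer to external IP"},
]

def build_timeline(events):
    """Order raw findings chronologically to trace the adversary's path."""
    return sorted(events, key=lambda e: datetime.fromisoformat(e["time"]))

for e in build_timeline(events):
    print(e["time"], e["device"], "-", e["what"])
```

Read top to bottom, the sorted output tells the story: initial vector on the laptop, then lateral movement to the database server, then exfiltration.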


There are many ways to ensure the attack doesn’t happen again. Some temporary measures include shutting down access to certain devices via specific protocols or locking down traffic in and out of critical servers. Or possibly blocking outbound communication to certain regions based on adversary intelligence. Also consider more ‘permanent’ mitigations, such as putting in place a service or product to block denial of service attacks.

Once you have a list of mitigation activities you marshal operational resources to work through it. We favor remediating affected devices in one fell swoop (big bang), rather than incremental cleaning/reimaging. We have found it more effective to eradicate the adversary from your environment as quickly as possible because a slow cleanup provides opportunity for them to dig deeper.

The mitigation is complete once you have halted the damage and regained the ability to continue operations. Your environment may not be pretty as you finish the mitigation, with a bunch of temporary workarounds to protect information and make sure devices are no longer affected. Make sure to always favor speed over style because time is of the essence.

Clean up

Now take a step back and clean up any disruptions to normal business operations, making sure you are confident that particular attack will never happen again. Incident managers focus on completing the investigation and cleaning out temporary controls, while Operations handles updating software and restoring normal operations. This could mean updating patches on all systems, checking for and cleaning up malware, restoring systems from backup and bringing them back up to date, etc.


Your last step is to analyze the response process itself. What can you identify as opportunities for improvement? Should you change the team or your response technology (tools)? Don’t make the same mistakes again, and be honest with yourselves about what needs to improve.

You cannot completely prevent attacks, so the key is to optimize your response process to detect and manage problems as quickly and efficiently as possible, which brings us full circle back to threat intelligence. You also need to learn about your adversary during this process. You were attacked once and will likely be attacked again. Use threat intelligence to drive the feedback loop and make sure your controls change as often as needed to be ready for adversaries.

Useful TI

Now let’s delve into collecting the external data that will be useful to streamline investigation. This involves gathering threat intelligence, including the following types:

  • Compromised devices: The most actionable intelligence you can get is a clear indication of compromised devices. This provides an excellent place to begin investigation and manage your response. There are many ways you might conclude a device is compromised. The first is clear indicators of command and control traffic coming from the device, such as DNS requests whose frequency and content indicate a domain generating algorithm (DGA) to locate botnet controllers. Monitoring network traffic from the device can also catch files or other sensitive data being transmitted, indicating exfiltration or a remote access trojan.
  • Malware indicators: You can build a lab and perform both static and dynamic analysis of malware samples to identify specific indications of how malware compromises devices. This is a major commitment (as we described in Malware Analysis Quant) – thorough and useful analysis requires significant investment, resources, and expertise. The good news is that numerous commercial services now offer those indicators in formats you can use to easily search through collected security data.
  • Adversary networks: IP reputation data can help you determine the extent of compromise, especially if it is broken up into groups of adversaries. If during your initial investigation you find malware typically associated with Adversary A, you can look for traffic going to networks associated with that adversary. Effective and efficient response requires focus, and knowing which devices may have been compromised in a single attack helps isolate and dig deeper into that attack.
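
As a rough illustration of the DGA heuristic mentioned under compromised devices, here is a sketch that flags domain labels which look machine-generated (long and high-entropy). The thresholds are purely illustrative; real detection combines many more signals, such as query frequency and NXDOMAIN rates:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain: str, entropy_threshold: float = 3.5,
                   min_length: int = 12) -> bool:
    """Flag domains whose first label looks machine-generated:
    long and high-entropy. Thresholds are illustrative only."""
    label = domain.split(".")[0].lower()
    return len(label) >= min_length and shannon_entropy(label) >= entropy_threshold

# Human-registered names score low; DGA-style labels score high.
print(looks_like_dga("securosis.com"))          # False
print(looks_like_dga("xkq7d93jwhz4bp1m.info"))  # True
```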

Given the reality of scarce resources on the security team, many organizations select a commercial provider to develop and provide this threat intelligence, or leverage data provided as part of a product or service. Stand-alone threat intelligence is typically packaged as a feed for direct integration into incident response/monitoring platforms. Wrapping it all together produces the process map above. This map encompasses profiling the adversary, collecting intelligence, analyzing threats, and then integrating threat intelligence into incident response.

Action requires automation

The key to making this entire process run is automation. We talk about automation a lot these days, with good reason. Things happen too quickly in technology infrastructure to do much of anything manually, especially in the heat of an investigation. You need to pull threat intelligence in a machine-readable format, and pump it into an analysis platform without human intervention.
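
A minimal sketch of what machine-readable ingestion looks like, assuming a hypothetical JSON feed format (the field names are illustrative, not any vendor’s real schema):

```python
import json

# A minimal machine-readable feed, as a TI provider might deliver it.
feed_json = """
{"indicators": [
  {"type": "ip",     "value": "203.0.113.7", "confidence": 85},
  {"type": "domain", "value": "bad.example", "confidence": 60},
  {"type": "hash",   "value": "d41d8cd98f00b204e9800998ecf8427e", "confidence": 95}
]}
"""

def ingest(feed_text: str, min_confidence: int = 75):
    """Parse a TI feed and keep only indicators worth alerting on,
    with no human in the loop."""
    feed = json.loads(feed_text)
    return [i for i in feed["indicators"] if i["confidence"] >= min_confidence]

watchlist = ingest(feed_json)
print([i["value"] for i in watchlist])  # the two high-confidence indicators
```

The point is the shape of the pipeline: parse, filter, load; no analyst copying indicators by hand.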

But that’s not all. In our next post we will discuss how to use TI within active controls to proactively block attacks. What? That was pretty difficult to write, given our general skepticism about really preventing attacks, but new technology is beginning to make a difference.

—Mike Rothman

Applied Threat Intelligence: Use Case #1, Security Monitoring

By Mike Rothman

As we discussed in Defining TI, threat intelligence can help detect attacks earlier by benefiting from the misfortune of others and looking for attack patterns being used against higher profile targets. This is necessary because you simply cannot prevent everything. No way, no how. So you need to get better and faster at responding. The first step is improving detection to shorten the window between compromise and discovery of compromise.

Before we jump into how – the meat of this series – we need to revisit what your security monitoring process can look like with threat intelligence.


We will put the cart a bit before the horse: we will assume you already collect threat intelligence as described in the last post. Of course you cannot just wake up and find compelling TI – you need to build a process and ecosystem to get there, which we haven’t described in any detail yet. We will defer that discussion a little, until you understand the context of the problem to solve; then the techniques for systematically gathering TI will make more sense.

TISM Process Map

Let’s dig into specific aspects of the process map:

Aggregate Security Data

The steps involved in aggregating security data are fairly straightforward. You need to enumerate devices to monitor in your environment, scope out the kinds of data you will get from them, and define collection policies and correlation rules – all described in gory detail in Network Security Operations Quant. Then you can move on to actively collecting data and storing it in a repository to allow flexible, fast, and efficient analysis and searching.
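
A toy sketch of the “repository that allows fast searching” idea: index events by device as they are collected, so triage queries don’t scan everything. A real deployment uses a purpose-built store, but the shape is the same:

```python
from collections import defaultdict

class EventRepository:
    """Toy event store indexed by device, so triage queries stay fast."""
    def __init__(self):
        self.events = []
        self.by_device = defaultdict(list)

    def collect(self, event: dict):
        """Store the raw event and update the device index."""
        self.events.append(event)
        self.by_device[event["device"]].append(event)

    def search(self, device: str):
        """Return all events for one device without scanning the full store."""
        return self.by_device.get(device, [])

repo = EventRepository()
repo.collect({"device": "fw-01",  "msg": "denied outbound 203.0.113.7:443"})
repo.collect({"device": "web-03", "msg": "login failure for admin"})
print(len(repo.search("fw-01")))  # 1
```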

Security Analytics

The security monitoring process now has two distinct sources to analyze, correlate, and alert on: external threat intelligence and internal security data.

  1. Automate TI integration: Given the volume of TI information and its rate of change, the only way to effectively leverage external TI is to automate data ingestion into the security monitoring platform; you also need to automatically update alerts, reports, and dashboards.
  2. Baseline environment: You don’t really know what kinds of attacks you are looking for yet, so you will want to gather a baseline of ‘normal’ activity within your environment and then look for anomalies, which may indicate compromise and warrant further investigation.
  3. Analyze security data: The analysis process still involves normalizing, correlating, reducing, and tuning the data and rules to generate useful and accurate alerts.
  4. Alert: When a device shows one or more indicators of compromise, an alert triggers.
  5. Prioritize alerts: Prioritize alerts based on the number, frequency, and types of indicators which triggered them; use these priorities to decide which devices to further investigate, and in what order. Integrated threat intelligence can help by providing additional context, allowing responders to prioritize threats so analysts can investigate the highest risks first.
  6. Deep collection: Depending on the priority of the alert you might want to collect more detailed telemetry from the device, and perhaps start capturing network packet data to and from it. This data can facilitate validation and identification of compromise, and facilitate forensic investigation if it comes to that.
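
Step 5 above can be sketched as a simple weighted score over the indicators that fired; the indicator types and weights here are purely illustrative, not a standard:

```python
def priority(alert: dict) -> int:
    """Score an alert by which indicator types fired and how often.
    Weights are illustrative only."""
    weights = {"c2_traffic": 5, "malware_hash": 4, "bad_ip": 2, "anomaly": 1}
    return sum(weights.get(ind, 1) * count
               for ind, count in alert["indicators"].items())

alerts = [
    {"device": "laptop-17",    "indicators": {"anomaly": 3}},
    {"device": "db-server-02", "indicators": {"c2_traffic": 2, "bad_ip": 4}},
]

# Work the queue highest-risk first.
queue = sorted(alerts, key=priority, reverse=True)
print([a["device"] for a in queue])  # db-server-02 first
```

The integrated TI shows up as the weights: indicators corroborated by external intelligence (C&C traffic, known malware hashes) push a device to the top of the queue.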


Once you have an alert, and have gathered data about the device and attack, you need to determine whether it was actually compromised or the alert was a false positive. If a device has been compromised you need to escalate – either to an operations team for remediation/clean-up, or to an investigation team for more thorough incident response and analysis. To ensure both processes improve constantly you should learn from each validation step: critically evaluate the intelligence, as well as the policy and/or rule that triggered the alert.

For a much deeper discussion of how to Leverage TI in Security Monitoring check out our paper.

Useful TI

We are trying to detect attacks faster in this use case (rather than working on preventing or investigating them), so the most useful types of TI are strong indicators of problems. Let’s review some data sources from our last post, along with how they fit into this use case:

  • Compromised Devices: The most useful kind of TI is a service telling you there is a cesspool of malware on your network. This “smoking gun” can be identified by a number of different indicators, as we will detail below. But if you can get a product to identify those devices with analytics on TI data, it saves you considerable effort analyzing and identifying suspicious devices yourself.

Of course you cannot always find a smoking gun, so specific TI data types are helpful for detecting attacks:

  • File Reputation: Folks pooh-pooh file reputation, but the fact is that a lot of malware still travels around through the tried and true avenue of file transmission. It is true that polymorphic malware makes it much harder to match signatures, but it’s not impossible; so tracking the presence of files can be helpful for detecting attacks and pinpointing the extent of an outbreak – as we will discuss in detail in our next post.
  • Indicators of Compromise: The shiny new term for an attack signature is indicator of compromise. But whatever you call it an IoC is a handy machine-readable means of identifying registry, configuration, and system file changes that indicate what malicious code does to devices. This kind of detailed telemetry from endpoints and networks enables you to detect attacks as they happen.
  • IP reputation: At this point, given the popularity of spoofing addresses, we cannot recommend making a firm malware/clean judgement based only on IP reputation, but if the device is communicating with known bad addresses and showing other indicators (which can be identified through the wonders of correlation – as a SIEM does) you have more evidence of compromise.
  • C&C Patterns: The last TI data source for this use case is a behavioral analog of IP reputation. You don’t necessarily need to worry about where the device is communicating to – instead you can focus on how it’s communicating. There are known means of polling DNS to find botnet controllers which can be identified and detected by monitoring network traffic (either on the device or at the egress point).
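
The C&C pattern idea can be illustrated with a beaconing check: malware phoning home tends to call back at near-constant intervals, while interactive traffic is far burstier. The jitter threshold below is illustrative, not a tuned production value:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter: float = 0.1) -> bool:
    """Flag callback patterns with low timing jitter relative to the
    average interval. Threshold is illustrative only."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter * mean(gaps)

# A device phoning home every ~300 seconds vs. a person browsing.
print(looks_like_beacon([0, 300, 601, 899, 1200]))  # True
print(looks_like_beacon([0, 12, 500, 520, 1900]))   # False
```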

Of course your monitoring tool must understand how to parse these specific patterns and indicators, which is a good segue to integration.


The discussion so far raises one question: how and where should you use TI data? We suggest you start with your existing tool, which is likely a SIEM of some sort (or a shiny cousin: a security monitoring/analytics platform). To understand the leverage points we need to recall what SIEM does in the first place.

Basically, SIEM looks for patterns in security data via correlation and other fancy math techniques. But a major restriction of SIEM technology is that you need to know what to look for in the data, which means building complex rules for specific attack scenarios/threat models. Many organizations spend considerable time and money figuring out these complex rules, and then spend even more time and money tuning the system to produce some semblance of actionable alerts. And if you ask SOC (Security Operations Center) staff, they will be happy to explain (colorfully) that many of the alerts – even after significant tuning – turn out to be false positives.

Availability of the data types described above changes what you should be looking for – at least initially. If you know there are a handful (or a couple handfuls) of attacks prevalent in the wild at any given moment, you should look for those. This approach is far more efficient and effective than looking for every possible attack.

But this only works if you can keep the rules up to date with the latest threat intelligence reflecting the latest attacks. Unless you have some kind of savant who can parse a threat intelligence feed and build SIEM rules instantly, you will need to automate loading and rule building in your monitoring tool. We discussed the emergence of standards like STIX and TAXII to facilitate the integration of threat feeds into your security monitoring platform in our last post. But even without standards, many TI providers have built custom integrations to feed data into the leading SIEM and security analytics products/services, to extract value from the data.
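
Even without a standards-based integration, automated rule building can be sketched like this; the rule syntax is a generic stand-in, not any real SIEM’s language:

```python
# Turn feed indicators into match rules automatically, so the SIEM stays
# current without an analyst hand-writing each rule.
def rules_from_feed(indicators):
    """Map indicator types to rule templates; skip types we can't handle."""
    templates = {
        "ip":     'dst_ip == "{v}"',
        "domain": 'dns_query endswith "{v}"',
        "hash":   'file_md5 == "{v}"',
    }
    return [templates[i["type"]].format(v=i["value"])
            for i in indicators if i["type"] in templates]

feed = [{"type": "ip", "value": "203.0.113.7"},
        {"type": "domain", "value": "bad.example"},
        {"type": "unknown", "value": "???"}]

for rule in rules_from_feed(feed):
    print(rule)
```

Run on every feed update, this keeps the rule set tracking the intelligence rather than lagging weeks behind it.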

Actionable TI

Why is this approach better than just looking for patterns like privilege escalation or reconnaissance, as we learned in SIEM school? Because TI data represents attacks that are happening right now in other networks. Attacks you won’t see (or know to look for) until it’s too late. In a security monitoring context, leveraging TI enables you to focus your validation/triage efforts, shorten the window between compromise and detection, and ultimately make better use of scarce resources which need to be directed at the most important current risk. That is what we’d call actionable intelligence.

—Mike Rothman

Monday, January 26, 2015

Firestarter: 2015 Trends

By Rich

Rich, Mike, and Adrian each pick a trend they expect to hammer us in 2015. Then we talk about it, probably too much. From threat intel to tokenization to SaaS security.

And oh, we did have to start with a dig on the Pats. Cheating? Super Bowl? Really? Come on now.

Watch or listen:


New Paper: Monitoring the Hybrid Cloud

By Mike Rothman

We are pleased to announce the availability of our Monitoring the Hybrid Cloud: Evolving to the CloudSOC paper. As the megatrends of cloud computing and mobility continue to play out in technology infrastructure, your security monitoring approach must evolve to factor in the lack of both visibility and control over the infrastructure. But senior management isn’t in the excuses business so you still need to provide the same level of diligence in protecting critical data. This paper looks at why the cloud is different, emerging use cases for hybrid cloud security monitoring, and some architectural ideas with migration plans to get there.

CloudSOC Cover

As always, we would like to thank IBM Security for licensing this content, enabling us to post it on the site for a good price.

Check out the landing page, or you can download the paper directly (PDF).

—Mike Rothman

Sunday, January 25, 2015

Applied Threat Intelligence: Defining TI

By Mike Rothman

As we looked back on our research output for the past two years it became clear that threat intelligence (TI) has been a topic of interest. We have written no fewer than six papers on it, and feel we have only scratched the surface of how TI can impact your security program.

So why the wide-ranging interest in TI? Because security practitioners have basically been failing to keep pace with adversaries for the past decade. It’s a sad story, but it is reality. Adversaries can (and do) launch new attacks using new techniques, and the broken negative security model of looking for attacks you have seen before consistently misses them. If your organization hasn’t seen the new attacks and updated your controls and monitors to look for the new patterns, you are out of luck.

What if you could see attacks without actually being attacked? What if you could benefit from the experience of higher-profile targets, learn what adversaries are trying against them, and then look for those patterns in your own environment? That would improve your odds of detecting and preventing attacks. It doesn’t put defenders on an even footing with attackers, but it certainly helps.

So what’s the catch? It’s easy to buy data but hard to make proper use of it. Knowing what attacks may be coming at you doesn’t help if your security operations functions cannot detect the patterns, block the attacks, or use the data to investigate possible compromise. Without those capabilities it’s just more useless data, and you already have plenty of that. As we discussed in detail in both Leveraging Threat Intelligence in Security Monitoring and Leveraging Threat Intelligence in Incident Response/Management, TI can only help if your security program evolves to take advantage of intelligence data.

As we wrote in the TI+SM paper:

One of the most compelling uses for threat intelligence is helping to detect attacks earlier. By looking for attack patterns identified via threat intelligence in your security monitoring and analytics processes, you can shorten the window between compromise and detection.

But TI is not just useful for security monitoring and analytics. You can leverage it in almost every aspect of your security program. Our new Applied Threat Intelligence series will briefly revisit how processes need to change (as discussed in those papers) and then focus on how to use threat intelligence to improve your ability to detect, prevent, and investigate attacks. Evolving your processes is great. Impacting your security posture is better. A lot better.

Defining Threat Intelligence

We cannot write about TI without acknowledging that, with a broad enough definition, pretty much any security data qualifies as threat intelligence. New technologies like anti-virus and intrusion detection (yes, that’s sarcasm, folks) have been driven by security research data since they emerged 10-15 years ago. Those DAT files you (still) send all over your network? Yup, that’s TI. The IPS rules and vulnerability scanner updates your products download? That’s all TI too.

Over the past couple years we have seen a number of new kinds of TI sources emerge, including IP reputation, Indicators of Compromise, command and control patterns, etc. There is a lot of data out there, that’s for sure. And that’s great because without this raw material you have nothing but what you see in your own environment.

So let’s throw some stuff against the wall to see what sticks. Here is a starter definition of threat intelligence:

Threat Intelligence is security data that provides the ability to prepare to detect, prevent, or investigate emerging attacks before your organization is attacked.

That definition is intentionally quite broad because we don’t want to exclude interesting security data. Notice the definition doesn’t restrict TI to external data either, although in most cases TI is externally sourced. Organizations with very advanced security programs can do proactive research on potential adversaries and develop proprietary intelligence to identify likely attack vectors and techniques, but most organizations rely on third-party data sources to make internal tools and processes more effective. That’s what leveraging threat intelligence is all about.

Adversary Analysis

So who is most likely to attack? That’s a good start for your threat intelligence process, because the attacks you will see vary greatly based on the attacker’s mission, and their assessment of the easiest and most effective way to compromise your environment.

  • Evaluate the mission: You need to start by learning what’s important in your environment, so you can identify interesting targets. They usually break down into a few discrete categories – including intellectual property, protected customer data, and business operations information.
  • Profile the adversary: To defend yourself you need to know not only what adversaries are likely to look for, but what kinds of tactics various types of attackers typically use. So figure out which categories of attacker you are likely to face. Types include unsophisticated (using widely available tools), organized crime, competitors, and state-sponsored. Each class has a different range of capabilities.
  • Identify likely attack scenarios: Based on the adversary’s probable mission and typical tactics, put your attacker hat on to figure out which path you would most likely take to achieve it. At this point the attack has already taken place (or is still in progress) and you are trying to assess and contain the damage. Hopefully investigating your proposed paths will prove or disprove your hypothesis.

Keep in mind that you don’t need to be exactly right about the scenario. You need to make assumptions about what the attacker has done, and you cannot predict their actions perfectly. The objective is to get a head start on response, narrowing down investigation by focusing on specific devices and attacks. Nor do you need a 200-page dossier on each adversary – instead focus on information needed to understand the attacker and what they are likely to do.

Collecting Data

Next start to gather data which will help you identify/detect the activity of these potential adversaries in your environment. You can get effective threat intelligence from a number of different sources. We divide security monitoring feeds into five high-level categories:

  • Compromised devices: This feed provides external notification that a device is suspiciously communicating with known bad sites or participating in botnet-like activities. Services are emerging to mine large volumes of Internet traffic to identify such devices.
  • Malware indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices. This enables you to define both technical and behavioral indicators to search for within your environment, as Malware Analysis Quant described in gory detail.
  • IP reputation: The most common reputation data is based on IP addresses and provides a dynamic list of known bad and/or suspicious addresses. IP reputation has evolved since its introduction, now featuring scores to assess the relative maliciousness of each address. Reputation services may also factor in additional context such as Tor nodes & anonymous proxies, geo-location, and device ID, to further refine reputation.
  • Command and Control networks: One specialized type of reputation assessment which is often packaged as a separate feed is intelligence on Command and Control (C&C) networks. These feeds track global C&C traffic to pinpoint malware originators, botnet controllers, and other IP addresses and sites you should watch for as you monitor your environment.
  • Phishing messages: Current advanced attacks tend to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically find email the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and their tactics.
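
Because these five categories arrive in different shapes, a common first step is normalizing them into one indicator schema so downstream matching is uniform. The feed and field names below are hypothetical:

```python
def normalize(feed_name: str, entry: dict) -> dict:
    """Map heterogeneous feed entries into one indicator schema.
    Feed names and fields are illustrative, not real vendor formats."""
    if feed_name == "ip_reputation":
        return {"kind": "ip", "value": entry["address"], "source": feed_name}
    if feed_name == "c2_networks":
        return {"kind": "ip", "value": entry["controller"], "source": feed_name}
    if feed_name == "malware_indicators":
        return {"kind": "hash", "value": entry["sha256"], "source": feed_name}
    raise ValueError(f"unknown feed: {feed_name}")

# Two different feeds, one common shape on the way out.
print(normalize("ip_reputation", {"address": "203.0.113.7"}))
print(normalize("c2_networks", {"controller": "198.51.100.9"}))
```

Keeping the `source` field preserves the context (reputation hit vs. known C&C controller) that matters when prioritizing a match.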

What Has Changed

As you can see, we have had many of these data sources for years. So why are we talking about TI as a separate function? Because the increasing sophistication of attackers has driven change, which means you need to leverage more and better data to keep pace.

We started seeing more advanced security organizations staffing up their own threat intelligence groups a couple years ago. They are tasked with understanding the organization’s attack surface, and figuring out what’s at risk and most likely to be attacked. These folks basically provide context for which of the countless threats out there actually need to be dealt with; and what needs to be done to prevent, detect, and/or investigate potential attacks. These organizations need data and have been willing to pay for it, which created a new market for security data.

Another large change in the threat intelligence landscape has been the emergence of standards, specifically STIX and TAXII, which enabled quicker and better integration of TI into security processes. STIX provides a common data format for the interchange of intelligence, and TAXII the transport mechanism and protocols to move it between originators and consumers of the data. Without these standards organizations needed to build custom integrations with all their active controls and security monitors, which was ponderous and didn't scale.

So in this case standards have been a very good thing for security.
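
To illustrate why a common format matters, the sketch below hand-builds a simplified, STIX-style indicator as JSON. This is only to show the shape of a standardized exchange document; real STIX content follows a formal schema, TAXII handles the transport, and the field values here (ID, timestamps, pattern) are illustrative.

```python
import json

# A simplified, STIX-style indicator. Real STIX has a formal schema; this
# hand-built dictionary just shows the shape of a standardized TI document.
indicator = {
    "type": "indicator",
    "id": "indicator--00000000-0000-4000-8000-000000000001",  # illustrative ID
    "created": "2015-01-21T00:00:00Z",
    "name": "Known C&C IP address",
    "pattern": "[ipv4-addr:value = '198.51.100.7']",
    "labels": ["malicious-activity"],
}

serialized = json.dumps(indicator, indent=2)
print(serialized)

# Any consumer that speaks the same format can load the indicator without
# a custom per-vendor integration -- which is the whole point of a standard.
restored = json.loads(serialized)
print(restored["pattern"])
```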

Addressing the Challenges

You just hit the EZ Button, gather some threat intelligence, and find the attackers in a hot minute, leaving plenty of time for golf. That sounds awesome, right? Okay, maybe it doesn’t work quite like that. Threat intelligence is an emerging capability within security programs. So we (as an industry) need to overcome a few challenges to operationalize this approach:

  • Aggregate the data: Where do you collect the intelligence? You already have systems that can and should automatically integrate intelligence, and use it within rules or an analytics engine. The more automation the better, so resources can focus on preventing attacks or figuring out what happened.
  • Analyze the data: How do you know what’s important within the massive quantity of data at your disposal? You need to tune your intelligence feeds and refine rules in your controls and monitors over time. As you leverage intelligence in your security program, you get a feel for what works and what isn’t so useful.
  • Actionable data: This takes TI to the next level, with tools automatically updating controls and searching your environment based on threat intelligence feeds, potentially blocking attacks and/or identifying attack indicators before the attacker exfiltrates your data. Existing tools such as firewalls, endpoint security, and SIEM can and should leverage threat intelligence. You will also want your forensics tools to play along, with the ability to leverage external intelligence.
  • False positives/false flags: Unfortunately threat intelligence is still more art than science. See if your provider can prioritize or rank alerts, so you can use the most urgent intelligence earlier and more extensively. Another aspect of threat intelligence to beware of is disinformation. Many adversaries shift tactics, borrowing from other adversaries to confuse you. That is another reason not to simply profile an adversary, but to cross-reference with other information to make sure that adversary makes sense in your context.
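
Pulled together, the aggregate/analyze/act loop described in these bullets might look like the sketch below, which merges overlapping feeds, drops entries below a confidence cutoff to cut false positives, and emits block rules. The feed contents, confidence scores, and rule syntax are all assumptions for illustration, not a real product's format.

```python
# Sketch of the aggregate -> analyze -> act loop. Feed data, confidence
# scores, and the firewall-rule syntax are illustrative assumptions.

def aggregate(*feeds):
    """Merge feeds of (indicator, confidence), keeping the highest confidence."""
    merged = {}
    for feed in feeds:
        for indicator, confidence in feed:
            merged[indicator] = max(confidence, merged.get(indicator, 0))
    return merged

def analyze(merged, cutoff=60):
    """Drop low-confidence indicators to reduce false positives."""
    return {ind: conf for ind, conf in merged.items() if conf >= cutoff}

def act(indicators):
    """Turn vetted indicators into (hypothetical) block rules."""
    return ["deny ip from any to %s" % ip for ip in sorted(indicators)]

# Two overlapping feeds that disagree on confidence (example data)
feed_a = [("198.51.100.7", 90), ("203.0.113.42", 30)]
feed_b = [("198.51.100.7", 70), ("192.0.2.99", 80)]

rules = act(analyze(aggregate(feed_a, feed_b)))
print(rules)
# -> ['deny ip from any to 192.0.2.99', 'deny ip from any to 198.51.100.7']
```

Note that the low-confidence indicator (203.0.113.42) never reaches the firewall, which is exactly the tuning step the "Analyze the data" bullet describes.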

Now you have a decent idea what we mean by threat intelligence, so in the rest of this series we will focus on how TI can be used effectively in common use cases. These include security monitoring/alerting, incident response/management, and active security controls (both network and endpoint). So stay tuned – we will put this series on the fast track and post most of the research in short order.

—Mike Rothman

Friday, January 23, 2015

Summary: Grind on

By Rich

Rich here.

Last weekend I ran a local half-marathon. It wasn't my first, but I managed to cut 11 minutes off my time and set PRs (Personal Records, for you couch potatoes) for both the half and a 10K. I didn't really expect either result, especially since I was out of running for nearly a month due to a random foot injury (although I kept biking).

My times have been improving so much lately that I no longer have a good sense of my race paces. Especially because I have only run one 10K in the past 3 years that didn't involve pushing about 100 lbs of kids in a jog stroller.

This isn’t bragging – I’m still pretty slow compared to ‘real’ runners. I haven’t even run a marathon yet. These improvements are all personal – not to compare myself to others.

I have a weird relationship with running (and swimming/biking). I am most definitely not a natural endurance athlete. I even have the 23andMe genetic testing results to prove it! I’ve been a sprinter my entire life. For you football fans, I could pop off a 4.5 40 in high school (but weighed 135, limiting my career). In lifting, martial arts, and other sports I always had a killer power to weight ratio but lacked endurance for the later rounds.

While I have never been a good distance runner, running has always been a part of my life. Mostly to improve my conditioning for other sports, or because it was required for NROTC or other jobs. I have always had running shoes in the closet, have always gone through a pair a year, and have been in the occasional race pretty much forever. I would even keep the occasional running log or subscription to Runner's World, but I always considered a marathon beyond my capabilities, and lived with mediocre times and improvements. (I swear I read Runner's World for the articles, not the pictures of sports models in tight clothes.)

Heck, I have even had a couple of triathlon coaches over the years, and made honest attempts to improve. And I've raced: multiple tris, rides, and runs every year.

But then work, life, or travel would interfere. I’d stick to a plan for a bit, get a little better, and even got up to completing a half-marathon without being totally embarrassed. Eventually, always, something would break the habit.

That’s the difference now. I am not getting faster because I’m getting younger. I’m getting faster because I stick to the plan, or change the plan, and just grind it out no matter what. Injured, tired, distracted, whatever… I still work out.

This is the longest continuous (running) training block I have ever managed to sustain. It's constant, incremental improvement. Sure, I train smart. I mix in the right workouts. Take the right rest days and adjust to move around injuries. But I. Keep. Moving. Forward. And I keep breaking every PR I've ever set, and am now faster than I was in my 20s for any distance over a mile.

Maybe it’s age. Maybe, despite the Legos and superhero figures on my desk I am achieving s modicum of maturity. Because I use the same philosophy in my work. Learning to program again? Make sure I code nearly every day, even if I’m wiped or don’t have the time. Writing a big paper that isn’t exciting? Write every day; grind it out. Keeping up my security knowledge? Research something new every day; even the boring stuff.

Now my life isn’t full of pain and things I hate. Quite the contrary – I may be happier than I’ve ever been. But part of that is learning to relish the grind. To know that the work pays off, even those times it isn’t as fun. Be it running, writing, or security, it always pays off. And, for some of those big races, it means pushing through real pain knowing the endorphins at the end are totally worth it.

That, and the post-race beers. Hell, even Michelob Ultra isn’t too bad after 13 miles. Runner’s high and all.

Now I need to go run a race with Mike. He’s absolutely insane for taking up running at his age (dude is ancient). Maybe we can go do the Beer Mile together. That’s the one even Lance bailed on after one lap.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Mike: Firestarter: Full Toddler – The disclosure battles heat up (again) and we wonder when someone is going to change Google’s diaper…

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts


Wednesday, January 21, 2015

Incite 1/21/2015: Making the Habit

By Mike Rothman

Over halfway through January (already!), how are those New Year’s resolutions going? Did you want to lose some weight? Maybe exercise a bit more? Maybe drink less, or is that just me? Or have some more fun? Whatever you wanted to do, how is that going?

If you are like most people, the resolutions won't make it out of January. It's not for lack of desire, as folks who make resolutions really want to achieve the outcomes. In many cases the effort is there initially. You get up and run or hit the gym. You decline dessert. You sit with the calendar and plan some cool activities.

Good habits are hard to break too...

Then life. That’s right, things are busy and getting busier. You have more to do and less to do it with. The family demands time (as they should) and the deadlines keep piling up. Travel kicks back in and the cycle starts over again. So you sleep through the alarm a few days. Then every day. The chocolate lava cake looks so good, so you have one. You’ll get back on the wagon tomorrow, right?

And then it’s December and you start the cycle over. That doesn’t work very well. So how can you change it? What is the secret to making a habit? There is no secret. Not for me, anyway. It’s about routine. Pure and simple. I need to get into a routine and then the habits just happen.

For instance I started running last summer. So 3 days a week I got up early and ran. No pomp. No circumstance. Just get up and run. Now I get up and freeze my ass off some mornings, but I still run. It’s a habit. Same process was used when I started my meditation practice a few years back. I chose not to make the time during the day because I got mired in work stuff. So I got up early. Like really early. I’m up at 5am to get my meditation done, then I get the kids ready for school, then I run or do yoga. I have gotten a lot done by 8am.

That’s what I do. It has become a routine. And a routine enables you to form a habit. Am I perfect? Of course not, and I don’t fret when I decide to sleep in. Or when I don’t meditate. Or if I’m a bit sore and skip my run. I don’t judge myself. I let it go.

What I don’t do is skip two days. Just as it was very hard to form my habits of both physical and mental practice, it is all too easy to form new less productive habits. Like not running or not meditating. That’s why I don’t miss two days in a row. If I don’t break the routine I don’t break the habit.

And these are habits I don’t want to break.


Photo credit: “Good, Bad Habits” originally uploaded by Celestine Chua

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Network Security Gateway Evolution

Monitoring the Hybrid Cloud: Evolving to the CloudSOC

Security and Privacy on the Encrypted Network

Newly Published Papers

Incite 4 U

  1. Doing attribution right… Marcus kills it in this post on why attribution is hard. You need to have enough evidence, come up with a feasible motive, corroborate the data with other external data, and build a timeline to understand the attack. But the post gets interesting when Marcus discusses how identifying an attacker based upon TTPs might not work very well. Attackers can fairly easily copy another group’s TTPs to blame them. I think attribution (at least an attempt) can be productive, especially as part of adversary analysis. But understand it is likely unreliable; if you make life and death decisions on this data, I don’t expect it to end well. – MR

  2. The crypto wars rise again: Many of you have seen this coming, but in case you haven't we are hitting the first bump on a rocky road that could dead-end in a massive canyon of pain. Encryption has become a cornerstone of information security, used for everything from secure payments to secure communications. The problem is that the same tools used to keep bad guys out also keep the government out. Well, that's only a problem because politicians seem to gain most of their technical knowledge from watching CSI: Cyber. In the past couple weeks both Prime Minister Cameron in the UK and President Obama have made public statements that law enforcement should have access to encrypted content. The problem is that there is no technically feasible way to provide 'authorized' access without leaving encryption technology open to compromise. And since citizens in less… open… countries use the same tech, this could surrender any pretense of free speech in those areas as well. The next few years will be messy, and could very well have consequences even for average security Joes. There isn't much we can do, but we sure need to pay attention, especially those of you on the vendor side. I know, not the funnest Incite of the week, but… sigh. – RM

  3. Nobody cares: If my credit card number is stolen I don’t bear the costs of the fraud and I am usually issued a new card within days to replace the old one. Lord knows I need to keep making card purchases, and nothing will stand in the way of commerce! So other than having to update the dozen web sites that require autopay why would I care about my card being stolen? The only answer I can discern is neurosis. Though apparently I am not alone – Brian Krebs’ How Was Your Credit Card Stolen? discusses the most common ways these numbers are harvested. My Boy Scout sense of fair play has prompted me in the past to put in the work to understand the fraud chain – twice – only to face subsequent frustration when neither local law enforcement nor the card brands cared. So, holiday shoppers, checking your credit statements is about all you can do to help. – AL

  4. More CISO perspective: I have been hammering on CISO-level topics for the past few weeks because folks still want to climb the ladder to get the big title (and paycheck). That’s fine, so I’ll keep linking to tips from folks in the field about how to sit in the top security seat. And then I’ll pimp the PragmaticCSO. Gary Hayslip provides some decent perspective on his 5-step process for the CISO job. It starts with “walk about” and then goes through inventory/assessment, planning, and communication. Seems pretty pragmatic to me. I like the specific goal of walking around for a certain amount of time every day. That’s how you keep the pulse of the troops. The requirements of the CISO job are pretty straightforward. Executing on them successfully? That’s a totally different ballgame. – MR

  5. Soft core payments: Google is reportedly looking to buy Softcard, presumably in an effort to kickstart their stalled mobile payment efforts. Google found that “If you build it they will come” only applies to bad Hollywood scripts – anyone can write a mobile ‘digital wallet’ app, but without cooperation from the rest of the ecosystem you won’t get far. The banks, payment processors, and (just as important) mobile carriers all have a stake in mobile payments, and will get their pound of flesh. For years the carriers have been unwilling to allow others to use the embedded “secure element” on phones for payments unless they got a transaction fee, which meant either pay the carrier tax or go home. Details are slim but Softcard is a carrier-owned business so apparently Google would get a carrier-approved interface to devices and the business relationships needed to make their payment app relevant again. – AL

  6. Bait bike: I’m a cyclist. Bicycle theft is a pretty big business, especially in cities and college towns. In the past few years some police departments have started planting GPS-enabled bait bikes in areas to catch the bad guys. They have done the same thing with cars, but it’s probably easier to plant a bike. That’s why I’m amused by the hackers for hire site. Need someone to break into your ex’s Facebook account? Steal that customer list? Just come on down to Billy Bob’s Trusted Hackers! Send us what’s left of your Bitcoin and we’ll hook you up with the most professional script kiddie in our network! Look, this probably isn’t a bait site, but now that it’s in the New York Times, what are the odds the FBI or Interpol isn’t already scanning the database, tracking clients, and prepping cases? We all know how this story is going to end: with jail time. – MR

—Mike Rothman

Monday, January 19, 2015

New Paper: Security Best Practices for Amazon Web Services

By Rich

I could probably write a book on AWS security at this point, except I don’t have the time, and most of you don’t have time to read it. So I wrote a concise paper on the key essentials to get you started – including the top four things to do in the first five minutes with a new AWS account.

Here is an excerpt:

Amazon Web Services is one of the most secure public cloud platforms available, with deep datacenter security and many user-accessible security features. Building your own secure services on AWS requires properly using what AWS offers, and adding additional controls to fill the gaps.

Never forget that you are still responsible for everything you deploy on top of AWS, and for properly configuring AWS security features. AWS is fundamentally different from a virtual datacenter (private cloud), and understanding these differences is key for effective cloud security. This paper covers the foundational best practices to get you started and help focus your efforts, but these are just the beginning of a comprehensive cloud security strategy.

The paper has a [permanent home](https://securosis.com/research/publication/security-best-practices-for-amazon-web-services).

Or you can directly download the PDF.

I would especially like to thank AlienVault for licensing this paper. Remember companies that license our content don’t get to influence or steer it (outside of submitting comments like anyone else), but their support means we get to release it all for free.


Firestarter: Full Toddler

By Rich

Full Toddler

Yes, people, the disclosure debate is still alive and kicking. But now it is basically a pissing match between two of the largest tech companies. With Google setting rigid deadlines, and Microsoft stuck on their rigid schedule, who will win? Grab the popcorn as we talk about egos, internal inconsistencies, and why putting the user first is so damn hard.

Watch or listen below:


Friday, January 16, 2015

Summary: No Surprises

By Rich

Rich here.

First a quick note. I will be giving a webcast on managing SaaS security later this month. I am about to start writing more on the Cloud Security Gateway market and new techniques for dealing with SaaS.

I planned to write something irreverent in this week’s Summary (like my favorite films), but it has been an odd week in the security world. I expect the consequences to play out over the next decade. I should probably write this up as a dedicated post, but my thoughts are shifting around so much that I am not sure my ideas are ready to stand on their own.

Before I go into this, please keep in mind that the security ‘world’ is a collection of different groups. Tribes might be a better word. But across all subgroups we tend to be skeptical and critical. That is quite healthy, considering what we do, but can easily turn negative and self-defeating.

This is especially true when we engage with society at large. We are, on the whole, the pain-in-the-ass cousin who shows up at the holidays and delights in challenging and debating the rest of the family long past the point where anyone else cares. Yeah, we get it, you caught me in a logical fallacy because I like my new TV but bitched at you for not recycling your beer cans. You win. Now pass the stuffing and STFU.

Also factor in our inherent bias against anyone who does things others don’t understand. (Hat tip to Rob Graham for first introducing me to this concept). We have a long lineage that looks something like heretic > witch > egghead > nerd > geek > hacker. No, not everyone reading this is a hacker, but society at large cannot really differentiate between specific levels of technical wizardry. This is especially true for those of you who play with offensive security, no matter how positive your contributions.

Back to the main story, which is shorter than all this preamble. This week the White House proposed some updates to our computer security laws. Some good, some bad. The Twitter security echo chamber exploded a bit, with much hand-wringing over how this could lead to bad legal consequences – not only for anyone working legitimately in offensive security; it could also create all sorts of additional legal complexities with chilling effects.

There are actually a bunch of proposals circulating, which would affect not only cybersecurity but general Internet usage. From the UK wanting to ban encryption, to mandating DNSSEC, to the FBI wanting to ban effective encryption, to… well, everyone wanting to ban encryption, file sharing, and… stuff.

Many in the security world seem to feel we should have some say over these laws and policies. But we have mostly seen vendors lobby to have their products mandated (and then shrug when people using them get hacked), professional groups pushing to have their training or certifications mandated, and the occasional researcher treated like a dancing monkey for the cameras. And political leaders probably don’t see much distinction between any of these and the big Internet protests that their Hollywood funders all tell them are just criminals who want to watch movies free.

We have mostly done this to ourselves. We are fiercely independent, so it isn’t like we speak with a single voice. We can’t even decide what constitutes a “security professional”. Then we keep shooting ourselves in the foot by demanding evidence from law enforcement and intelligence agencies on things like the Sony hack. And, er, telling the FBI they are wrong rarely works out well.

I am not telling anyone not to do or say what they want. Just keep in mind how the world views you (as witches), and how much technology just scares people, no matter how much they love their iPhones. And if you want to affect politics you need to play politics. Twitter ain’t gonna cut it.

Seriously, no one likes that smarty-pants cousin (or in-law, in my case). And if any lobbyists are reading this, please fix the Kinderegg ban first, then get started on defending encryption.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Mike Rothman: Your Risk Isn’t My Risk. It is always important to consider likelihood when looking at new attacks. Rich puts the latest in context.
  • Rich: Incite 1/14/2015: Facing the Fear. Because that was my only other choice. I mean, it’s still a good post, but it isn’t like I had an option.

Other Securosis Posts

And now you see why I had to pick Mike’s post.

Favorite Outside Posts

  • Adrian Lane: The importance of deleting old stuff. Honestly, it’s not as valuable as you think, and it is likely to cause harm in the long run.
  • Mike Rothman: The Stunning Scale of AWS. I remember Rich mentioning some of these stats after he got back from the AWS conference in 2013. It is shocking to see this documented, and to understand that when trying to really scale something… commercial products just won’t cut it. Really interesting.
  • Rich: Encryption is Not the Enemy. Dennis lays it out nicely, not that I expect the latest round of crypto wars to end any time soon.

Research Reports and Presentations

Top News and Posts


Wednesday, January 14, 2015

Incite 1/14/2015: Facing the Fear

By Mike Rothman

Some folks just naturally push outside their comfort zones as a matter of course. I am one of them. Others only do things that are comfortable, which is fine if it works for them. I believe that while you are basically born with a certain risk tolerance, you can be taught to get comfortable with pushing past your comfort zone.

For example, kids who are generally shy will remain more comfortable holding up the wall at a social event, but can learn to approach people and get into the mix. It’s tough at first but you figure it out. There is always resistance the first few times you push a child beyond what they are comfortable with, and force them to try something they don’t think they can do. But I believe it needs to happen. It comes back to my general philosophy that limitations exist only in our minds, and you can move past those limitations once you learn to face your fear.

Faces of Fear

The twins’ elementary school does a drama production every year. XX1 was involved when she was that age, and XX2 was one of the featured performers last year. We knew that she’d be right there auditioning for the big role, and she’d likely get one of them (as she did). But with the Boy we weren’t sure. He did the hip hop performance class at camp so he’ll perform, but that’s a bit different than standing up and performing in front of your friends and classmates. Though last year he did comment on how many of his friends were in the show, and he liked that.

We were pleased when he said he wanted to try out. The Boss helped him put together both a monologue and a song to sing for the audition. He knew all the words, but when it came time to practice he froze up. He didn’t want to do it. He wanted to quit. That was no bueno in my book. He needed to try. If he didn’t get a part, so be it. But he wasn’t going to back out because he was scared. He needed to push through that fear. It’s okay to not get the outcome you hope for, but not to quit.

So we pushed him. There were lots of tears. And we pushed some more. A bit of feet stomping at that point. So we pushed again. He finally agreed to practice for us and then to audition after we wore him out. Sure, that was a little heavy-handed, but I’m okay with it because we decided he needed to at least try.

The end result? Yes, he got a part. I’m not sure how much he likes the process of getting ready for the show. We’ll see once he gets up on stage and performs for everyone whether it’s something he will want to do again. But whether he does it again doesn’t matter. He can always say he tried, even when he didn’t want to. That he didn’t let fear stop him from doing something. And that’s the most important lesson of all.


Photo credit: “Faces of fear!” originally uploaded by John Seb Barber

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Security Best Practices for Amazon Web Services

Network Security Gateway Evolution

Monitoring the Hybrid Cloud: Evolving to the CloudSOC

Security and Privacy on the Encrypted Network

Newly Published Papers

Incite 4 U

  1. Full discraposure: Google discovers a bug in a Microsoft product. Google has a strict 90-day policy to disclose, no matter what. Microsoft says, “Hey, we have a fix ready to go on Patch Tuesday, can we get a few extra days?” but Google releases anyway. I’m sorry, but who does that help? Space Rogue summed it up best; he has a long history in the disclosure debate. In his words, “The entire process has gotten out of hand. The number one goal here should be getting stuff fixed because getting stuff fixed helps protect the user, it helps defeat the bad guys and it helps make the world a better place.” Another great quote is: “And so the disclosure debate continues unabated for over a hundred years. With two of the giants in our industry acting like spoiled children we as security professionals must take the reigns from our supposed leaders and set a better example.” Marry me, Space Rogue. Marry me. – RM

  2. The impact of Sony in 2015? FUD! Okay, I am being a little facetious by saying the Sony breach will enable the security industrial complex to launch a new wave of Fear, Uncertainty, and Doubt at organizations in 2015. But it already has folks using tried and true tactics in an attempt to create urgency for whatever widget they are selling today. Ben Rothke is a little more constructive in his analysis for CSO. He makes some good points about the reality that improving security requires ongoing investment and that shiny security products/services are not a complete answer. The one I like best is “a good CISO is important; great security architects are critical.” Amen to that. We believe that as security increasingly gets embedded within the cloud and continuous deployment environments, the security architect will emerge as one of the most valued members of the team. So study up on your architecture, kids! – MR

  3. Making the effort: Gunnar has another really good post, challenging folks to think differently about security. It’s very popular to accept defeat because the odds are stacked against defenders. To mail it in because you will be pwned anyway. And that much is true. You can make progress, but only if you make the effort to improve. Always quick with good analogies, GP refers to how smog was reduced in Los Angeles by 98% over the past 50 years, which most thought was impossible 60 years ago. And how the Scandinavian countries don’t have airplane delays because of snow. They just don’t because they made the effort to figure out how to optimize their processes. I guess another way to put it is a quote I use frequently: “I’m not in the excuses business.” And neither is your senior management, so as Gunnar says: “There is a lot to do, can’t get started any sooner than right now. No such thing as bad winter weather, only opportunities to improve bad snow removal equipment, dysfunctional teams and processes.” Truth. – MR

  4. Free, as in crapware: I seem to have a ‘crap’ theme for my submissions this week. A couple of writers over at HowToGeek decided to go to CNET’s Downloads.com [no link, for obvious reasons and obviousness] to see what happens if they download and install the top 10 apps listed. Hilarity ensues. Spyware, ads, browser hijackers, and more… all from a site that claims its downloads are safe. I frequently see links to these sorts of sites when I search for an application. Sometimes search engines show these contaminated links before the software developer’s site. This is especially common when I look for anything more obscure or no longer maintained. I never download from those sites and I’m on a Mac, but this highlights the ridiculous dangers facing normal Windows users (including your employees). Needless to say, this is why I’m a fan of app stores for PCs, even the open ones (where stuff can still sneak through). I suspect Microsoft will need to move in that direction for the same reasons Apple did, and kill the economic model of bundling and installing backdoors. As long as I always still have the option to go outside the store, I am down with it. – RM

  5. You want a seat, Mr./Ms. CISO? Good luck. I wanted to dig into the archives a bit to mention research that confirms what many of you already know. CISOs are not considered players at the big table. ThreatTrack commissioned a study last summer and came away with some disturbing numbers. 74% of respondents said CISOs should not be part of the organization’s leadership team. 54% don’t think CISOs should be responsible for security purchasing. 28% say the CISO’s decisions negatively impacted financial health. Holy crap! It’s time for a reality check. This is clearly a failure to communicate with folks in senior management. And it needs to be fixed ASAP. It is not like we are going to see fewer attacks or breaches, so if these folks don’t understand what you do and why, that needs to be job #1. Or polishing up your resume will be job #2. – MR

—Mike Rothman