Wednesday, December 03, 2014

Incite 12/3/2014: Winding Down

By Mike Rothman

As I sit in yet another hotel, banging out yet another Incite, overlooking yet another city that isn’t home, this is a good time to look back on 2014 because this is my last scheduled trip for this year. It has been an interesting year. At this point the highs this year feel higher, and the lows lower. There were periods when I felt sick from the whiplash of ups and downs. That’s how life is sometimes. Of course my mindfulness practice helps me handle the turbulence with grace, and likely without much external indication of the inner gyrations.

But in 5 years how will I look back on 2014? I have no idea. I have tried not to worry about things like the far future. At that point, XX1 will be leaving for college, the twins will be driving, and I’ll probably have the same amount of gray hair. Sure, I will plan. But I won’t worry. I have been around long enough to know that my plans aren’t worth firing the synapses to devise them. In fact I don’t even write ‘plans’ down any more.

Start me up...

It is now December, when most of us start to wind down the year, turning our attention to the next. We are no different at Securosis. For the next couple weeks we will push to close out projects that have to get done in 2014 and start working with folks on Q1 activities. Maybe we will even get to take some time off over the holidays. Of course vacation has a rather different meaning when you work for yourself and really enjoy what you do. But I will slow down a bit.

My plan is to push through my handful of writing projects due over the next 2 weeks or so. I will continue to work through my strategy engagements. Then I will really start thinking about what 2015 looks like. Though I admit the slightly slower pace has given me the opportunity to be thankful for everything – certainly those higher highs, but also the lower lows. It's all part of the experience; I can let it make me crazy, or I can accept the bumps as part of the process.

I guess all we can do each year is try to grow from every experience and learn from the stuff that doesn’t go well. For better and worse, I learned a lot this year. So I am happy as I write this although I know happiness is fleeting – so I’ll enjoy the feeling while I can. And then I will get back to living in the moment – there really isn’t anything else.

–Mike

Photo credit: “wind-up dog” originally uploaded by istolethetv


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. Take an hour to check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Network Security Gateway Evolution

Monitoring the Hybrid Cloud: Evolving to the CloudSOC

Security and Privacy on the Encrypted Network

Newly Published Papers


Incite 4 U

  1. CISO in the clink… I love this headline: Can a CISO serve jail time? Duh, of course they can. If they deal meth out of the data center, they can certainly go to jail. Oh, can they be held accountable for breaches and negligence within their organization? Predictably, the answer is: it depends. If you are clearly negligent then all bets are off. But if you act in the best interests of the organization as you see them … it is hard to see how a CISO could be successfully prosecuted. That said, there is a chance, so you need to consult a lawyer before taking the job to understand where your liability begins and ends (based on your agreement), and then you can make an informed decision on whether to take the job. Or at least build some additional protection into your agreement. – MR

  2. Productivity Killer: Sometimes we need a reminder that security isn’t all about data breaches and DDoS. Sometimes something far, far worse happens. Just ask Sony Pictures. Last week employees showed up to work to find their entire infrastructure compromised and offline. Yep, down to some black hat hax0rs graphic taking over everyone’s computer screens, just like in… er… the movies. I don’t find any humor in this. Despite what Sony is doing to the Spider-Man franchise, they are just a company with people trying to get their jobs done, make a little scratch, and build products people will pay for. This isn’t as Earth-shattering as the completely destructive Saudi Aramco hack, but it seems pretty close. Destructive hacks and data breaches are not the same things, even though breaches and APTs get all the attention and need to be covered in the threat model. – RM

  3. Friends make the CISO: Far too many CISOs end up in the seat without proper training in what their real job is: coercion and persuasion. Not in a bad way, but the fact is that if a CISO cannot convince their peers to think about security, they cannot succeed. So I enjoyed a piece on securityintelligence.com describing the CISO’s best friends. The reality is that the CISO job isn’t a technical one – it is a people management job, and far too many folks go into it without understanding that. It doesn’t end well. – MR

  4. Understated: I have been reading Adam Shostack’s stuff since I started in security. He is known for offering well-reasoned opinion, devoid of hype and hyperbole, based on decades of hands-on experience. But sometimes that understated style sells short a couple of very important points, as in his recent post Threat Modeling at a Startup. Adam focused on the operational aspects but did not address why threat modeling is so important for startups. First, threat modeling has a pronounced impact at the earlier stages of platform development, while the foundation of an application is being designed and built. Second, it is one of the most cost-effective ways to improve security. Both facets are critical for startups, who need to get security right out of the blocks and don’t have a lot of money to burn. – AL

  5. You know the breach is bad when… you need to do a media blitz about hiring a well-known forensic shop to clean up the mess. Yup, the Sony Pictures folks had their damage control people make a big deal about hiring FireEye’s Mandiant group to clean up the mess of their breach. As Rich described above, the breach was pretty bad, but having to make a big deal about hiring forensic folks doesn’t instill confidence that anyone in-house knows what they are doing. But I guess that’s self-evident from two very high-profile breaches one after another. And to the executive who gave the green light to The Interview, it’s all good. Fortunately the North Koreans aren’t vindictive or anything… – MR

—Mike Rothman

Monday, December 01, 2014

Monitoring the Hybrid Cloud: Technical Considerations

By Adrian Lane

New platforms for hybrid cloud monitoring bring both new capabilities and new challenges. We have already discussed some differences between monitoring the different cloud models, and some of the different deployment options available. This post will dive into some technical considerations for these new hybrid platforms, highlighting potential benefits and issues for data security, privacy, scalability, security analytics, and data governance.

As cool as a ‘CloudSOC’ sounds, there are technical nuances which need to be factored into your decision and selection processes. There are also data privacy issues, because some types of information fall under compliance and jurisdictional regimes. Cloud computing and service providers can provide an opportunity to control infrastructure costs more effectively, but service model costs are calculated differently than for on-premise systems, so you need to understand the computing and storage characteristics of the SOC platform in detail to understand where you are spending money.

Let’s jump into some key areas where you need to focus.

Data Security

As soon as event data is moved out of one ‘cloud’ – say, Salesforce – into another, you need to consider the sensitivity of the data, which forces a decision on how to handle security. Using SSL or similar technology to secure the data in motion is the easy part – what to do with the data at rest, once it reaches the CloudSOC, is far more challenging.

You can get some hints from folks who have already grappled with this question: security monitoring providers. These services either build their own private clouds to accommodate and protect client data, or leverage yet another IaaS or PaaS cloud to provide the infrastructure to store the data. Many of you will find the financial and scalability advantages of storing cloud data in a cloud service more attractive than moving all that collected data back to an on-premise system.

Regardless of whether you build your own CloudSOC or use a managed service, a key part of your security strategy will be the Service Level Agreements (SLAs) you establish with your providers. These agreements specify the security controls implemented by the provider, and if something is not specified in that agreement the provider has no obligation to provide it. An SLA is a good place to start, but be wary of unspecified areas – those are where gaps are most likely to emerge.

Start by comparing what the provider does with what you do internally today. We recommend you ask questions and get clear answers on every topic you don’t understand, because once you execute the agreement you have no further leverage to negotiate. And if you are running your own, make sure you carefully plan out your cloud security model to take advantage of what your IaaS provider offers. You may decide some data is too sensitive to be stored in the cloud without obfuscation (encryption) or removal (typically redaction, tokenization, or masking).

Data Privacy and Jurisdiction

Over and above basic data security for logs and event data, some countries have strict laws about how Personally Identifiable Information (PII) may be collected and stored, and some even require that PII not leave its country of origin – even encrypted. If you do business in these countries your team likely already understands the regulations, but for a hybrid SOC deployment you also need to know the locations of your primary and backup cloud data centers, and their regional laws as well. This can be incredibly confusing – particularly when data protection laws conflict between countries.

Once you understand the requirements and where your cloud (including CloudSOC) providers are located, you can effectively determine which security controls you need. Once again data encryption addresses many legal requirements, and data masking and tokenization services can remove sensitive data without breaking your applications or impairing security analytics (a minimal sketch follows below). The key is to know where the data will be stored, so you can figure out the right mix of controls.
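To make the idea concrete, here is a minimal sketch of field-level masking before events leave for a CloudSOC. The field names and salt are hypothetical – real event schemas vary by source – and this only illustrates the pattern, not any vendor's implementation:

```python
import hashlib

# Hypothetical field names for illustration; real event schemas vary.
SENSITIVE_FIELDS = {"username", "source_ip", "account_id"}

def mask_event(event: dict, salt: str) -> dict:
    """Replace sensitive fields with salted-hash tokens before
    shipping the event to an external CloudSOC."""
    masked = {}
    for key, value in event.items():
        if key in SENSITIVE_FIELDS:
            token = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
            masked[key] = f"tok_{token}"
        else:
            masked[key] = value
    return masked

event = {"username": "jsmith", "source_ip": "10.1.2.3", "action": "login_failed"}
print(mask_event(event, salt="per-tenant-secret"))
```

Because the same input always yields the same token, analytics can still group and correlate events by user or address without ever seeing the original values.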

Automation and Scalability

If you have ever used Dropbox or Salesforce or Google Docs, you know how easy it is to store data in the cloud. When you move beyond SaaS to PaaS and IaaS, you will find it is just as easy to spin up whole clusters of new applications and servers with a few clicks. Security monitoring, deploying collectors, and setting up proxies for traffic filtering all likewise benefit from the cloud’s ease of use and agility. You can automate the deployment of collectors, agents, or other services; or agents can be embedded in the start-up process for new instances or technology stacks. Verification and discovery of services running in your cloud can be performed with a single API call. Automation is a hallmark of the cloud, so you can script pretty much anything you need – as the sketch below illustrates.
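As an illustration of that single-API-call discovery, here is a minimal sketch using AWS and the boto3 SDK (assumptions: an AWS environment, boto3 installed, and credentials configured in the environment; other providers expose equivalent calls):

```python
import boto3  # AWS SDK for Python; assumes credentials are configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# One API call enumerates every instance in the region -- the kind of
# service discovery that takes scanners and spreadsheets on-premise.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"],
              instance["State"]["Name"],
              instance.get("PrivateIpAddress", "-"))
```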

But getting started with basic collection is a long way from getting a CloudSOC into production. As you move to a production environment you will be constructing and refining initialization and configuration scripts to launch services, and defining templates which dictate when collectors or analytics instances are spun up or shut down via the magic of autoscaling. You will be writing custom code to call cloud APIs to collect events, and writing event filters if the API does not offer suitable options (a minimal example follows below). It is back to the future – a return to the early days of SIEM, when you spent as much time writing and tuning collectors as analyzing data.
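A minimal sketch of such a custom collector, again assuming AWS and boto3: one call to the CloudTrail API pulls recent console logins, with the filtering handled by a lookup attribute rather than hand-written parsing:

```python
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Pull the last hour of console logins -- a tiny custom 'collector'
# written against the provider API rather than a syslog feed.
resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "ConsoleLogin"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
)
for ev in resp["Events"]:
    print(ev["EventTime"], ev.get("Username", "unknown"))
```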

Archiving is also something you’ll need to define and implement. The cloud offers very granular control of which data gets moved from short-term to long-term storage, and when (see the sketch below). In the long run cloud models offer huge benefits for automation and on-demand scalability, but there are short-term setup and tuning costs to get a CloudSOC working the way you need. A managed CloudSOC service will do much of this for you, at additional cost.
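On AWS, for example, that granular short-term-to-long-term control is a bucket lifecycle policy. A minimal sketch, assuming boto3, a hypothetical archive bucket name, and illustrative retention periods:

```python
import boto3

s3 = boto3.client("s3")

# Hot events stay in S3 for 90 days, then transition to Glacier;
# delete after ~7 years to cap retention costs. All values illustrative.
s3.put_bucket_lifecycle_configuration(
    Bucket="cloudsoc-event-archive",  # hypothetical bucket name
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-security-events",
        "Filter": {"Prefix": "events/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 2555},
    }]},
)
```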

Other Considerations

  • Management Plane: The management plane for cloud services is a double-edged sword; IT admins now have the power to automate all services, using simple commands, from a single dashboard. It also means you are completely screwed if an attacker gains access to the console. The power provided by the management console and management APIs requires greater attention and diligence to ensure proper authentication and authorization than on-premise systems. That means focusing far more attention on administrative rights, administrative entitlements, and monitoring where and by whom the console is accessed – via both web console and API calls. Finally, we recommend a heavy dose of threat modeling to ensure your policies are tuned to how an attacker could misuse cloud resources to access your cloud management environment.

  • Analytics: To better detect malware and application misuse, companies are looking to leverage cloud-based big data computing environments for security analytics. This is a highly specialized field, with tools and techniques evolving rapidly. Cloud infrastructure providers offer sophisticated analytical tools and infrastructure as part of their core offerings, helping accelerate development of these capabilities for security. The issue is generally not lack of infrastructure, but more likely a personnel skills gap. Building and running security analytics requires both facility with “big data” architecture and security analytics know-how (practitioners are typically called “data scientists”) to mine through it. Regardless of how little it costs to create a data warehouse today, only a handful of companies actually employ people capable of standing up a sophisticated security analytics environment. So we see more companies engaging third party services – sometimes even in parallel with their own efforts – to perform additional analysis and triage security events across cloud and on-premise systems.

  • Pricing Model: To build your own CloudSOC you need to reconsider the economic model for cloud services when you plan how to move, store, and process data. Data travels through your data center for free. You ran the cable and installed the switches; those sunk costs enable you to use all your available bandwidth without additional costs. PaaS and IaaS providers offer different pricing tiers for different types of network connectivity. Some offer free network services for ‘local’ traffic (within an availability zone), but charge for data movement between data centers. If you leverage cloud messaging services for added reliability and event processing in your SOC, you will pay a tiny fraction of a penny per message. But even that low per-message price adds up across the millions or billions of events you capture and process every year. Different data storage tiers are available, where performance and reliability rise along with costs. The good news is that cloud providers offer much better metrics and fairly clear pricing on available services. The bad news is that you need to figure out how much of some abstract service you have never used is needed so you can model the costs of your cloud environment. Cloud economic models are fundamentally different, but your need to model costs is the same – a rough illustration of the math follows this list.
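To show how a tiny per-message price compounds, here is a back-of-the-envelope model. The price and volume are assumptions for illustration only, not any provider's actual rates:

```python
# Illustrative only -- message pricing varies by provider and tier.
price_per_million = 0.50       # assumed: $0.50 per million messages
events_per_day = 50_000_000    # assumed: a mid-size SOC's daily event volume

annual_events = events_per_day * 365
annual_cost = annual_events / 1_000_000 * price_per_million
print(f"{annual_events:,} events/year -> ${annual_cost:,.0f}/year")
# 18,250,000,000 events/year -> $9,125/year
```

Cheap per message, but the same exercise for storage tiers, inter-region transfer, and analytics compute is what actually determines your CloudSOC bill.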

Our next post will detail how to get from A to B: The Migration Process to Monitoring the Hybrid Cloud. We will list the distinct phases of collecting and aggregating cloud data, compare partial against full migration to a CloudSOC, and offer specific advice on what to do and what to avoid.

—Adrian Lane

Tuesday, November 25, 2014

Monitoring the Hybrid Cloud: Solution Architectures

By Adrian Lane

The good old days: Monitoring employees on company-owned PCs, accessing the company data center across corporate networks. You knew where everything was, and who was using it. And the company owned it all, so you could pretty much dictate where and how you performed security monitoring. With cloud and mobile? Not so much.

To take advantage of cloud computing you will need to embrace new approaches to collecting event data if you hope to continue security monitoring. The sources, and the information they contain, are different. Equally important – although initially more subtle – is how to deploy monitoring services. Deployment architecture is critical to deploying and scaling any Security Operations Center, defining how you manage security monitoring infrastructure and what event data you can capture. Furthermore, how you deploy the SOC platform impacts performance and data management. There are a variety of different architectures, intended to meet the use cases outlined in our last post. So now we can focus on alternative ways to deploy collectors in the cloud, and the possibility of using a cloud security gateway as a monitoring point. Then we will take a look at the basic cloud deployment models for a SOC architected to monitor the hybrid cloud, focusing on how to manage pools of event data coming from distributed environments – both inside and outside the organization.

Data collection strategies

  • API: Automated, elastic, and self-service are all intrinsic characteristics for cloud computing. Most cloud service providers offer a management dashboard for convenience (and unsophisticated users), but advanced cloud features are typically exposed only via scripts and programs. Application Programming Interfaces (APIs) are the primary interfaces to cloud services; they are essential for configuring a cloud environment, configuring and activating monitoring, and gathering data. These APIs can be called from any program or service, running either on-premise or within a cloud environment. So APIs are the cloud equivalent to platform agents, providing many of the same capabilities in the cloud where a ‘platform’ becomes a virtualized abstraction and a traditional agent wouldn’t really work. API calls return data in a variety of ways, including the familiar syslog format, JSON files, and even various formats specific to different cloud providers. Regardless, aggregating data returned by API calls is a new key source of information for monitoring hybrid clouds.

  • Cloud Gateways: Hybrid cloud monitoring often hinges on a gateway – typically an appliance deployed at the ‘edge’ of the network to collect events. Leveraging the existing infrastructure for data management and SOC interfaces, this approach requires all cloud usage to first be authenticated to the cloud gateway as a choke point; after inspection, traffic is passed on to the appropriate cloud service. The resulting events are then passed to event collection services, comparable to on-premise infrastructure. This enables tight integration with existing security operations and monitoring platforms, and the initial authentication allows all resource requests to be tied to specific user credentials.

  • Cloud 2 Cloud: A newer option is to have one cloud service – in this case a monitoring service – act as a proxy to another cloud service; tapping into user requests and parsing out relevant data, metadata, and application calls. As with a managed email security service, traffic passes through a cloud provider which parses incoming requests before they are forwarded to internal or cloud applications. This model can incorporate mobile devices and events – which otherwise never touch on-premise networks – by passing their traffic through an inspection point before they reach cloud service providers such as Salesforce and Microsoft Azure. This enables the SOC to provide real-time event analysis and alert on policy violations, with collected events forwarded to the SOC (either on-premise or in the cloud) for storage. In some cases by proxying traffic these services can also add additional security – such as checks against on-premise identity stores, to ensure employees are still employed before granting access to cloud resources.

  • App Telemetry: Like cloud providers, mobile carriers, mobile OS providers, and handset manufacturers don’t provide much in the way of logging capabilities. Mobile platforms are intended to be secured from outsiders and not leak information between apps. But we are beginning to see mobile apps developed specifically for corporate use, as well as company-specific mobile app containers on devices, which send basic telemetry back to the corporate customer to provide visibility into device activity. Some telemetry feeds include basic data about the device, such as jailbreak detection, while others append user ‘fingerprints’ to authorize requests for remote application access. These capabilities are compiled into individual mobile apps or embedded into app containers which protect corporate apps and data. This capability is very new, and will eventually help to detect fraud and misuse on mobile endpoints. (A minimal sketch of such a telemetry event appears after this list.)

  • Agents: You are highly unlikely to deploy agentry in SaaS or PaaS clouds; but there are cases where agents have an important role to play in hybrid clouds, private clouds, and Infrastructure as a Service (IaaS) clouds – generally when you control the infrastructure. Because network architecture is virtualized in most clouds, agents offer a way to collect events and configuration information when traditional visibility and taps are unavailable. Agents also call out to cloud APIs to check application deployment.

  • Supplementary Services: Cloud SOCs often rely on third-party intelligence feeds to correlate hostile acts or actors attacking other customers, helping you identify and block attempts to abuse your systems. These are almost always cloud-based services that provide intelligence, malware analysis, or policies based on analysis of data from a broad range of sites, in order to detect unwanted behavior patterns. This type of threat intelligence supplements hybrid SOCs and helps organizations detect potential attacks faster, but it is not itself a SOC platform. You can refer to our other threat intelligence papers to dig deeper into this topic. (link to threat intel research)
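As promised above, here is a minimal sketch of what an app telemetry event might look like in transit. The payload fields and endpoint are hypothetical – telemetry formats are vendor-specific – and the sketch assumes the third-party Python requests library:

```python
import json

import requests  # third-party HTTP library

# Hypothetical payload and endpoint, purely to illustrate the shape
# of device telemetry sent back to the corporate customer.
telemetry = {
    "device_id": "a1b2c3d4",
    "app": "corp-crm-container",
    "jailbroken": False,
    "os_version": "iOS 8.1",
    "event": "remote_access_request",
    "user_fingerprint": "fp_9f8e7d",
}
requests.post("https://telemetry.example.com/v1/events",
              data=json.dumps(telemetry),
              headers={"Content-Type": "application/json"},
              timeout=5)
```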

Deployment Strategies

The following are all common ways to deploy event collectors, monitoring systems, and operations centers to support security monitoring:

  • On-premise: We will forgo a detailed explanation of on-premise SOCs because most of you are already familiar with this model, and we have written extensively on this topic. In general, the infrastructure that provides the ability to monitor a hybrid cloud remains the same. The most significant change is the inclusion of remote cloud data, mobile events, and configuration data, along with monitoring policies designed to digest remote events. Be prepared for significant change – cloud and mobile event data formats vary, and typically include slightly different information from one source to the next. Remember all your work a decade ago to get connectors to properly parse security event data? You will be doing that again until a standard format emerges. You will also need a new round of tuning detection rules – activities that look acceptable for internal users and systems can be malicious when coming from remote locations or cloud services.

  • Hybrid: A hybrid SOC is any deployment model where some analysis is done in-house and some is performed remotely in the cloud. The remote portion could be offloaded to a monitoring service vendor, as described under “Cloud 2 Cloud” above, or perhaps preliminary “Level One” analysis is performed by the managed services team, with advanced analysis and forensics handled by internal resources. Here you continue to run and operate the existing SIEM with all its event collectors, and send a subset of events to an external provider for the heavy lifting of event analysis and forensics. Alternatively you could use an external provider to directly aggregate and analyze remote/cloud activity, and send filtered alerts to the on-premise SOC. A hybrid SOC increases agility for addressing new challenges, while leveraging in-house investments and expertise, though of course there is a cost for maintaining both internal and external monitoring capabilities.

  • Exclusively Cloud: It is still rare but definitely possible to push all data from both on-premise and cloud services up to a third party for full remote SOC services. This entails the remote SOC providing all data management, analysis, policy development, and retention. On-premise events are fed through a gateway to the cloud service; the gateway provides some filtering, compression, and security to protect event data.

  • Third Party Management: Many large enterprises run security operations in-house, with a team of employees monitoring systems for attack and forensically analyzing suspicious alerts. But not every firm has a sophisticated and capable security team in-house to do the difficult and expensive work of writing policies and security analysis. So it is attractive (and increasingly common) to offload the difficult analysis problems to others, keeping only a portion of this role in-house. You have some flexibility in how to engage with a service provider. One approach is to have them take control of your on-premise monitoring systems. Alternatively, the third party can supplement what you have by handling just external cloud monitoring. Finally, in some cases the entire SOC is pushed to the third party for operations and management.

Our next post will sketch out what you really need to know to decide how to proceed: the Gotchas. We will run through the problem areas and tradeoffs you need to consider before selecting from the data collection and deployment options summarized above. We will dig into problems of scalability, cost, data security, privacy, and even some data governance issues that can make deciding between solutions more difficult.

—Adrian Lane

Firestarter: Numbness

By Rich

SSLmageddon V12. Polar Vortices. Ebola. APT123. We live in an era when every week it seems some massive new vulnerability, exploit, or attack is going to take down society. This week Rich, Mike, and Adrian tackle the endless progression of bad news; and how to maintain focus when everyone wants you to save the children.

As a side note, if you haven’t seen or read about #feministhackerbarbie on Twitter… oh my, you need to.

The audio-only version is up too.

—Rich

Friday, November 21, 2014

Securing Enterprise Applications [New White Paper]

By Adrian Lane

Securing enterprise applications is hard work. These are complex platforms, with lots of features and interfaces, reliant on database support, and often deployed across multiple machines. They leverage code provided by the vendor, as well as hundreds – if not thousands – of supporting code modules produced specifically for the customer’s needs. This makes every environment a bit different, and acceptable application behavior unique to every company. This is problematic because during our research we found that most organizations rely on security tools which work on the network fringes, around applications. These tools cannot see inside an application to fully understand its configuration and feature set, nor do they understand application-layer communication. This approach is efficient because a generic tool can see a wide variety of threats, but it misses subtle misuse and the most serious misconfigurations.

We decided to discuss some of our findings. But to construct an entire application security program for enterprise applications would require 100 pages of research, and still fail to provide complete coverage. Many firms have had enterprise applications from Oracle and SAP deployed for a decade or more, so we decided to focus on areas where the security problems have changed, or where tools and technology have superseded approaches that were perfectly acceptable just a couple years ago. This research paper spotlights these problem areas and offers specific suggestions for how to close the security gaps. Here is an excerpt:

Supply chain management, customer relationship management, enterprise resource management, business analytics, and financial transaction management are all multi-billion dollar application platforms unto themselves. Every enterprise depends on them to orchestrate core business functions, and spends tens of millions of dollars on software and support. We are beyond explaining why enterprise applications need security to protect these investments – it is well established that insiders and persistent adversaries target these applications. Companies invest heavily in these applications, hardware to run them, and teams to keep them up and running. They perform extensive risk analysis on their business implications and the costs of downtime. And in many cases their security investments are a byproduct of these risk profiles. Application security trends in the 1-2% range of total application investment, so we cannot say large enterprises don’t take security seriously – they spend millions and hire dedicated staff to protect these platforms. That said, their investments are not always optimal – enterprises may bet on solutions with limited effectiveness, without a complete understanding of the available options. It is time for a fresh look.

In this research paper, Building an Enterprise Application Security Program, we take a focused look at the major facets of an enterprise application security program, and make practical suggestions on how to improve the efficiency and effectiveness of your security program. Our goal is to discuss specific security and compliance use cases for large enterprise applications, highlight gaps, and explain some application-specific tools to address these issues. This is not an exhaustive examination of enterprise application security controls, but rather a spotlight on common deficiencies in the core pillars of security controls and products.

We would like to thank Onapsis for licensing this research. They reached out to us on this topic and asked to back this effort, which we are very happy about, because support like this enables us to keep doing what we do. You can download a copy of the research in our research library or download it directly: Securing Enterprise Applications. As always, if you have questions or comments, please drop us a line!

—Adrian Lane

Friday Summary: November 21, 2014

By Adrian Lane

Thus ends the busiest four weeks I have had since joining Securosis. A few conferences – AWS Re:Invent was awesome – a few client on-site days, meetings with some end customers, and about a half dozen webcasts have together left me gasping for air. We all need a little R&R here and the holidays are approaching, so Firestarters and blog posts will be a bit sporadic. Technically it is still Friday, so here goes today’s (slightly late) summary.


I am ignorant of a lot of things, and I thought this one was odd enough that I would ask more knowledgeable people in the community for assistance in explaining how this works. The story starts like this: A few months ago the new Lamborghini Huracan was introduced. Being a bit of a car weenie I went to the web site – http://huracan.lamborghini.com – in a Safari browser to see some pictures of the new car. Nice! I wish I could afford one – not that I would drive it much. I would probably just stare at it in the garage. Regardless, I had never been to the Lamborghini web site before. So I was a little surprised the next morning when I opened up a new copy of Firefox, which was trying to make a request to http://media.lamborghini.com. WTF? As I started to dig into this, I saw it was a repeating pattern. I visited http://www.theabsolutesound.com, and when I opened my newly installed Aviator browser, it tried to connect to http://media.theabsolutesound.com. Again, I had never been to that site in the Aviator browser, but recently visited it from FF. Amazon Web Services, Tech Target, and a dozen or so requests to connect to media.sitename.com or files.sitename.com popped up. But the capper was a few weeks later, when my computer tried to send the same request to media.theabsolutesound.com from an email client! That is malware behavior, likely adware!

So is this behavior part of an evercookie Flash/Java exploit through persistent data? I had Java disabled and Flash set to prompt before launch, so I thought a successful cross-browser attack via those persistence methods was unlikely. Of course it is entirely possible that I missed something. Anyway, if you know about this and would care to explain it – or have a link – I would appreciate an education on current techniques for browser/user tracking. I am clearly missing something.

As a side note, as I pasted the huracan.lamborghini.com link into my text editor to write this post, an Apple services daemon tried to send a packet to gs-loc.apple.com with that URL in it. Monitor much? If you don’t already run an outbound firewall like Little Snitch, I highly recommend it. It is a great way to learn who sends what where and completely block lots of tracking nonsense.


Puppy names. Everybody does it: before you get a new puppy you discuss puppy names. Some people even buy a book, looking for that perfect cute name to give their snugly little cherub. They fail to understand their mistake until after the puppy is in their home. They name the puppy from the perspective of pre-puppy normal life. Let me save you some trouble and provide some good puppy names for you, ones more appropriate for the post-puppy honeymoon:

  • “Outside!” – the winner by a landslide.
  • “Drop-It!”
  • “Stinky!”
  • “No, no, no!”
  • “Bad!”
  • “Not again!”
  • “Stop!”
  • “OWW, NO!”
  • “Little bastard”
  • “Come here!”
  • “Droptheshoe!”
  • “AAhhhhrrrr”
  • “F&%#” or the swear word of your choice.

Trust me on this – the puppy is going to think one of these is their name anyway, so starting from this list saves you time. My gift to you.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts

—Adrian Lane

Thursday, November 13, 2014

Ticker Symbol: HACK

By Gunnar

I think the financial equivalent of jumping the shark is Wall Street creating an ETF based on your theme.

If so, cybersecurity has arrived.

The ISE Cyber Security Index provides a benchmark for investors interested in tracking companies actively involved in providing technology and services that are designed to protect data, networks, hardware, software, and other cyber-dependent devices from unauthorized access and attacks. The index includes twenty-nine constituent companies, including VASCO Data Security International Inc. (ticker: VDSI), Palo Alto Networks Inc. (ticker: PANW), Symantec Corp. (ticker: SYMC), Juniper Networks Inc. (ticker: JNPR), FireEye Inc. (ticker: FEYE), and Splunk Inc. (ticker: SPLK).

Before you invest your life savings in ETFs, listen to Vanguard founder Jack Bogle: “The ETF is like the famous Purdy shotgun that’s made over in England. It’s great for big game hunting, and it’s great for suicide.”

Two interesting things to look at in ETFs are fees and weighting. The fees on this puppy look to be 0.75% – outlandishly high. For comparison Vanguard’s Dividend Growth ETF has a 0.1% fee. It is true that with foreign ETFs the fees are higher (to access foreign markets), but I do not know why HACK should have such a high fee – the shares they list are liquid and widely traded. Foreign issues themselves do not seem to dictate such a lavish expense ratio.

As of October 30, 2014, the Underlying Index had 30 constituents, 6 of which were foreign companies, and the three largest stocks and their weightings in the Underlying Index were VASCO Data Security International, Inc. (8.57%), Imperva, Inc. (6.08%), and Palo Alto Networks, Inc. (5.49%).

I cannot tell how it is weighted but if they follow the weighting on ISE then investors will wind up almost 10% into Vasco. The largest members of the index, per ISE, are:

Vasco: 9.17%
Imperva: 7.57%
Qualys: 5.48%
Palo Alto: 5.35%
Splunk: 5.18%
Infoblox: 5.04%

That is near 40% in the top six holdings – pretty concentrated. The old school way to index is to weight by market capitalization, but that has been shown to be imperfect because size alone does not determine quality. The preferred weighting for the last few years (since Rob Arnott’s work) has been by value, which bases the percentage of each holding on value metrics like P/E. There is considerable evidence that this works much better than market cap. But we still have a problem: many tech companies, especially new ones, have no earnings! From reverse engineering the index membership it looks like they are using Price/Sales for weighting. For example:

Vasco has a Price/Sales ratio of 6.1.
Palo Alto has a P/S ratio of 13.5.

Vasco has about twice the weighting of Palo Alto because it is about twice as cheap on a Price to Sales basis. This is probably not the best way to do it, but it is probably the best available way, because market cap is flawed and would miss all the upstarts, and the lack of earnings makes value metrics a non-starter. The weightings appear roughly right per Price/Sales, but I could not get the numbers to work precisely. It is possible they are using an additional weighting factor, such as Relative Strength.
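A quick back-of-the-envelope check of that inverse Price/Sales hypothesis, using the ratios and ISE weightings quoted above:

```python
# Back-of-the-envelope check of the inverse Price/Sales hypothesis.
ps_vasco, ps_palo = 6.1, 13.5          # P/S ratios quoted above
implied = (1 / ps_vasco) / (1 / ps_palo)
actual = 9.17 / 5.35                   # weights from the ISE list above
print(f"implied weight ratio: {implied:.2f}")  # ~2.21
print(f"actual weight ratio:  {actual:.2f}")   # ~1.71
# Close to 'about twice', but not exact -- consistent with some
# additional weighting factor being in the mix.
```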

Needless to say, this is all in the spirit of “As the Infosec Industry Turns…” and not financial advice of any kind. This is not a recommendation to buy, sell, or hold any of the issues mentioned.

In the meantime remember the fees, and this from Jack Bogle: “Performance comes and goes but cost goes on forever.”

HACK SEC filing

—Gunnar

Wednesday, November 12, 2014

Incite 11/12/2014: Focus

By Mike Rothman

Interruption is death for a writer. At least it is for me. I need to get into a flow state, where I’m locked in and banging words out. With my travel schedule and the number of calls I make even when not traveling, finding enough space to get into flow has been challenging. Very challenging. And it gets frustrating. Very frustrating.

There is always some shiny object to pay attention to. A press release here. A tweet fight there. Working the agenda for a trip two weeks from now. Or something else that would qualify as ‘work’, but not work.

get your head right and concentrate...

Then achiever’s anxiety kicks in. The blog posts that get pushed back day after day, and the conflicts with projects needing to get started. I have things to do, but they don’t seem to get done. Not the writing stuff anyway. It’s a focus thing. More accurately a lack of focus thing. Most of the time I indulge my need to read NFL stories or do some ‘research’. Or even just to think big thoughts for a little while.

But at some point I need to write. That is a big part of the business, and stuff needs to get done. So I am searching for new ways to do that. I shut down email. That helps a bit. I don’t answer the phone and don’t check Twitter. That helps too. Maybe I will try a new writing app that basically shuts down all the other apps. Maybe that will help ease the crush of the overwhelming to-do list.

Of course my logical mind knows you just start writing. That I need to stop with the excuses and just write. I know the first draft is going to be crap, especially if it’s not flowing. I know that the inbound emails can wait a few hours. I know my Twitter timeline will be there after the post is live on the site. Yet my logical mind loses, as I just stare at the screen for a few more minutes. Then check email and Twitter. Again.

Oy. Then I go into my pipeline tracker and start running numbers for the impact of not writing on my wallet. That helps. Until it doesn’t. We have had a good year, so the monkey brain wonders whether it might make sense to just sandbag some of the projects and get 2015 off to a roaring start. But I still need to write.

Then at some point, I just write. The excuses fall away. The words start to flow, and even make some sense. I get laser focused on the research that needs to get done, and it gets done. The blog fills up with stuff, and balance is restored to my universe. And I resign myself to just carrying around my iPad when I really need to write, because it’s harder to multi-task on that platform.

I’ll get there. It’ll just take a little focus.

–Mike

Photo credit: “Focus” originally uploaded by Michael Dales


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. Take an hour to check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Network Security Gateway Evolution

Monitoring the Hybrid Cloud: Evolving to the CloudSOC

Building an Enterprise Application Security Program

Security and Privacy on the Encrypted Network

Newly Published Papers


Incite 4 U

  1. Master of the Obvious: Cloud Edition: On my way to the re:Invent conference I read the subhead of a FUD-tastic eWeek article: IT Losing the Battle for Security in the Cloud, which is “More than two-thirds of respondents to a Ponemon Institute survey say it’s more difficult to protect sensitive data in the cloud using conventional security practices.” Um. This is news? The cloud is different! So if you want to secure it you need to do so differently. The survey really shows that most folks have no idea what they are talking about, as expected in the early adoption phase of any technology. It is not necessarily harder to protect resources in the cloud – it is just different. I just laugh and then cry a bit, as I realize the amount of education required for folks to understand how to do things in the cloud. I guess that is an opportunity for guys like us, so I won’t cry too long… – MR

  2. Here we go again: There are a half dozen tokenization working groups proposing standards by my count. Each has vagueness baked into its published specification – much of it intentional, I suspect. There are issues the internal steering groups can’t agree upon, issues they want to let the market settle before they commit, and still other issues they simply did not think about. Eduard Kovacs at SecurityWeek offers a good overview of current tokenization issues for the payment space – including re-usable tokens (or not), where original PAN data may be stored, and whether “cryptographically reversible” tokens (not actual tokens – really encrypted data) should be accepted. A technology as simple and easy to understand as tokenization is entering another phase of debate, largely because many firms see their fiefdoms threatened by change to the insecure payment status quo – so they propose something that doesn’t actually satisfy the goal: keeping consumer credit card numbers out of merchants’ hands. – AL

  3. FinServ productizes threat intel: The financial services industry has been at the forefront of information security for years. I guess when an industry is such a large target they didn’t really have a choice. The FinServ folks have also been aggressive about sharing information because they know attackers use the same methods against multiple banks. Now the FS-ISAC (FinServ’s main security information sharing group) has built a technology platform to facilitate information sharing called Soltra Edge. It is (yet another) offering for threat intel… because there weren’t any in the market already, apparently. Will it work and gain traction? Who knows? I do know that being in the software business is quite a bit different than running an information sharing group, but more structured sharing of information is generally a good thing. – MR

  4. Just call, baby: During each of the last two Christmas seasons I have noticed small charges on my credit cards. When I called the bank they explained they were seeing those charges across a wide number of cards (a rare admission!) and re-issued the card – assuming it was compromised. And it’s getting close to that time of year again, when you use your credit card so many times you can’t really remember what you spent. Bill Brenner has a very good list of tips for Online Holiday Spending to make sure someone does not sneak charges onto your account. It is the same list I would have created last week – these are good basic tips to follow. But after this Sunday I have another tip: call the merchant! On the phone. I know that seems so… 1995… but it works. Two sites I was using had issues getting credit card payments to process from the web, so I emailed, and they called me back! On Sunday. Both firms had people working to help with orders. Each took my order and processed the payment, and it actually happened faster than I could have done it online. And one of those firms accepts user credentials without SSL, so the phone was much safer. – AL

—Mike Rothman

Monday, November 10, 2014

Building an Enterprise Application Security Program: Recommendations

By Adrian Lane

Our goal for this series is not to cover the breadth and depth of an entire enterprise application security program – most of you have that covered already. Instead it is to identify the critical gaps at most firms and offer recommendations for how to close them. We have covered use cases and pointed out gaps; now it’s time to offer recommendations for how to address the deficiencies. You will notice many of the gaps noted in the previous section are byproducts of either a) attackers exposing soft spots in security; or b) innovation with the cloud, mobile, and analytics changing the boundaries of what is possible.

Core Program Elements

Identity and Access Management: Identity and authorization mapping form your critical first line of defense for application security. SAP, Oracle, and other enterprise application vendors offer identity tools to link to directory services, help with single sign-on, and help map authorizations – key to ensuring users only get data they legitimately need. Segregation of duties is a huge part of access control programs, and your vendor likely covers most of your needs from within the platform. But there is an over-reliance on basic services, and while many firms have stepped up to integrate multiple identity stores with federated identity, attackers have shown most enterprises need to improve in some areas.

  • Passwords: Passwords are simply not very good as a security control, and password rotation has never been proven to increase security; it turns out to be IT overhead for compliance’s sake. Phishing has proven effective for landing malware on users’ machines, enabling subsequent compromises, so we recommend two-factor authentication – at least for all administrative access. Two-factor is commonly available and can be integrated out-of-band to greatly increase the security of privileged accounts (a minimal sketch appears after this list).
  • Mobile: Protecting your users running your PCs on your network behind your firewalls is simply old news. Mobile devices are a modern – and prevalent – interface to enterprise applications. Most users don’t wait for your IT department to make policy or supply devices – they go buy their own and start using them immediately. It is important to consider mobile as an essential extension of the traditional enterprise landscape. These ‘new’ devices demand special consideration for how to deploy identity outside your network, how to de-provision users who have left, and whether you need to quarantine data or apps on mobile devices. Cloud or ‘edge’ identity services, with token-based (typically SAML or OpenID) identity and mobile application management, should be part of your enterprise application security strategy.
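As referenced above, here is a minimal sketch of time-based one-time-password (TOTP) verification as a second factor for an administrative account, assuming the third-party pyotp library. A production deployment would wire this into the application's login flow and protect the stored secret:

```python
import pyotp  # third-party TOTP/HOTP library

# Enrollment: generate and store a per-admin secret (server side).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="dba@example.com", issuer_name="ERP-Admin"))

# Login: the admin supplies the 6-digit code from their device
# as a second factor alongside their password.
code = input("Second factor: ")
if totp.verify(code, valid_window=1):  # allow one 30-second step of clock drift
    print("Privileged session granted")
else:
    print("Access denied")
```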

Configuration and Vulnerability Management: When we discussed why enterprise applications are different we made special mention of several deficiencies in assessment products – particularly their limited ability to collect necessary information and their lack of in-depth policies. But assessment is still one of the most powerful tools at your disposal, and generally the mechanism for validating 65% of security and compliance policies. It helps automate hundreds of repetitive, time-consuming, and highly technical system checks. We know it sounds cliché, but this really does save compliance and security teams time and money. These tools come with the most common security and compliance policies embedded to reduce custom development, and most provide a mechanism for non-technical stakeholders to obtain the technical data they need for reporting. You probably have something in place already, but there is a good chance it misses a great deal of what tools designed specifically for your application could acquire. We recommend making sure your product can obtain data, both inside and outside the application, with a good selection of policies specific to your key applications. A handful of generic application policies is a strong indicator that you have the wrong tool. (The sketch below illustrates the policy-check pattern these tools automate.)
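The policy names and configuration keys here are hypothetical, purely to show the shape of the platform-specific checks a real assessment product ships by the hundreds:

```python
# Hypothetical policy checks -- real assessment tools ship hundreds of
# platform-specific rules; this only illustrates the pattern.
POLICIES = [
    ("default-passwords-changed", lambda cfg: cfg["default_password_changed"]),
    ("gateway-acl-present",       lambda cfg: cfg["gateway_acl_entries"] > 0),
    ("audit-log-enabled",         lambda cfg: cfg["security_audit_active"]),
]

def assess(config: dict) -> list:
    """Run every policy against a collected configuration snapshot
    and return the list of failures for the compliance report."""
    return [name for name, check in POLICIES if not check(config)]

snapshot = {"default_password_changed": True,
            "gateway_acl_entries": 0,
            "security_audit_active": True}
print("Failed policies:", assess(snapshot))  # ['gateway-acl-present']
```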

Data Encryption: Most enterprise applications were designed and built with some data encryption capabilities. Either the application embeds its own encryption library and key management system, or it leverages the underlying database encryption engine to encrypt specific columns – or entire schemas – of data. Historically there have been several problems with this model. Many firms discovered that despite encrypted data, database indices and transaction logs contained and leaked unencrypted information. Additionally, encrypted data is stored in binary format, making it very difficult to calculate or report across. Finally, encryption has created performance and latency issues. The upshot is that many firms either turned encryption off entirely or removed it on temporary tables to improve performance. Fortunately there is an option which offers most of the security benefits without the downsides: transparent data encryption. It works underneath the application or database layer to encrypt data before it is stored on disk. It is faster than column encryption, transparent so no application layer changes are required, and avoids the risk of accidentally leaking data. Backups are still protected and you are assured that IT administrators cannot view readable data from disk. We recommend considering products from several application/database vendors and some third-party encryption vendors.
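A small demonstration of why application- and column-level encryption make data hard to calculate and report across, using the third-party Python cryptography package: each encryption of the same value produces a different ciphertext, so the database can no longer group, sort, or range-scan the column – one reason transparent encryption below the storage layer avoids these problems:

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

key = Fernet.generate_key()
f = Fernet(key)

# Two encryptions of the same value produce different ciphertexts
# (random IV), so an index over the column loses all ordering and
# equality information.
c1 = f.encrypt(b"4111-1111-1111-1111")
c2 = f.encrypt(b"4111-1111-1111-1111")
print(c1 == c2)                         # False
print(f.decrypt(c1) == f.decrypt(c2))   # True
```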

Firewalls & Network Security: If you are running enterprise applications, you have firewalls and intrusion detection systems in place. And likely you also have next-generation firewalls, web application firewalls, and/or data loss prevention systems protecting your applications. Because these investments are already paid for and in place, they tend to be the default answer to any application security question. The law of the instrument states that if all you have is a hammer, everything looks like a nail. The problem is that these platforms are not optimal for enterprise application security, but they are nonetheless considered essential because every current security job falls to them. Unfortunately they do a poor job with application security because most of them were designed to detect and address network misuse, but they do not understand how enterprise applications work. Worse, as we shift ever more toward virtualization and the cloud, physical networks go away, making them less useful in general. But the real issue is that a product which was not designed to understand the application cannot effectively monitor its use or function. We recommend looking at application-specific monitoring tools and application gateways – discussed in the next section – to detect and block application-specific attacks. We do not suggest throwing out your existing investments, but some problems are best addressed in a different fashion, and you need to balance network security against application security to be effective.

Logging and Auditing: You need audit logs for visibility into system usage, covering the areas simple assessments cannot, but many firms only capture the subset of events which are easy to get. Discussing logging with enterprises is difficult because they all log; unfortunately most do it poorly and don’t want to be reminded of their SIEM and log management headaches. As we mentioned in our discussion of why enterprise applications are different, as a rule application logs were not designed for security and compliance. So many firms leave application logging off, and instead use network logs to seed security systems. There are a couple of reasons not to go this way. First, many platform providers now understand that logs are used by security and audit teams more than by IT, and have adjusted log content and system performance to accommodate this. Second, third-party application and database monitoring systems handle complex data capture and filtering for you. Finally, some complex data types can be captured and correlated with other data to deliver on yesterday’s promises. The available data is improving while the overhead (cost) of collection is shrinking. We see a convergence between the continually dropping cost of storage, improved scalability of SIEM and Log Management systems, and “big data” databases which make roll-your-own collection and analysis clusters feasible at a fraction of the cost of a 2-year-old data warehouse. Our research shows that enterprise security operations centers are collecting both new types of data, and from many more sources, in order to have sufficient information to detect attacks or perform forensic analysis. What was collected just a couple years ago is simply not adequate given current threats. You may feel you need to produce better log data for security or compliance, but we want to make clear most of the impediments enterprises had with log data collection are no longer at issue.
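If you do decide to produce better application log data, emitting one structured event per line goes a long way: JSON parses cleanly into SIEM and log management pipelines, unlike free-form text. A minimal sketch, with hypothetical field names:

```python
import json
import logging
import sys
from datetime import datetime, timezone

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("app.audit")

def audit(user: str, action: str, obj: str, outcome: str) -> None:
    """Emit one structured audit event per line for downstream collectors."""
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "object": obj,
        "outcome": outcome,
    }))

audit("jsmith", "UPDATE", "vendor_bank_details", "denied")
```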

Overlooked Elements

Application and Database Monitoring: Monitoring application activity, with full understanding of how that application works, is the most common gap in enterprise security strategies. Application monitoring and database activity monitoring tools, at minimum, capture and record all application activity (including administrator activity) in real time or near real time; across multiple platforms; and alert and/or block on policy violations. The tools remotely monitor application events, collected from a combination of sources and aggregated in a central location for analysis. Think of them as an application-specific SIEM and IDS combo. They are designed to understand how application platforms work down to the transaction level, with multiple methods (including heuristics, metadata, user behavior, attributes, command whitelists, and command blacklists) available to analyze events (a minimal sketch of the whitelist approach follows below). These tools are focused on the application layer, and designed to understand the specific nuances of these platforms to provide more granular and more effective security controls. They work at the application layer so they are typically deployed one of three ways: as an agent on the application platform, as a reverse proxy for the application, or embedded into the application itself. Our recommendation is to use one of these platforms to monitor application events; general-purpose network monitors do not fully understand applications, which causes more of both false positives (false alarms) and false negatives (missed attacks).
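Here is a deliberately tiny sketch of the command-whitelist analysis method mentioned above. Real monitoring products combine this with heuristics, metadata, and behavioral profiles, and the users and permitted verbs here are hypothetical:

```python
# Minimal sketch of whitelist-based command analysis.
ALLOWED = {
    "batch_user": {"SELECT", "INSERT"},
    "app_user":   {"SELECT", "INSERT", "UPDATE"},
}

def inspect(user: str, statement: str) -> str:
    """Alert when a user issues a command verb outside their whitelist."""
    verb = statement.strip().split()[0].upper()
    if verb not in ALLOWED.get(user, set()):
        return f"ALERT: {user} issued unexpected {verb}"
    return "ok"

print(inspect("batch_user", "SELECT * FROM orders"))  # ok
print(inspect("batch_user", "DROP TABLE orders"))     # ALERT
```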

API Gateways: In their rush to provide more services to promote customer usage and affinity, large enterprises have reimplemented traditional back-office applications to directly support customer-facing web applications. Many enterprise applications, designed and built before the Internet, have been recast as front-line customer-facing services. These platforms provide data and transactional services to support web applications, but their security is often a disaster. Command injection, SQL injection, remote vulnerability exploits, and compromise of administrative accounts are all common. In the last couple years, to support safe access to enterprise application services for remote users – particularly to support mobile applications – several firms have developed API gateways. They offer an abstraction layer which maps back-office application functions to the most common modern programming interface: RESTful APIs. The gateway builds in version control, testing facilities, support for third-party developers, detection of jailbroken devices and other signs of potential fraud, policy support for mobile app usage, integration with internal directory services for provisioning/deprovisioning, and token-based user credentials for improved identity management. If you intend to leverage enterprise applications to support end users, we recommend moving past the simple firewall and web filtering security model to API gateways.
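To show the pattern, here is a bare-bones sketch of a gateway endpoint that validates a token-based credential before invoking a back-office function. We use Flask purely for brevity; the route, token store, and backend call are hypothetical stand-ins for what a commercial gateway’s policy engine handles:

```python
# A bare-bones gateway endpoint: validate a token-based credential, then
# invoke a back-office function on the caller's behalf. Flask is used only
# for brevity; the route, token store, and backend call are hypothetical.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_TOKENS = {"t0ken-abc123": "mobile-user-42"}  # issued at login (stand-in)

def backoffice_account_lookup(user_id: str) -> dict:
    # Stand-in for the legacy transaction the gateway mediates.
    return {"user": user_id, "balance": 1234.56}

@app.route("/api/v1/account", methods=["GET"])
def account():
    user = VALID_TOKENS.get(request.headers.get("X-Auth-Token", ""))
    if user is None:
        abort(401)  # no valid token: the back office is never touched
    return jsonify(backoffice_account_lookup(user))

if __name__ == "__main__":
    app.run(port=8080)
```

A real gateway layers on the version control, developer support, and fraud detection described above, but the core job is the same: nothing reaches the back office without an authenticated, policy-checked request.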

Penetration Testing: Penetration testers offer an invaluable service, going outside the box to attack application security and find ways to bypass or break security controls. Most pen tests discover previously unknown defects in applications or deployments. This approach is so powerful because the tests find issues developers did not even know to look for. Enterprise applications and databases incorporate a great deal of custom code, which naturally includes unique vulnerabilities. Attackers are good at finding these defects, with very powerful software tools to help them. A good tester finds these defects using the same tools, along with issues you don’t have policies for, possibly examining aspects of security you were not even aware of. Of course people who know what they are doing cost money. But you should test on an ongoing basis anyway: every new version of your application, every new server, and every change to your custom code creates the potential for new vulnerabilities. We are big proponents of penetration testing, and strongly recommend having a service regularly check your sites. That said, there are many good third-party commercial and open source tools available if money is tight and you have expertise on staff.

These enterprise application security recommendations address both interfaces and internal workings. Shoring up deficiencies with preventative controls, opting for more appropriate security controls, and building in real-time monitoring greatly improve security. We understand that some of these recommendations add incrementally to the security or compliance budget, and we are not fans of telling people to “Do more with more.” We appreciate how difficult security budget is to obtain, so we are very careful about making these kinds of recommendations. But the good news is that the majority of our recommendations are for tools that automate existing work, bundling reporting and policies that would otherwise require manual effort. And in several cases our recommendations merely reallocate existing budget to tools more focused on core applications.

—Adrian Lane

Changing Pricing (for the first time ever)

By Rich

This is a corporate news post, so skip it if all you want is our usual snarky security analysis.

For the first time since starting Securosis we are increasing our prices. Yes, it has been over seven years without any change in pricing for our services. The new prices are only a modest bump, and also streamlined to remove the uncertainty of travel expenses on engagements. Call it ego, but we think we are a heck of a bargain.

This only affects speaking/strategy days and retainers. Papers, Securosis Project Accelerator workshops, and one-off projects aren’t changing.

  • Strategy day pricing stays the same at $6,000, but we are adding in $1,000 for travel expenses and will no longer bill travel separately (total of $7,000 for a strategy day or speaking engagement which involves travel).
  • Webcasts stay the same, at $5,000 if we don’t need to travel.
  • Our retainer rates are increasing slightly, around $2-3K each, with $2,000 also being added to our Platinum plan to cover the travel for the two included strategy days:
    • $10K for Silver.
    • $15K for Gold.
    • $25K for Platinum.

The new pricing goes into effect immediately for all new clients and renewals.

As a reminder, for our papers we offer licenses, not sponsorship, so nothing has changed there. Securosis Project Accelerators (our focused end-user workshops for SaaS providers, enterprise cloud security, security management, network security, and database/big data security) are still $10,000. We do have some other workshops in the… works for next year, so if you are interested in another topic just ask.

If you have any other questions, just go ahead and email. Service levels remain the same. You can only blame yourselves for keeping us so darn busy.

—Rich

Monitoring the Hybrid Cloud: Emerging SOC Use Cases

By Mike Rothman

In the introduction to our series on Monitoring the Hybrid Cloud we went through the disruptive forces which are increasingly complicating security monitoring, including the accelerating move to cloud computing and expanding access via mobile devices. These new models demand much greater automation, and provide significantly less visibility and control over the physical layer of the technology stack. So you need to think about monitoring a bit differently.

This starts with getting a handle on the nuances of monitoring, depending on where applications run, so we will discuss monitoring both IaaS (Infrastructure as a Service) and SaaS (Software as a Service). Not that we discriminate against PaaS (Platform as a Service), but it is similar enough to IaaS that the same concepts apply. We will also talk about private clouds, because odds are you haven’t been able to unplug your data center, so you need an end-to-end view of the infrastructure you use, spanning both technology you control (in your data center) and technology you don’t (in the cloud).

Monitoring IaaS

The biggest and most obvious challenge in monitoring Infrastructure as a Service is the difference in visibility because you don’t control the physical stack. So you are largely restricted to logs provided by your cloud service provider. We see pretty good progress in the depth and granularity available from these cloud log feeds, but you still get much less detail than from devices in your data center.

You also cannot tap the network to get actual traffic (packet capture). IaaS vendors offer abstracted networking, so many networking features you have come to rely on aren’t available. Depending on the maturity of your security program and incident response process, you may not be doing much packet capture in your environment today, but either way it is no longer an option in the cloud.

We will go into more detail later in this series, but one workaround is to run all traffic through a cloud-based choke point for collection. In essence you perform a ‘man-in-the-middle’ attack on your own network traffic to regain a semblance of the visibility inside your own data center, but that sacrifices much of the architectural flexibility drawing you to the cloud in the first place.
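Conceptually, such a collection point is just a relay that records connection metadata and forwards traffic along. The following sketch illustrates the idea with placeholder addresses; production tools do this at scale, with full packet capture and TLS handling:

```python
# A conceptual choke point: accept connections, record metadata for the
# SOC, and relay traffic to the real destination. Host names and ports
# are placeholders; production tools add full capture and TLS handling.
import socket
import threading

DEST = ("app-backend.internal.example.com", 443)  # hypothetical real target

def relay(src: socket.socket, dst: socket.socket) -> None:
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def handle(client: socket.socket, addr) -> None:
    print(f"capture: {addr[0]}:{addr[1]} -> {DEST[0]}:{DEST[1]}")
    upstream = socket.create_connection(DEST)
    threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
    relay(upstream, client)
    client.close()
    upstream.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 8443))
listener.listen(5)
while True:
    conn, addr = listener.accept()
    threading.Thread(target=handle, args=(conn, addr), daemon=True).start()
```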

You also need to figure out both where to aggregate collected logs (from the cloud service and from specific instances) and where to analyze them. These decisions hinge on a number of factors, including where the technology stacks run, the kinds of analysis to perform, and what expertise is available on staff. We will tackle specific architectural options in our next post.

Monitoring SaaS

If monitoring IaaS offers a ‘foggy’ view compared to what you see in your own data center, Software as a Service is ‘dark’. You see what the SaaS provider shows you, and that’s it. You have access to neither the infrastructure running your application, nor the data stores that house your data. So what can you do?

You can take solace in the fact that many larger SaaS vendors are starting to get the message from angry enterprise clients, and are providing an activity feed you can pull into your security monitoring environment. It won’t provide visibility into the technology stack, but you will be able to track what your employees are doing within the service – including administrative changes, record modifications, and login history.
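Pulling such a feed usually amounts to polling a provider API and forwarding events to your collector. A minimal sketch, with a hypothetical endpoint and field names because each provider exposes its own format and authentication:

```python
# Poll a hypothetical SaaS activity feed and hand events to the SOC.
# The endpoint, parameters, and field names are stand-ins; each provider
# exposes its own feed format and authentication. Uses the requests library.
import time
import requests

FEED_URL = "https://api.saas-provider.example.com/v1/activity"
API_KEY = "your-api-key"  # provisioned by the provider

def poll(since: str) -> list:
    resp = requests.get(
        FEED_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"after": since},  # assumes a time-ordered feed
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("events", [])

cursor = "2014-12-01T00:00:00Z"
while True:
    for event in poll(cursor):
        # Forward admin changes, record modifications, and logins onward.
        print(event.get("user"), event.get("action"), event.get("timestamp"))
        cursor = event.get("timestamp", cursor)
    time.sleep(300)  # poll every five minutes
```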

Keep in mind that you will need to figure out thresholds and actions to alert on, most likely by taking a baseline of activity and then looking for anomalies. There are no out-of-the-box rules to monitor SaaS. And as with IaaS, you need to figure out the best place to aggregate and analyze the data.
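For illustration, here is the simplest possible baseline-and-threshold model: learn normal daily login volume per user during a quiet period, then alert when activity strays several standard deviations from that baseline. Real monitoring platforms use far richer models, but the workflow is the same:

```python
# The simplest baseline: mean and standard deviation of daily login
# counts per user, with an alert when today's count strays too far.
from statistics import mean, stdev

def build_baseline(history: dict) -> dict:
    """history maps user -> daily login counts from a known-quiet period."""
    return {u: (mean(c), stdev(c)) for u, c in history.items() if len(c) > 1}

def is_anomalous(user: str, todays_count: int, baseline: dict, k: float = 3.0) -> bool:
    mu, sigma = baseline.get(user, (0.0, 1.0))  # unknown users look anomalous fast
    return abs(todays_count - mu) > k * max(sigma, 1.0)

baseline = build_baseline({"alice": [4, 5, 6, 5, 4], "bob": [1, 2, 1, 1, 2]})
print(is_anomalous("bob", 40, baseline))  # True -- worth an analyst's look
```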

Monitoring a Private Cloud

Private clouds virtualize your existing infrastructure in your own data center, so you get full visibility, right? Not exactly. You will be able to tap the network within the data center for additional visibility. But without proper access and instrumentation within your private cloud you cannot see what is happening within the virtualized environment.

As with IaaS, you can route network traffic within your private cloud through an inspection point, but again that would reduce flexibility substantially. The good news is that many existing security monitoring platforms are rapidly adding virtual collection points which run within a variety of private cloud environments. We will address alternatives to extend your existing monitoring environment later in this series.

SLAs are your friend

As we teach in the CCSK (Certificate of Cloud Security Knowledge) course, you really don’t have much leverage to demand access to logs, events, or other telemetry in a cloud environment. So you should exercise whatever leverage you have during the procurement process: document specific logs, access, and other telemetry requirements in your agreements. You will find that some cloud providers (the smaller ones) are much more willing to be contractually flexible than the cloud gorillas. So you will need to decide whether the standard level of logging from the big guys is sufficient for the analysis you need.

The key is that once you sign an agreement, what you get is what you get. You will be able to weigh in on product roadmaps and make feature requests, but you know how that goes.

CloudSOC

If a large fraction of your technology assets have moved into the cloud there is a final use case to consider: moving the collection, analysis, and presentation functions of your monitoring environment into the cloud as well. It may not make much sense to aggregate data from cloud-based resources, and then move the data to your on-premise environment for analysis. More to the point, it is cheaper and faster to keep logs and event data in low-cost cloud storage for future audits and forensic analysis.

So you need to weigh the cost and latency of moving data to your in-house monitoring system against running monitoring and analytics in the cloud, in light of the varying pricing models for cloud-based versus on-premise monitoring.
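A back-of-envelope comparison might look like the following. Every volume and rate below is an assumption for illustration, so plug in your own numbers:

```python
# Back-of-envelope monthly cost of shipping cloud logs home versus
# analyzing them in place. All numbers are illustrative assumptions.
gb_per_day = 50                    # log volume generated in the cloud
egress_per_gb = 0.09               # data transfer out, $/GB (assumed)
onprem_analysis_per_gb = 0.02      # marginal cost on the existing SIEM (assumed)
cloud_storage_per_gb = 0.03        # low-cost object storage, $/GB-month (assumed)
cloud_analysis_per_gb = 0.05       # cloud-based monitoring service (assumed)

ship_home = 30 * gb_per_day * (egress_per_gb + onprem_analysis_per_gb)
keep_in_cloud = 30 * gb_per_day * (cloud_analysis_per_gb + cloud_storage_per_gb)

print(f"ship home:     ${ship_home:,.2f}/month")
print(f"stay in cloud: ${keep_in_cloud:,.2f}/month")
```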

But the reality is that you are likely to run a hybrid for a while, with infrastructure in both places (cloud and on-premise data center) for the foreseeable future. As we mentioned above, there are a bunch of decision points to figure out whether SOC systems should run in the cloud, on-premise, or both. Our next post will dig into possible architectures for cloud-based collection, and a variety of hybrid SOC models.

—Mike Rothman

Sunday, November 09, 2014

Leveraging Threat Intelligence in Incident Response/Management [Final Paper]

By Mike Rothman

We continue to investigate the practical use of Threat Intelligence (TI) within your security program. After tackling how to Leverage Threat Intel in Security Monitoring, we now turn our attention to Incident Response and Management. In this paper we go deep into how your existing incident response and management processes can (and should) integrate adversary analysis and other threat intelligence sources, to help narrow down the scope of your investigations.

We have also put together a snappy process map depicting how IR/M looks when you factor in external data.

TI+IR Process Map

To really respond faster you need to streamline investigations and make the most of your resources. That starts with an understanding of what information would interest attackers. From there you can identify potential adversaries and gather threat intelligence to anticipate their targets and tactics. With that information you can protect yourself, monitor for indicators of compromise, and streamline your response when an attack is (inevitably) successful.

TIIR Table of Contents

You will have incidents. If you can respond to them faster and more effectively that’s a good thing, right? Integrating Threat Intel into the IR process is one way to do that.

We’d like to thank Cisco and Bit9 + Carbon Black for licensing the content in this paper. We are grateful that our clients see the value of supporting objective research to educate the industry. Without forward-looking organizations you would be on your own… or paying up to get behind the paywall of big research.

Check out the paper’s landing page, or download it directly: Leveraging Threat Intelligence in Incident Response/Management (PDF).

—Mike Rothman

Friday, November 07, 2014

Network Security Gateway Evolution [New Series]

By Mike Rothman

When is a firewall not a firewall? I am not being cute – that is a serious question. The devices that masquerade as firewalls today provide much more than just an access control on the edge of your network(s). Some of our influential analyst friends dubbed the category next generation firewall (NGFW), but that criminally undersells the capabilities of these devices.

The “killer app” for NGFW remains enforcement of security policies by application (and even functions within applications), rather than merely by ports and protocols. This technology has matured since we last covered the enterprise firewall space in Understanding and Selecting an Enterprise Firewall. Virtually all firewall devices being deployed now (except very low-end gear) have the ability to enforce application-level policies in some way. But, as with most new technologies, having new functionality doesn’t mean the capabilities are being used well. Taking full advantage of application-aware policies requires a different way of thinking about network security, which will take time for the market to adapt to.
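To see why it requires different thinking, consider a simplified model of an application-aware rule base, where decisions key off the identified application, the function within it, and the user’s group, rather than ports and protocols. The identifiers below are illustrative, not any vendor’s rule syntax:

```python
# A simplified application-aware rule base: decisions key off the
# identified application and function plus the user's group; first match
# wins. Identifiers are illustrative, not any vendor's syntax.
POLICIES = [
    {"app": "facebook", "function": "file-upload", "group": "*",         "action": "block"},
    {"app": "facebook", "function": "*",           "group": "marketing", "action": "allow"},
    {"app": "webmail",  "function": "*",           "group": "*",         "action": "block"},
    {"app": "*",        "function": "*",           "group": "*",         "action": "allow"},
]

def decide(app: str, function: str, group: str) -> str:
    for rule in POLICIES:  # first match wins, like most firewall rule bases
        if all(rule[k] in ("*", v) for k, v in
               (("app", app), ("function", function), ("group", group))):
            return rule["action"]
    return "block"  # implicit deny if nothing matches

print(decide("facebook", "file-upload", "marketing"))  # block: function rule wins
print(decide("facebook", "browse", "marketing"))       # allow
```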

At the same time many network security vendors continue to integrate their previously separate FW and IPS devices into common architectures/platforms. They have also combined network-based malware detection and some light identity and content filtering/protection features. If this sounds like UTM, that shouldn’t be surprising – the product categories (UTM and NGFW) provide very similar functionality, just handled differently under the hood.

Given this long-awaited consolidation, we see rapid evolution in the network security market. Besides additional capabilities integrated into NGFW devices, we also see larger chassis-based models, smaller branch office devices, and even virtualized and cloud-based configurations to extend these capabilities to every point in the network. Improved threat intelligence integration is also available to block current threats.

Now is a good time to revisit our research from a couple years ago. The drivers for selection and procurement have changed since our last look at the field. But, as mentioned above, these devices are much more than firewalls. So we use the horribly pedestrian Network Security Gateway moniker to describe what network security devices look like moving forward. We are pleased to launch the Network Security Gateway Evolution series, describing how to most effectively use the devices for the big 3 network security functions: access control (FW), threat prevention (IPS), and malware detection.

Given the forward-looking nature of our research, we will dig into a few additional use cases we are seeing – including data center segmentation, branch office protection, and protecting those pesky private/public cloud environments.

We would like to thank Cisco and Palo Alto Networks for being the initial licensees of the paper resulting from this blog series. As always, we develop our research using our Totally Transparent Research methodology, ensuring no hidden influence on the research.

The Path to NG

Before we jump into how the NSG is evolving, we need to pay our respects to where it has been. The initial use case for NGFW was sitting next to an older port/protocol firewall and providing visibility into which applications are being used, and by whom. Those reports showing, in gory detail, all the nonsense employees get up to on the corporate network (much of it using corporate devices) tend to be pretty enlightening for the network security team and executives at the end of a product test.

Once your organization saw the light with real network activity, you couldn’t unsee it. So you needed to take action, enforcing policies on those applications. This meant leveraging capabilities such as blocking email access via a webmail interface, detecting and stopping file uploads to Dropbox, and detecting/preventing Facebook photo uploads. It all sounds a bit trivial nowadays, but a few years ago organizations had real trouble enforcing these kinds of policies on web traffic.

Once the devices were enforcing policy-based control over application traffic, and then matured to offer feature parity with existing devices in areas like VPN and NAT, we started to see significant migration. Some existing network security vendors couldn’t keep up with these NGFW competitive threats, so we have seen a dramatic shift in enterprise market share over the past few years, which has been a catalyst for multi-billion-dollar M&A.

The next step has been the move from NGFW to NSG, adding non-firewall capabilities such as threat prevention. That means not only enforcing positive policies (access control), but also detecting attacks the way a network intrusion prevention system (IPS) does. The first versions of these integrated devices could not compare to a ‘real’ (standalone) IPS, but as time marches on we expect NSGs to reach feature parity for threat prevention. Likewise, these gateways increasingly integrate detection of malware in files as they enter the network, to provide additional value.

Finally, some companies couldn’t replace their existing firewalls (typically for budget or political reasons), but had more flexibility to replace their web filters. Given the ability of NSGs to enforce policies on web applications, block bad URLs, and even detect malware, standalone web filters took a hit. As with IPS, NSGs do not yet provide full feature parity with standalone web filters. But many companies don’t need the differentiating features of a dedicated web filter – making an NSG a good fit.

The Need for Speed

We have shown how NSGs have integrated, and will continue to integrate, more and more functionality. Enforcing all these policies at wire speed requires increasing compute power, and it’s not like networks are slowing down. So first-generation NGFWs hit scaling constraints pretty quickly. Vendors continue to invest in bigger iron, including more capable chassis and better distributed policy management, to satisfy scalability requirements.

As networks continue to get faster, will the devices be able to keep pace, retaining all their capabilities on a single device? And do you even need to run all your stuff on the same device? Not necessarily. This raises an architectural question we will consider later in the series. Just because you can run all these capabilities on the same device, doesn’t mean you should…

Alternatively, you can run an NSG in “firewall” mode, enforcing just basic access control policies. Or you can deploy another NSG in “threat prevention” mode, looking for attacks. Does that sound like your existing architecture? Of course – and there is value in separating functions, depending on the scale of the environment. More important is the ability to manage all these policies from a single console, and to change a box’s capabilities through software, without needing a forklift.

Graceful Migration

We will also cover how to actually migrate to this evolved network security platform. Budgets aren’t unlimited, so unless your existing network security vendor isn’t keeping pace (there are a few of those), your hand may not be forced into immediate migration. That gives you time to figure out the best timing to introduce these new capabilities. We will wrap up this series with a process for figuring out how and when to introduce these capabilities, deployment architectures, and how to select your key vendor.

The next post will dig into the firewall features of the NSG, how they continue to evolve, and why it matters to you.

—Mike Rothman

New Research Paper: Secure Agile Development

By Adrian Lane

Security teams are tightly focused on bringing security to applications, and on meeting compliance requirements in the delivery of applications and services. On the other hand, job #1 for software developers is to deliver code faster and more efficiently, with security a distant second. Security professionals and developers often share responsibility for security, but finding the best way to embed security into the software development lifecycle (SDLC) is no easy task.

Agile frameworks have become the new foundation for code development, with an internal focus on ruthlessly rooting out tools and techniques that don’t fit this type of development. This means secure development practices, just like every other facet of development, must fit within the Agile framework – not the other way around. This paper offers an outline for security folks to understand development teams’ priorities and methodologies, and practical ways to work together within the Agile methodology. Here is an excerpt:

Over the past 15 years, the way we develop software has changed completely. Development processes evolved from Waterfall, to rapid development, to extreme programming, to Agile, to Agile with Scrum, to our current darling: DevOps. Each evolutionary step was taken to build better software by improving the software building process. And each step embraced changes in tools, languages, and systems to encourage increasingly agile processes, while discouraging slower and more cumbersome processes.

The fast flux of development evolution gradually deprecated everything that impeded agility … including security. Agile had an uneasy relationship with security because its facets which promoted better software development (in general) broke existing techniques for building security into code. Agile frameworks are the new foundation for code development, with an internal focus on ruthlessly rooting out tools and techniques that don’t fit the model. So secure development practices, just like every other facet of development, must fit within the Agile framework – not the other way around.

We are also proud that Veracode has asked to license this content; without support like this we could not bring this quality research to you free of charge without registration. As with all our research, if you have questions or comments we encourage you to comment on the blog so open discussion can help the community.

For a copy of the research download the PDF, or get a copy from our research library page on Secure Agile Development.

—Adrian Lane

Summary: Comic Book Guy

By Rich

Rich here.

I only consistently read comic books for a relatively short period of my life. I always enjoyed them as a kid but didn’t really collect them until sometime around high school. Before that I didn’t have the money to buy them month to month. I kept up a little in college, but I probably had less free capital as a freshman than in elementary school. Gas money and cheap dates add up crazy fast.

Much to my surprise, at the ripe old age of forty-something, I find myself back in the world of comics. It all started thanks to my kids and Netflix. Netflix has quite the back catalog of animated shows, including my all-time favorite, Spider-Man and His Amazing Friends. You know: Iceman and Firestar. I really loved that show as a kid, and from age three to four it was my middle daughter’s absolute favorite.

Better yet, my kids also found Super Hero Squad, a weird and wonderful stylized comedy take on Marvel comics that ran for two seasons. It was one of those rare shows loaded with jokes targeting adults while also appealing to kids. It hooked both my girls, who then moved on to the more serious Avengers Assemble, which covered a bunch of the major comics events – including Secret Invasion, which ran as a season-long story arc.

My girls love all the comics characters and stories. Mostly Marvel, which is what I know, but you can’t really avoid DC. Especially Wonder Woman. Their favorite race is the Super Hero Run where we all dress in costumes and run a 5K (I run, they ride in the Helicarrier, which civilians call a “jog stroller”). When it comes to Comic-Con, my oldest will gut me with a Barbie if I don’t take her.

Then there are the movies. The kids are too young to see them all (mostly just Avengers), but I am stunned that the biggest movies today are all expressions of my childhood dreams. Good comic book movies? With plot lines that extend a decade or more? And make a metric ton of cash? Yes, decades. In case you hadn’t heard, Disney/Marvel announced their lineup through 2019: 2-3 films per year, with interlocking television shows on ABC and Netflix, all leading to a two-film version of the Infinity Wars. My daughter wasn’t born when Iron Man came out, and she will be 10 when the final Avengers (announced so far) is released.

Which is why I am back on the comics. Because I am Dad, and while I may screw up everything else, I will sure as hell make sure I can explain who the Skrull are, and why Thanos wants the Infinity Gems. I am even learning more about the Flash, and please forgive me, Aquaman.

There are few things as awesome as sharing what you love with your kids, and them sharing it right back. I didn’t force this on my kids – they discovered comics on their own, and I merely encouraged their exploration. The exact same thing is happening with Star Wars, and in a year I will get to take my kids to see the first new film with Luke, Leia, and Han since I was a kid.

My oldest will even be the same age I was when my father took me to Star Wars for the first time. No, those aren’t tears. I have allergies.

On to the Summary:

Favorite Securosis Posts

  • Mike Rothman: Friday Summary: Halloween. Adrian and Emily get (yet) another dog. ;-)
  • Rich: We are still low on posts, so I will leave it at that and tell you to read all of them this week :)

Favorite Outside Posts

  • Mike Rothman: Don’t Get Old. I like a lot of the stuff Daniel Miessler writes. I don’t like the term ‘old’ in this case because that implies age. I think he is talking more about being ‘stuck’, which isn’t really a matter of age.
  • Rich: How an Agile Development Process Fits into the Security User Story. This is something I continue to struggle with as I dig deeper into Agile and DevOps. There is definitely room for more research into how to integrate security into user stories, and tying that to threat modeling. Maybe a project I should take up over the holidays.
  • Adrian Lane: Facebook, Google, and the Rise of Open Source Security Software. It’s interesting that Facebook is building this in-house. And contributing to the open source community. But remember they bought PrivateCore last year too. So the focus on examining in-memory processes and protecting memory indicates their feelings on security. Oh, and Rich is quoted in this too!

—Rich