By Mike Rothman
As we have posted this Shadow Devices series, we have discussed the millions (likely billions) of new devices which will be connecting to networks over the coming decade. Clearly many of them won’t be traditional computer devices, which can be scanned and assessed for security issues. We called these other devices shadow devices because this is about more than the “Internet of Things” – any networked device which can be used to steal information – whether directly or by providing a stepping stone to targeted information – needs to be considered.
Our last post explained how peripherals, medical devices, and control systems can be attacked. We showed that although malware attacks on traditional computing and mobile devices get most of the attention in IT security circles, these other devices shouldn’t be ignored. As with most things, it’s not a matter of if but when these lower-profile devices will be used to perpetrate a major attack.
So now what? How can you figure out your real attack surface, and then move to protect the systems and devices providing access to your critical data? It’s back to Security 101, which pretty much always starts with visibility, and then moves to control once you figure out what you have and how it is exposed.
Your first step is to shine a light into the ‘shadows’ on your network to build a picture of all connected devices. You have a couple options to gain this visibility:
- Active Scanning: You can run a scan across your entire IP address space to find out what’s there. This can be a serious task for a large address space, consuming resources while you run your scanner(s). This process can only happen periodically, because it wouldn’t be wise to run a scanner continuously on internal networks. Keep in mind that some devices, especially ancient control systems, were not built with resilience in mind, so even a simple vulnerability scan can knock them over.
- Passive Monitoring: The other alternative is basically to listen for new devices by monitoring network traffic. This assumes that you have access to all traffic on all networks in your environment, and that new devices will communicate with something. Pitfalls of this approach include the need for access to the entire network, and the fact that new devices can spoof other network devices to evade detection. On the plus side, you won’t knock anything over by listening.
But we don’t see this as an either/or choice for gaining full visibility into all devices on the network. There is a time and place for active scanning, but care must be taken not to take brittle systems down or consume undue network resources. We have also seen many scenarios where passive monitoring is needed to find new devices quickly once they show up on the network.
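For the passive option, a bare-bones sketch of the discovery logic might look like the following. It assumes you already have a feed of (MAC, IP) observations harvested from mirrored traffic; all addresses here are fabricated for illustration:

```python
# Minimal sketch of passive device discovery: flag any source MAC
# address never seen before. Assumes a feed of (mac, ip) pairs
# harvested from mirrored traffic; all sample data is fabricated.

def discover_new_devices(observations, known_macs):
    """Return (mac, ip) pairs not in the known inventory, updating it."""
    new_devices = []
    for mac, ip in observations:
        if mac not in known_macs:
            known_macs.add(mac)
            new_devices.append((mac, ip))
    return new_devices

known = {"00:1b:63:aa:01:02"}  # existing inventory
feed = [
    ("00:1b:63:aa:01:02", "10.0.1.5"),   # already known
    ("3c:2e:ff:10:20:30", "10.0.1.77"),  # new device shows up
    ("3c:2e:ff:10:20:30", "10.0.1.77"),  # duplicate observation
]
print(discover_new_devices(feed, known))
# [('3c:2e:ff:10:20:30', '10.0.1.77')]
```

The real work in a product is everything around this loop: getting taps or SPAN ports on every segment, and deciding when an unseen MAC is genuinely a new device rather than a spoof.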
Once you have full visibility, the next step is to identify devices. You can certainly look for indicators of what type of device you found during an active scan. This is harder when passively scanning, but devices can be identified by traffic patterns and other indicators within packets. A critical selection criterion for passive monitoring technology is the vendor’s ability to identify the bulk of devices likely to show up on your network. Obviously in a very dynamic environment some fraction of devices cannot be identified through scanning or monitoring network traffic. But you want these devices to be a small minority, because anything you can’t identify automatically requires manual intervention.
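One common identification trick is to match the MAC address’s vendor prefix (OUI) against a lookup table. Here is a hypothetical sketch with a fabricated table; a real product would combine the full IEEE OUI registry with DHCP and traffic fingerprints:

```python
# Toy device identification by MAC OUI (first three octets). The
# table mappings are fabricated for this example; real products use
# the IEEE OUI registry plus DHCP and traffic fingerprints.

OUI_TABLE = {
    "aa:10:01": "printer (vendor A)",
    "aa:10:02": "IP camera (vendor B)",
    "aa:10:03": "building controller (vendor C)",
}

def classify(mac):
    prefix = mac.lower()[:8]  # "aa:bb:cc" is 8 characters
    return OUI_TABLE.get(prefix, "unidentified: needs manual review")

print(classify("AA:10:01:12:34:56"))  # printer (vendor A)
print(classify("de:ad:be:ef:00:01"))  # unidentified: needs manual review
```

Everything that falls through to the “unidentified” bucket is exactly the manual-intervention minority you want to keep small.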
Once you know what kind of device you are dealing with, you need to start evaluating risk, a combination of the device’s vulnerability and exploitability. Vulnerability is a question of what could possibly happen. An attacker can do certain things with a printer which are impossible with an infusion pump, and vice-versa. So device type is key context. You also need to assess security vulnerabilities within the device; some devices may warrant an active scan upon identification to gather more granular information. As we warned above, be careful with active scanning to avoid risking device availability. You can glean some information about vulnerabilities through passive scanning, but it requires quite a bit more interpretation, and is subject to higher false positive rates.
Exploitability depends on the security controls and/or configurations already in place on the device. A warehouse picker robot may run embedded Windows XP, but if the robot also runs a whitelist, malicious code cannot execute, so it might show up as vulnerable but not exploitable. The other main aspect of exploitability is attack path. If an external entity cannot access the warehouse system because it has no Internet-facing networks, even the vulnerable picker robot poses relatively little risk unless the physical location is attacked.
The final aspect of determining risk to a device is looking at what it has access to. If a device has no access to anything sensitive, then again it poses little risk. Of course that assumes your networks are sufficiently isolated. Determining risk is all about prioritization. You only have so many resources, so you need to choose what to fix wisely, and evaluating risk is the best way to allocate those scarce resources.
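To make the prioritization concrete, here is a toy scoring model treating risk as the product of vulnerability, exploitability, and data access. The factors, scales, and example devices are entirely arbitrary; any real model would need tuning to your environment:

```python
def risk_score(vulnerability, exploitability, data_access):
    """Combine three 0-10 factors into one priority score.

    Multiplicative on purpose: a device with no path to sensitive
    data (data_access == 0) scores zero however vulnerable it is.
    """
    return vulnerability * exploitability * data_access

# Fabricated example devices, scored on made-up 0-10 factors.
devices = {
    "warehouse robot (XP, whitelisted, isolated)": risk_score(9, 2, 1),
    "lobby printer (default creds, flat network)": risk_score(6, 8, 5),
    "infusion pump (PHI access, unpatched)": risk_score(8, 5, 9),
}
for name, score in sorted(devices.items(), key=lambda kv: -kv[1]):
    print(f"{score:4d}  {name}")
```

The point of the multiplication is the argument in the text: the highly vulnerable but whitelisted, isolated robot lands at the bottom of the list.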
Once you know what’s out there in the shadows, your next step is to figure out whether, and perhaps how, to protect those devices. This again comes back to the risk profiles discussed above. It doesn’t make much sense to spend a lot of time and money protecting devices which don’t present much risk to the organization. But when a device does present sufficient risk, how will you go about protecting it?
First things first: you should be making sure the device is configured in the most secure fashion. Yeah, yeah, that sounds trite and simple, but we mention it anyway because it’s shocking how many devices can be exploited due to open services that can easily be turned off. Once you have basic device hygiene taken care of, here are some other ways to protect devices:
- Active Controls: The first and most direct way to protect a shadow device is by implementing an active control on it. The available controls vary depending on the kind of device and its embedded operating system. The most reliable means of protection is to stop execution of non-authorized programs. Yes, we are referring to whitelisting. There are commercial products available for embedded Windows devices, and a few options (some commercial, some open source) for other operating environments. There are also other defenses, including malware detection and even some forensic offerings, to really determine what’s happening on devices. Just keep in mind that active controls consume resources and can impair device stability, so you’ll want to exhaustively test any controls to be implemented on these devices to ensure you understand their impact.
- Network Isolation: These devices are on the network, which means traditional network security defenses can (and should) be used in conjunction with active controls. This involves using firewalls and/or routers & switches to provide a measure of isolation between these device networks and your data stores.
- Egress Filtering: We also recommend egress filtering on networks with these shadow devices. You can detect command and control, as well as remote access, by looking at outbound network traffic. Remember, these devices aren’t inherently malicious, so if something bad is happening it’s because an adversary is controlling the device, which means it’s communicating with a bot network or other controller outside your network, and that can be tracked.
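As a rough sketch of the egress filtering idea: a shadow device should only ever talk to a handful of known destinations, so anything else in the outbound flow logs deserves an alert. All addresses and flows below are fabricated:

```python
# Sketch of egress filtering for a shadow-device subnet: flag any
# outbound flow whose destination is not on that device's expected
# list. All addresses here are fabricated examples.

ALLOWED_DESTINATIONS = {
    "10.0.1.77": {"10.0.9.10"},                # printer -> print server only
    "10.0.2.15": {"10.0.9.20", "10.0.9.21"},   # pump -> med-records servers
}

def check_egress(flows):
    """Return (src, dst) flows violating the per-device whitelist."""
    alerts = []
    for src, dst in flows:
        if dst not in ALLOWED_DESTINATIONS.get(src, set()):
            alerts.append((src, dst))
    return alerts

flows = [
    ("10.0.1.77", "10.0.9.10"),    # expected print traffic
    ("10.0.1.77", "203.0.113.9"),  # printer calling the Internet: alert
]
print(check_egress(flows))  # [('10.0.1.77', '203.0.113.9')]
```

In practice the same logic lives in firewall egress rules or a flow-analysis tool rather than a script, but the whitelist-by-device model is the core of it.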
An increasing limitation for network-oriented controls is encrypted traffic. We have published research on dealing with encrypted networks – there are ways to peek into encrypted traffic streams. When deciding between network and active controls for a device, you need to consider encryption – network-based controls are easiest because they don’t require device interaction, but they may introduce blind spots, particularly thanks to encryption.
Finally we should mention the challenges of implementing fixes and workarounds – not only when dealing with shadow devices, but for all devices on networks now. There just isn’t enough staff to address all requirements, which means organizations need to think creatively about how to make limited staff more productive. One emerging way of magnifying staff efforts is automation. You have existing controls, which can be reconfigured based on specific rules. Of course automation of existing technologies requires integration with the existing security controls, but lately we have seen considerable innovation in this area, born out of necessity.
Our last step in figuring out your strategy to provide visibility and control over shadow devices is to determine which fixes can be automated, and establishing a strategy for that automation. We caution again that automation is a double-edged sword, which must be used carefully to ensure the ‘cure’ isn’t much worse than the symptoms. So ensure sufficient testing, and have reasonable expectations for what you can automate – in both the short and longer terms.
With that we wrap up our series on “Shining a Light on Shadow Devices”. As we described, there will be an explosion of devices connecting to the networks you are tasked to protect, and without sufficient visibility into what is connecting to them you are blind to potential attacks. There are a variety of ways to find these devices, and to implement controls protecting them. But you can’t protect what you can’t see, so make sure to shine a light on all the dark places in your networks so you are fully aware of your attack surface.
Posted at Monday 16th May 2016 8:49 pm
(1) Comments •
By Mike Rothman
In the SIEM Kung Fu paper, we tell you what you need to know to get the most out of your SIEM, and solve the problems you face today by increasing your capabilities (the promised Kung Fu).
We would like to thank Intel Security for licensing the content in this paper. Our unique Totally Transparent Research model allows us to do objective and useful research and still pay our bills, so we’re thankful to all of the companies that license our research.
Check out the page in the research library or download the paper directly (PDF).
Posted at Tuesday 10th May 2016 7:39 pm
(1) Comments •
By Adrian Lane
In 2015 we researched Putting Security Into DevOps, with a close look at how automated continuous deployment and DevOps impacted IT and application security. We found DevOps provided a very real path to improve application security using continuous automated testing, run each time new code was checked in. We were surprised to discover developers and IT teams taking a larger role in selecting security solutions, and bringing a new set of buying criteria to the table. Security products must do more than address application security issues; they need to mesh with continuous integration and deployment approaches, with automated capabilities and better integration with developer tools.
But the biggest surprise was that every team which had moved past Continuous Integration and on to Continuous Deployment (CD) or DevOps asked us about RASP: Runtime Application Self-Protection. Each team was either considering RASP, or already engaged in a proof-of-concept with a RASP vendor. We understand we had a small sample size, and the number of firms who have embraced either CD or DevOps application delivery is a very small subset of the larger market. But we found that once they started continuous deployment, each firm hit the same issues: the need to automate security, the need to test in pre-production, configuration skew between pre-production and production, and the ability of security products to identify where issues were detected in the code. In fact it was our DevOps research which placed RASP at the top of our research calendar, thanks to perceived synergies.
There is no lack of data showing that applications are vulnerable to attack. Many applications are old and simply contain too many flaws to fix. You know, that back office application that should never have been allowed on the Internet to begin with. In most cases it would be cheaper to re-write the application from scratch than to patch all the issues, but economics seldom justify (or even permit) the effort. Other application platforms, even those considered ‘secure’, are frequently found to contain vulnerabilities after decades of use. Heartbleed, anyone? New classes of attacks, and even new use cases, have a disturbing ability to unearth previously unknown application flaws. We see two types of applications: those with known vulnerabilities today, and those which will have known vulnerabilities in the future. So tools to protect against these attacks, which mesh well with the disruptive changes occurring in the development community, deserve a closer look.
Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically contain the following capabilities:
- Monitor and block application requests; in some cases they can alter requests to strip out malicious content
- Full functionality through RESTful APIs
- Integration with the application or application instance (virtualization)
- Unpack and inspect requests in the application, rather than at the network layer
- Detect whether an attack would succeed
- Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
- Instrument application usage
These capabilities overlap with white box & black box scanners, web application firewalls (WAF), next-generation firewalls (NGFW), and even application intelligence platforms. And RASP can be used in coordination with any or all of those other security tools. So the question you may be asking yourself is “Why would we need another technology that does some of the same stuff?” It has to do with the way it is used and how it is integrated.
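To make “embeds into the application” concrete, here is a deliberately crude sketch of the RASP concept: a wrapper that inspects request parameters inside the application, where full context is available, rather than at the network layer. The pattern list is incomplete and purely illustrative; commercial RASP products hook the runtime far more deeply than string matching:

```python
import functools
import re

# Crude, illustrative "self-protection" wrapper: block requests whose
# parameters look like SQL injection. Real RASP instruments the runtime
# (e.g., the actual database call) rather than pattern-matching strings.
SUSPICIOUS = re.compile(r"('|--|;|\bUNION\b|\bDROP\b)", re.IGNORECASE)

class BlockedRequest(Exception):
    pass

def rasp_guard(handler):
    @functools.wraps(handler)
    def wrapper(**params):
        for name, value in params.items():
            if isinstance(value, str) and SUSPICIOUS.search(value):
                raise BlockedRequest(f"blocked: parameter {name!r}")
        return handler(**params)
    return wrapper

@rasp_guard
def lookup_user(username):
    return f"profile for {username}"  # stand-in for a real DB query

print(lookup_user(username="alice"))
try:
    lookup_user(username="alice' OR 1=1 --")
except BlockedRequest as e:
    print(e)
```

Because the check runs inside the application with the decoded parameter in hand, it avoids the parsing ambiguity a network-layer WAF has to fight through; that positioning, not the toy regex, is the point.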
Differing Market Drivers
As RASP is a (relatively) new technology, current market drivers are tightly focused on addressing the security needs of one or two distinct buying centers. But RASP offers a particular blend of capabilities and usability options which makes it unique in the market. The key drivers include:
- Demand for security approaches focused on development, enabling pre-production and production application instances to provide real-time telemetry back to development tools
- Need for fully automated application security, deployed in tandem with new application code
- Technical debt, where essential applications contain known vulnerabilities which must be protected, either while defects are addressed or permanently if they cannot be fixed for any of various reasons
- Application security supporting development and operations teams who are not all security experts
The remainder of this series will go into more detail on RASP technology, use cases, and deployment options:
- Technical Overview: This post will discuss technical aspects of RASP products – including what the technology does; how it works; and how it integrates into libraries, runtime code, or web application services. We will discuss the various deployment models including on-premise, cloud, and hybrid. We will discuss some of the application platforms supported today, and how we expect the technology to evolve over the next couple years.
- Use Cases: Why and how firms use this technology, with medium and large firm use case profiles. We will consider advantages of this technology, as well as the problems firms are trying to solve using it – especially around deficiencies in code security and development processes. We will provide some detail on monitoring vs. blocking threats, and discuss applicability to security and compliance requirements.
- Deploying RASP: This post will focus on how to integrate RASP into development and release management processes. We will also jump into a more detailed discussion of how RASP differs from adjacent technologies like WAF, NGFW, and IDS, as well as static and dynamic application testing. We will walk through the advantages of each technology, with a special focus on operational considerations and keeping detection/blocking up to date, and discuss tradeoffs compared to other relevant security technologies. This post will close with an example of a development pipeline, and how RASP technology fits in.
- Buyers Guide: This is a new market segment, so we will offer a basic analysis process for buyers to evaluate products. We will consider integration with existing development processes and rule management.
Posted at Tuesday 10th May 2016 12:51 am
(0) Comments •
We have been getting questions on our training classes this year, so I thought I should update everyone on major updates to our ‘old’ class, and what to expect from our ‘advanced’ class. The short version is that we are adding new material to our basic class, to align with upcoming Cloud Security Alliance changes and cover DevOps. It will still include some advanced material, but we are assuming the top 10% (in terms of technical skills) of students will move to our new advanced class instead, enabling us to focus the basic class on the meaty part of the bell curve.
Over the past few years our Black Hat Cloud Security Hands On class became so popular that we kept adding instructors and seats to keep up with demand. Last year we sold out both classes and increased the size to 60 students, then still sold out the weekday class. That’s a lot of students, but the course is tightly structured with well-supported labs to ensure we can still provide a high-quality experience. We even added a bunch of self-paced advanced labs for people with stronger skills who wanted to move faster.
The problem with that structure is that it really limits how well we can support more advanced students. Especially because we get a much wider range of technical skills than we expected at a Black Hat branded training. Every year we get sizable contingents from both extremes: people who no longer use their technical skills (managers/auditors/etc.), and students actively working in technology with hands-on cloud experience. When we started this training 6 years ago, nearly none of our students had ever launched a cloud instance.
Self-paced labs work reasonably well, but don’t really let you dig in the same way as focused training. There are also many major cloud advances we simply cannot cover in a class which has to appeal to such a wide range of students. So this year we launched a new class (which has already sold out, and been expanded), and are updating the main class. Here are some details, with guidance on which is likely to fit best:
Cloud Security Hands-On (CCSK-Plus) is our introductory 2-day class for those with a background in security, but who haven’t worked much in the cloud yet. It is fully aligned with the Cloud Security Alliance CCSK curriculum: this is where we test out new material and course designs to roll out throughout the rest of the CSA. This year we will use a mixed lecture/lab structure, instead of one day of lecture with labs the second day.
We have started introducing material to align with the impending CSA Guidance 4.0 release, which we are writing. We still need to align with the current exam, because the class includes a token to take the test for the certificate, but we also wrote the test, so we should be able to balance that. This class still includes extra advanced material not normally in the CSA training, as well as the self-paced advanced labs. Time permitting, we will also add an intro to DevOps.
But if you are more advanced you should really take Advanced Cloud Security and Applied SecDevOps instead. This 2-day class assumes you already know all the technical content in the Hands-On class and are comfortable with basic administration skills, launching instances in AWS, and scripting or programming. I am working on the labs now, and they cover everything from setting up accounts and VPCs usable for production application deployments, to building a continuous deployment pipeline and integrating security controls, to integrating PaaS services like S3, SQS, and SNS, to security automation through code (both serverless with Lambda functions and server-based).
If you don’t understand any of that, take the Hands-On class instead.
The advanced class is nearly all labs, and even most lecture will be whiteboards instead of slides. The labs aren’t as tightly scripted, and there is a lot more room to experiment (and thus more margin for error). They do, however, all interlock to build a semblance of a real production deployment with integrated security controls and automation. I was pretty excited when I figured out how to build them up and tie them together, instead of having everything come out of a bucket of unrelated tasks.
Hopefully that clears things up, and we look forward to seeing some of you in August.
Oh, and if you work for BIGCORP and can’t make it, we also provide private trainings these days.
Here are the signup links:
Posted at Monday 9th May 2016 7:44 pm
(0) Comments •
By Mike Rothman
What is the real risk of the Shadow Devices we described back in our first post? It is clear that most organizations don’t really take these risks seriously. They certainly don’t have workarounds in place, or proactively segment their environments to ensure that compromising these devices doesn’t give attackers a foothold in their environments. Let’s dig into three broad device categories to understand what attacks look like.
Do you remember how cool it was when the office printer got a WiFi connection? Suddenly you could put it wherever you wanted, preserving the Feng Shui of your office, instead of having it tethered to the network drop. And when the printer makers started calling their products image servers, not just printers? Yeah, that was when they started becoming more intelligent, and also tempting targets.
But what is the risk of taking over a printer? It turns out that even in our paperless offices of the future, organizations still print out some pretty sensitive stuff, and documents they don’t want to keep on paper are scanned for storage/archival. Whether going in or out, sensitive content is hitting imaging servers. Many of them store the documents they print and scan until their memory (or embedded hard drive) is overwritten. So sensitive documents persist on devices, accessible to anyone with access to the device, either physical or remote.
Even better, many printers are vulnerable to common wireless attacks like the evil twin, where a fake device with a stronger wireless signal impersonates the real printer. So devices connect (and print) documents to the evil twin and not the real printer – the same attack works with routers too, but the risk is much broader. Nice. But that’s not all! The devices typically use some kind of stripped-down UNIX variant at the core, and many organizations don’t change the default passwords on their image servers, enabling attackers to trigger remote firmware updates and install compromised versions of the printer OS. Another attack vector is that these imaging devices now connect to cloud-based services to email documents, so they have all the plumbing to act as a spam relay.
Most printers use similar open source technologies to provide connectivity, so generic attacks generally work against a variety of manufacturers’ devices. These peripherals can be used to steal content, attack other devices, and provide a foothold inside your network perimeter. That makes these both direct and indirect targets.
These attacks aren’t just theoretical. We have seen printers hijacked to spread inflammatory propaganda on college campuses, and Chris Vickery showed proof of concept code to access a printer’s hard drive remotely.
Our last question is what kind of security controls run on imaging servers. The answer is: not much. To be fair, vendors have started looking at this more seriously, and were reasonably responsive in patching the attacks mentioned above. But that said, these products do not get the same scrutiny as other PC devices, or even some other connected devices we will discuss below. Imaging servers see relatively minimal security assessment before coming to market.
We aren’t just picking on printers here. Pretty much every intelligent peripheral is similarly vulnerable, because they all have operating systems and network stacks which can be attacked. It’s just that offices tend to have dozens of printers, which are frequently overlooked during risk assessment.
If printers and other peripherals seem like low-value targets, let’s discuss something a bit higher-value: medical devices. In our era of increasingly connected medical devices – including monitors, pumps, pacemakers, and pretty much everything else – there hasn’t been much focus on product security, except in the few cases where external pressure is applied by regulators. These devices either have IP network stacks or can be configured via Bluetooth – neither of which is particularly well protected.
The most disturbing attacks threaten patient health. There are all too many examples of security researchers compromising infusion and insulin pumps, jackpotting drug dispensaries, and even the legendary Barnaby Jack messing with a pacemaker. We know one large medical facility that took it upon itself to hack all its devices in use, and deliver a list of issues to the manufacturers. But there has been no public disclosure of results, or whether device manufacturers have made changes to make their devices safe.
Despite the very real risk of medical devices being targeted to attack patient health, we believe most of the current risk involves information. User data is much easier for attackers to monetize; medical profiles have a much longer shelf-life and much higher value than typical financial information. So ensuring that Protected Health Information is adequately protected remains a key concern in healthcare.
That means making sure there aren’t any leakages in these devices, which is not easy without a full penetration test. On the positive front, many of these devices have purpose-built operating systems, so they cannot really be used as pivot points for lateral movement within the network. Yet few have any embedded security controls to ensure data does not leak. Further complicating matters, some devices still use deprecated operating systems such as Windows XP and even Windows 2000 (yes, seriously), and outdated compliance mandates often mean they cannot be patched without recertification. So administrators often don’t update the devices, and hope for the best. We can all agree that hope isn’t a sufficient strategy.
With lives at stake, medical device makers are starting to talk about more proactive security testing. Similarly to the way a major SaaS breach could prove an existential threat to the SaaS market, medical device makers should understand what is at risk, especially in terms of liability, but that doesn’t mean they understand how to solve the problem. So the burden lands on customers to manage their medical device inventories, and ensure they are not misused to steal data or harm patients.
Industrial Control Systems
The last category of shadow devices we will consider is control systems. These devices range from SCADA systems running power grids, to warehousing systems ensuring the right merchandise is picked and shipped, to manufacturing systems running robotics and heavy building machinery. All these devices are networked (whether directly or indirectly) in today’s advanced factories, so there is attack surface to monitor and protect.
We know these systems can be attacked. Stuxnet was a very advanced attack on nuclear centrifuges. Once inside the nuclear facility’s network, the adversaries compromised a number of different types of control systems to access the centrifuges and break them. In a recent attack on a German blast furnace, the control systems were compromised and the failsafes rendered inoperable; the facility went offline while they cleaned the systems up.
In both cases, and likely many others that aren’t publicized, the adversaries were very advanced. They need to be – to attack a centrifuge like Stuxnet did, you need your own centrifuges to test on, and they aren’t exactly easy to find on eBay. You can’t just load a blast furnace into the pickup on Saturday morning for a pen test.
That may comfort some people, but it shouldn’t. The implication is that control system defenders aren’t dealing with the Metasploit crowd, but instead trying to repel well-funded and capable adversaries. They need a very clear idea of what their attack surface looks like, and some way of monitoring their devices; they cannot rely on compliance mandates to require advanced security on their control systems.
Another consideration with control systems is the brittle nature of many of them. They are hard to test because you could bring down the system while trying to figure out whether it’s vulnerable. Most organizations don’t like that trade-off, so they don’t test directly. This means you need indirect techniques – definitely to figure out how vulnerable they are, and probably to discover and monitor them as well.
Which makes a good segue to our next post, where we will dig into two aspects of protecting shadow devices: Visibility and Control. First and foremost you need to figure out where these devices really are. Then you can worry about how to ensure they are not being attacked or misused.
Posted at Monday 9th May 2016 3:56 pm
(0) Comments •
It’s been a busy couple weeks, and the pace is only ramping up. This week I gave a presentation and a workshop at Interop. It seemed to go well, and the networking-focused audience was very receptive. Next week I’m out at the Rocky Mountain Infosec Conference, which is really just an excuse to spend a few more days back near my old home in Colorado. I get home just in time for my wife to take a trip, then even before she’s back I’m off to Atlanta to keynote an IBM Cybersecurity Seminar (free, if you are in the area). I’m kind of psyched for that one because it’s at the aquarium, and I’ve been begging Mike to take me for years.
Not that I’ve been to Atlanta in years.
Then some client gigs, and (hopefully) things will slow down a little until Black Hat. I’m updating our existing (now ‘basic’) cloud security class, and building the content for our Advanced Cloud Security and Applied SecDevOps class. It looks like it will be nearly all labs and whiteboarding, without too many lecture slides, which is how I prefer to learn.
This week’s stories are wide-ranging, and we are nearly at the end of our series highlighting continuous integration security testing tools. Please drop me a line if you think we should include commercial tools. We work with some of those companies, so I generally try to avoid specific product mentions. Just email.
You can subscribe to only the Friday Summary.
Top Posts for the Week
Tool of the Week
It’s time to finish off our series on integrating security testing tools into deployment pipelines with Mittn, which is maintained by F-Secure. Mittn is like Gauntlt and BDD-Security in that it wraps other security testing tools, allowing you to script automated tests into your CI server. Each of these tools defaults to a slightly different set of integrated security tools, and there’s no reason you can’t combine multiple tools in a build process.
Basically, when you define a series of tests in your build process, you tie one of these into your CI server as a plugin or use scripted execution. You pass in security tests using the template for your particular tool, and it runs your automated tests. You can even spin up a full virtual network environment to test just like production.
I am currently building this out myself, both for our training classes and our new securosis.com platform. For the most part it’s pretty straightforward… I have Jenkins pulling updates from Git, and am working on integrating Packer and Ansible to build new server images. Then I’ll mix in the security tests (probably using Gauntlt to start). It isn’t rocket science or magic, but it does take a little research and practice.
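The wiring itself isn’t complicated. Here’s a rough sketch of the pattern (the `gauntlt` invocation in the comment is illustrative, not a guaranteed interface): run the wrapped security tool as a build step, and fail the build on a non-zero exit code, which is the one convention every CI server understands.

```python
import subprocess
import sys

def security_gate(cmd):
    """Run a wrapped security test (e.g. via Gauntlt or Mittn) and
    return True only if it passed. CI servers like Jenkins treat a
    non-zero exit code as a failed build, so we just propagate it."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print("Security tests failed:\n" + result.stdout)
    return result.returncode == 0

# In a build step you'd call something like:
#   if not security_gate(["gauntlt", "attacks/"]):
#       raise SystemExit(1)
```

Once this works as a dumb shell step, graduating to the native CI plugins is mostly a convenience, not a change in the model.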
Securosis Blog Posts this Week
Other Securosis News and Quotes
Another quiet week.
Training and Events
Posted at Friday 6th May 2016 6:00 am
(0) Comments •
As part of updating All Things Securosis, the time has come to migrate our mailing lists to a new provider (MailChimp, for the curious). The CAPTCHA at our old provider wasn’t working properly, so people couldn’t sign up. I’m not sure if that’s technically irony for a security company, but it was certainly unfortunate. So…
If you weren’t expecting this, for some reason our old provider had you listed as active. If so, we are really sorry; please click the unsubscribe link at the bottom of the email (yes, some of you are just reading this on the blog). We did our best to prune the list and only migrated active subscriptions (our lists were always self-subscribe to start), but the numbers look a little funny, and let’s just say there is a reason we switched providers. Really, we don’t want to spam you; we hate spam. If this shows up in your inbox unwanted, the unsubscribe link will work, and feel free to email us or reply directly. I’m hoping it’s only a few people who unsubscribed during the transition.
If you want to be added, we have two different lists – one for the Friday Summary (which is all cloud, security automation, and DevOps focused), and the Daily Digest of all emails sent the previous day. We only use these lists to send out email feeds from the blog, which is why I’m posting this on the site and not sending directly. We take our promises seriously and those lists are never shared/sold/whatever, and we don’t even send anything to them outside blog posts. Here are the signup forms:
Now if you received this in email, and sign up again, that’s very meta of you and some hipster is probably smugly proud.
Thanks for sticking with us, and hopefully we will have a shiny new website to go with our shiny new email system soon. But the problem with hiring designers that live in another state is flogging becomes logistically complex, and even the cookie bribes don’t work that well (especially since their office is, literally, right above a Ben and Jerry’s).
And again, apologies if you didn’t expect or want this in your inbox; we spent hours trying to pull only active subscribers and then clean everything up but I have to assume mistakes still happened.
Posted at Wednesday 4th May 2016 10:25 pm
(0) Comments •
In our wanderings we’ve noticed that when we pull our heads out of the bubble, not everyone necessarily understands what cloud is or where it’s going. Heck, many smart IT people are still framing it within the context of what they currently do. It’s only natural, especially when they get crappy advice from clueless consultants, but it certainly can lead you down some ugly paths. This week Mike, Adrian and Rich are also joined by Dave Lewis (who accidentally sat down next to Rich at a conference) to talk about how people see cloud, the gaps, and how to navigate the waters.
Watch or listen:
Posted at Tuesday 3rd May 2016 1:00 pm
(1) Comments •
Okay, have I mentioned how impatient I’m getting about updating our site? Alas, there is only so fast you can push a good design and implementation. The foundation is all set and we hope to start transferring everything into our new AWS architecture within the next month.
In the meantime I just pushed some new domain outlines for the Cloud Security Alliance Guidance into the GitHub repository for public feedback. I’m also starting to tie together the labs for our Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps training. I have this weird thing where I like labs to build up into a full stack that resembles something you might actually deploy. It works well, but takes a lot more time to piece together.
If you want to subscribe directly to the Friday Summary only list, just click here.
Top Posts for the Week
- This continues the huge legal problems due to pressures from U.S. law enforcement. It’s aligned with the Microsoft case in Ireland and the Apple vs. FBI issues here. Basically, it’s going to be very hard for U.S. tech companies to compete internationally if they can’t assure customers they meet local privacy and security laws: Microsoft sues US government over ‘unconstitutional’ cloud data searches
- This topic comes up a lot. One interesting thing I hadn’t seen before is the ability to inject activity into your AWS account so you can run a response test (slide 13). Let me know if this is possible on other cloud providers: Security Incident Response and Forensics on AWS
- Google Compute Platform racks up some more certifications. Normally I don’t cover each of these, but from time to time it’s worth highlighting that the major providers are very aggressive on their audits and certifications: Now playing: New ISO security and privacy certifications for Google Cloud Platform
- There are two papers linked on this Azure post on security and incident response. The IR one should be of particular interest to security pros: Microsoft Incident Response and shared responsibility for cloud computing
- An interview and transcript from some top-notch DevOps security pros: Rugged DevOps: Making Invisible Things Visible
- Zero trust is a concept that’s really starting to gain some ground. I know one client who literally doesn’t trust their own network: users need to VPN in even from the office, and all environments are compartmentalized. This is actually easier to do in the cloud vs. a traditional datacenter, especially if you use account segregation: Zero Trust Is a Key to DevOps Security.
- While it doesn’t look like anyone exploited this vulnerability, it’s still not good, and Office 365 is one of the most highly tested platforms out there: Office 365 Vulnerability Exposed Any Federated Account
- I keep bouncing around testing the different platforms. So far I like Ansible better for deployment pipelines, but Chef or Puppet for managing live assets. However, I don’t run much that isn’t immutable, so I don’t have a lot of experience running them at scale in production. If you have any opinions, please email me: Ansible vs Chef
Tool of the Week
Two weeks ago I picked the Gauntlt security testing tool as the Tool of the Week. This week I’ll add to the collection with BDD-Security by ContinuumSecurity (it’s also available on GitHub).
BDD stands for “Behavior Driven Development”. It’s a programming concept from outside security, used for application testing in general. Conceptually, you define a test as “given A when X then Y”. In security terms this could be: “given a user tries to log in and fails four times, then lock the account”. BDD-Security supports these kinds of tests, and includes both internal assessment features and the ability to integrate external tools, including Nessus, similar to Gauntlt.
Here’s what it would look like directly from an Adobe blog post on the topic:
Scenario: Lock the user account out after 4 incorrect authentication attempts
Meta: @id auth_lockout
Given the default username from: users.table
And an incorrect password
And the user logs in from a fresh login page 4 times
When the default password is used from: users.table
And the user logs in from a fresh login page
Then the user is not logged in
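To make the scenario concrete, here is a minimal sketch, in Python and purely illustrative (it is not part of BDD-Security), of the lockout behavior those steps exercise:

```python
class LoginGuard:
    """Minimal sketch of the policy the scenario above tests:
    lock an account after 4 consecutive failed authentication attempts."""
    MAX_FAILURES = 4

    def __init__(self):
        self.failures = {}

    def attempt(self, user, password_ok):
        # A locked account rejects logins even with the correct password,
        # which is exactly what the final "Then" step asserts.
        if self.failures.get(user, 0) >= self.MAX_FAILURES:
            return "locked"
        if password_ok:
            self.failures[user] = 0
            return "logged_in"
        self.failures[user] = self.failures.get(user, 0) + 1
        return "failed"
```

The BDD scenario drives the real application through a browser or API, but the assertion it makes maps directly onto logic like this.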
These tools are designed to automate security testing in the development pipeline, but have the added advantage of speaking to developers on their own terms. We aren’t hitting applications with some black box scanner from the outside that only security understands; we are integrating our tools in a familiar, accepted way, using a common language.
Securosis Blog Posts this Week
Other Securosis News and Quotes
Training and Events
- I’m keynoting a free seminar for IBM at the Georgia Aquarium on May 18th. I’ve been wanting to go there for years, so I scheduled a late flight out if you want to stalk me as I look at fish for the next few hours.
- I’m presenting a session and running a half-day program at Interop next week. Both are on cloud security.
- I’m also presenting at the Rocky Mountain Information Security Conference in Denver on May 11/12. Although I live in Phoenix these days, Boulder is still my home town, and I’m psyched anytime I get back there. Message me privately if you get in early and want to meet up.
- We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning:
Posted at Friday 29th April 2016 5:41 am
(0) Comments •
By Mike Rothman
I mentioned back in January that XX1 had gotten her driver’s permit and was in command of a two-ton weapon on a regular basis. Driving with her has been, uh, interesting. I try to give her an opportunity to drive where possible, like when I have to get her to school in the morning. She can navigate the couple of miles through traffic on the way to her school. And she drives to/from her tutor as well, but that’s still largely local travel.
Though I do have to say, I don’t feel like I need to run as frequently because the 15-20 minutes in the car with her gets my heart racing for the entire trip. Obviously having been driving for over 30 years, I see things as they develop in front of me. She doesn’t. So I have to squelch the urge to say, “Watch that dude over there, he’s about to change lanes.” Or “That’s a red light and that means stop, right?” Or “Hit the f***ing brakes before you hit that car, building, child, etc.”
She only leveled a garbage bin once, which caused more damage to her ego and confidence than to the car or the bin. So overall, it’s going well. But I’m not taking chances, and I want her to really understand how to drive. So I signed her up for the B.R.A.K.E.S. teen defensive driver training. Due to some scheduling complexity, taking the class in New Jersey worked better, so we flew up last weekend and stayed with my Dad on the Jersey Shore.
First, a little digression. When you have 3 kids with crazy schedules, you don’t get a lot of individual time with any of them. So it was great to spend the weekend with her, and I definitely got a much greater appreciation for the person she is in this moment. As we were sitting on the plane, I glanced over and she seemed so big. So grown up. I got a little choked up as I had to acknowledge how quickly time is passing. I remember bringing her home from the hospital like it was yesterday. Then we were at a family event on Saturday night with some cousins by marriage whom she doesn’t know very well. Watching her interact with these folks, hold a conversation, and be funny and engaging and cute, I was overwhelmed with pride as she brought light to the situation.
But then it was back to business. First thing Sunday morning we went over to the race track. They did the obligatory video to scare the crap out of the kids. The story of B.R.A.K.E.S. is a heartbreaking one. Doug Herbert, who is a professional drag racer, started the program after losing his two sons in a teen driving accident. So he travels around the country with a band of other professional drivers teaching teens how to handle the vehicle.
The statistics are shocking. Upwards of 80% of teens will get into an accident in their first 3 years of driving. There are 5,000 teen driving fatalities each year. And these kids get very little training before they are put behind the wheel to figure it out.
The drills for the kids are very cool. They practice accident avoidance and steering while panic braking. They do a skid exercise to understand how to keep the car under control during a spin. They do slalom work to make sure they understand how far they can push the car and still maintain control. The parents even got to do some of the drills (which was very cool). They also do a distracted driving drill, where the instructor messes with the kids to show them how dangerous it is to text and play with the radio while driving. They also have these very cool drunk goggles, which simulate your vision when under the influence. It’s hard to see how any of the kids would get behind the wheel drunk after trying to drive with those goggles on.
I can’t speak highly enough about the program. I let XX1 drive back from the airport and she navigated downtown Atlanta, a high-traffic situation on a 7-lane highway, and was able to avoid an accident when a knucklehead slowed down to 30 on the highway trying to switch lanes to make an exit. Her comfort behind the wheel was totally different, and her skills had clearly advanced in just four hours. If you have an opportunity to attend with your teen, don’t think about it. Just do it. Here is the schedule of upcoming trainings, and you should sign up for their mailing list.
The training works. They have run 18,000 teens through the program and not one of them has had a fatal accident. That’s incredible. And important. Especially given my teen will be driving without me (or her Mom) in the car in 6 months. I want to tip the odds in my favor as much as I can.
Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.
We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.
Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Maximizing WAF Value
Resilient Cloud Network Architectures
Building a Vendor IT Risk Management Program
SIEM Kung Fu
Recently Published Papers
Incite 4 U
Blockchain demystified: So I’m having dinner with a very old friend of mine, who is one of the big wheels at a very big research firm. We started together decades ago as networking folks, but I went towards security and he went towards managing huge teams of people. One of his coverage areas now is industry research, specifically financials. So new currencies, including Bitcoin, are near and dear to his heart. But he didn’t call it Bitcoin; he said blockchain. I had no idea what he was talking about, but our pals at NetworkWorld put up a primer on blockchain. Regardless of whether it was a loudmouth Australian who came up with the technology, it’s going to become a factor in how transactions are validated over time, and large tech companies and banks are playing around with it. Being security folks, tasked at some point with protecting things, we probably need to at least understand how it works. – MR
Luck Meets Perseverance: I’m not the first to say that being too early to a market and being dead wrong are virtually indistinguishable, certainly when the final tally is counted. When disruptive technologies emerge during tough economic times, visionaries teeter on oblivion. The recent Farnam Street post on The Creation of IBM’s Competitive Advantage has nothing to do with security. However, it is an excellent illustration of what it takes to succeed: a strong vision of the future, a drive to innovate, the fortitude to do what others think is crazy, and enough cash to weather the storm for a while. If you’re not familiar with the story, it’s definitely worth your time. But this storyline remains relevant as dozens of product vendors struggle to sell the future of security to firms that are just trying to get through the day. We see many security startups with the first three attributes. That much is common. We don’t see the fourth attribute much at all, which is cash. Most innovation in technology is funded by VCs with the patience of a 5-year-old who’s just eaten a bowl of Cap’n Crunch. IBM was lucky enough to sell off ineffectual businesses to build a war chest, then focused their time, effort, and cash on the long-term goal. That recipe never goes out of style, but with the common early stage technology funding model, we witness lots of very talented people crash and burn. – AL
Internet Kings Do Research: With ever-increasing profits expected by companies (yes, it’s that free market thing), it seems that there isn’t a lot of commercially funded basic research. You know, like the kind Microsoft did back in the day? Stuff that didn’t necessarily help sell more Windows, but rather helped push forward the use of technology. MSFT had the profits to support that. And now it seems Google and Facebook do too. In fact, rock star researchers are moving from one to the other to drive these basic research initiatives. Most recently, Regina Dugan, who spent time at DARPA working on military research before heading off to Google to lead their advanced tech lab, is now with Facebook to build a similar team. I think it’s awesome, and I’m glad that every time I like something on Facebook it contributes to funding research that may change the world at some point. – MR
EU Data Protection Reform: The EU has approved a new data protection standard set to reform the standing 1995 rules. There are several documents published, so it will take time for a thorough analysis, but there are a couple of things to like here. The right to be forgotten, the right to know when your data has been hacked, and the need for consent to process your data are excellent ideas conceptually. With storage virtually limitless, companies and governments have no incentive to forget you or take precautions on your behalf unless pushed. So this is a step in the right direction. But as usual, I’m skeptical of most of the proposal: There is no provision to extend protection to 3rd parties that access citizens’ data. There is no way to opt out of having your data shared amongst third parties, or transparency when law enforcement goes rummaging around in your junk, which should be treated no differently than “being hacked”. In fact the EU seemed to go overboard to accommodate “law enforcement”, aka government access, without much oversight. Different day, same story. And without a way for someone to verify you’ve actually been ‘forgotten’ – and not just backed up to different servers – the clause is pretty much worthless. It’s a good next step, and hopefully we won’t wait another 20+ years for an update. – AL
Shyster pen testers: It’s inevitable. There aren’t enough security folks, and services shops are expected to grow. So what do you do? The bait and switch, of course. Have a high-level, well-regarded tester, then have a bunch of knucklehead kids do the actual test. Then write some total crap report and cash the check. As the Chief Monkey details, there are a lot of shysters in the business now. This is problematic for a lot of reasons. First, a customer may actually listen to the hogwash written in the findings report. You’ve got to check out the post to see some of the beauties in there. It also makes it hard for reputable firms, who won’t start dropping the price when a customer challenges their quals. But we haven’t repealed the laws of economics, so as long as there is a gap in the number of folks to do the job, snake oil salespeople will be a reality in the business. – MR
Posted at Wednesday 27th April 2016 6:45 pm
(0) Comments •
By Mike Rothman
As we mentioned in our last post, after you figure out what risk means to your organization, and determine the best way to quantify and rank your vendors in terms of that concept of risk, you’ll need to revisit your risk assessments, because security in general, and each vendor’s environment specifically, is dynamic and constantly changing. We also need to address how to deal with vendor issues (breaches and otherwise), both within your organization, and potentially with customers as well.
When keeping tabs on your vendors you need to decide how often to update your assessments of their security posture. In a perfect world you’d like a continuous view of each vendor’s environment, to help you understand your risk at all times. Of course continuous monitoring costs. So part of defining a V(IT)RM program is figuring out the frequency of assessment.
We believe vendors should not all be treated alike. The vendors in your critical risk tier (described in our last post) should be assessed as often as possible. Hopefully you’ll have a way (most likely through third-party services) of continually monitoring their Internet footprint, and alerting you when something changes adversely. We need to caveat that with a warning about real-time alerts. If you are not staffed to deal with real-time alerts, then getting them faster doesn’t help. In other words, if it takes you 3 days to work through your alert queue, getting an alert within an hour cannot reduce your risk much.
Vendors in less risky tiers can be assessed less frequently. An annual self-assessment and a quarterly scan might be enough for them. Again, this depends on your ability to deal with issues and verify answers. If you aren’t going to look at the results, forcing a vendor to update their self-assessment quarterly is just mean, so be honest with yourself when determining the frequency for assessments.
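Operationalizing assessment frequency can be as simple as a lookup table keyed by risk tier. The tier names and intervals below are invented for illustration; yours should fall out of the risk definitions your program establishes:

```python
# Hypothetical tiers and cadences -- every organization defines its own.
ASSESSMENT_PLAN = {
    "critical": {"monitoring": "continuous",   "scan_days": 7},
    "high":     {"monitoring": "alert-driven", "scan_days": 30},
    "standard": {"monitoring": "none",         "scan_days": 90},
}

def reassessment_due(tier, last_scan_day, today):
    """Return True if a vendor in this tier is due for reassessment.
    Days are plain integers here to keep the sketch self-contained."""
    plan = ASSESSMENT_PLAN.get(tier, ASSESSMENT_PLAN["standard"])
    return (today - last_scan_day) >= plan["scan_days"]
```

The point of encoding it is honesty: if the table says quarterly, nobody is quietly demanding quarterly self-assessments they will never read.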
With assessment frequency determined by risk tier, what next? You’ll find adverse changes to the security posture of some vendors. The next step in the V(IT)RM program is to figure out how to deal with these issues.
You got an alert that there is an issue with a vendor, and you need to take action. But what actions can you take, considering the risk posed by the issue and the contractual agreement already in place? We cannot overstate the importance of defining acceptable actions contractually as part of your vendor onboarding process. A critical aspect of setting up and starting your program is ensuring your contracts with vendors support your desired actions when an issue arises.
So what can you do? This list is pretty consistent with most other security processes:
- Alert: At minimum you’ll want a line of communication open with the vendor to tell them you found an issue. This is no different than an escalation during an incident response. You’ll need to assemble the information you found, and package it up for the vendor to give them as much information as practical. But you need to balance how much time you’re willing to spend helping the vendor against everything else on your to do list.
- Quarantine: As an interim measure, until you can figure out what happened and your best course of action, you could quarantine the vendor. That could mean a lot of things. You might segment their traffic from the rest of your network. Or scrutinize each transaction coming from them. Or analyze all egress traffic to ensure no intellectual property is leaking. The point is that you’ll need time to figure out the best course of action, and putting the vendor in a proverbial penalty box can buy you that time. This is also contingent on being able to put a boundary around a specific vendor or service provider, which may not be possible, depending on what services they provide.
- Cut off: There is also the kill switch, which removes vendor access from your systems and likely ends the business relationship. This is a draconian action, but sometimes a vendor presents so much risk, and/or refuses to make the changes you require, that you may not have a choice. As mentioned above, you’ll need to make sure your contract supports this action. Unless you enjoy protracted litigation.
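The escalation ladder above can be sketched as a simple decision function. The severity levels and the contract flag are hypothetical placeholders for whatever criteria your rules of engagement actually define:

```python
def vendor_action(severity, contract_allows_cutoff):
    """Map an issue's severity to one of the three actions above.
    Cutting a vendor off only works if the contract supports it;
    otherwise even a critical issue drops back to quarantine."""
    if severity == "critical" and contract_allows_cutoff:
        return "cut_off"
    if severity in ("critical", "high"):
        return "quarantine"   # buy time while you investigate
    return "alert"            # open a line of communication
```

Writing the criteria down like this, and getting sign-off on them, is what keeps the quarantine decision from becoming an argument with a business owner mid-incident.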
The latter two options impact the flow of business between your organization and the vendor, so you’ll need an internal process to determine if and when to quarantine and/or cut off a vendor. The escalation and action plan, including the rules of engagement and the criteria for suspending or ending a business relationship due to IT risk, needs to be defined ahead of time. Defined escalations ensure the internal stakeholders are in the loop as you consider flipping the kill switch.
A good rule of thumb is that you don’t want to surprise anyone when a vendor goes into quarantine or is cut off from your systems. If the business decision is made to keep the vendor active in your systems (a decision made well above your pay grade), at least you’ll have documentation that the risk was accepted by the business owner.
Once the action plan is defined, documented, and agreed upon, you’ll want to build a communication plan. That includes defining when you’ll notify the vendor and when you’ll communicate the issue internally. As part of the vendor onboarding process you need to define points of contact with the vendor. Do they have a security team you should interface with? Is it their business operations group? You need to know before you run into an issue.
You’ll also want to make sure to have an internal discussion about how much you will support the vendor as they work through any issues you find. If the vendor has an immature security team and/or program, you can easily end up doing a lot of work for them. And it’s not like you have a bunch of time to do someone else’s work, right?
Of course business owners may be unsympathetic to your plight when their key vendor is cut off. That’s why organizational buy-in for criteria for quarantining or cutting a vendor off is critical. The last thing you as a security professional want is to be in a firefight with a business leader over a key vendor. Establish your criteria, and manage to them. If you are overruled and the decision is to make an exception, you can’t do much about that. But at least you will be on record that the decision goes against the policies established within your vendor risk management program.
If you have enough vendors you will run into a situation where your vendor suffers a public breach. You will need to factor that specifically into your program, because you may have a responsibility to disclose the third-party breach to your customers as well. First things first: the vendor breach shouldn’t be a surprise to you. A vendor should proactively call their customers when they are breached and customers are at risk. But this is the real world, so we cannot afford to count on them acting correctly. What then?
This comes back to your organization’s Incident Response playbook. You have that documented, right? As described in our Incident Response Fundamentals research, you need to size up the issue, build a team, assess the damage, and then move to contain it. Of course this is a bit different for vendors, because there is a lot you don’t know about their systems. And depending on the vendor’s sophistication, they might not know either.
So (as always) internal communication and keeping senior management apprised of the situation are critical. You need to stay in close contact with the vendor, constantly assess your level of exposure, and decide if and when you need to disclose – to your board, audit committee, and possibly customers.
Also, as described in our IR Fundamentals research, be sure to work through a post-mortem with the vendor to confirm they learned from the experience. If you aren’t satisfied that it won’t happen again, perhaps you need to escalate to the business managers on your side, to reevaluate the relationship in light of the additional risk in doing business with them. Additionally, use this as an opportunity to refine your own process for the next time a vendor gets popped.
Posted at Monday 25th April 2016 2:34 pm
(0) Comments •
By Mike Rothman
As we discussed in the first post in this series, whether it’s thanks to increasingly tight business processes/operations with vendors and trading partners, or to regulation (especially in finance), you can no longer ignore vendor risk management. So we delved into the structure and mapped out a few key aspects of a VRM program. Of course we are focused on the IT aspects of vendor management, which should be a significant component of a broader risk management approach for your environment.
But that begs the question of how you can actually evaluate the risks of a vendor. What should you be worried about, and how can you gather enough information to make an objective judgement of the risk posed by every vendor? So that’s what we’ll explore in this post.
Risk in the Eye of the Beholder
The first aspect of evaluating vendor risk is actually defining what that risk means to your organization. Yes, that seems self-evident, but you’d be surprised how many organizations don’t document or get agreement on what presents vendor risk, and then wonder why their risk management programs never get anywhere. Sigh.
All the same, as mentioned above, vendor (IT) risk is a component of a larger enterprise risk management program. So first establish the risks of working with vendors. Those risks can be broken up into a variety of buckets, including:
- Financial: This is about the viability of your vendors. Obviously this isn’t something you can control from an IT perspective, but if a key vendor goes belly up, that’s a bad day for your organization. So this needs to be factored in at the enterprise level, as well as considered from an IT perspective – especially as cloud services and SaaS proliferate. If your Database as a Service vendor (or any key service provider) goes away, for whatever reason, that presents risk to your organization.
- Operational: You contract with vendors to do something for your organization. What is the risk if they cannot meet those commitments? Or if they violate service level agreements? Again, this is enterprise-level risk, but it also peeks down into the IT world. Do you pack up shop and go somewhere else if your vendor’s service is down for a day? Are your applications and/or infrastructure portable enough to even do that?
- Security: As security professionals this is our happy place. Or unhappy place, depending on how you feel about the challenges of securing much of anything nowadays. This gets to the risk of a vendor being hacked and losing your key data, impacting availability of your services, and/or allowing an adversary to use them as a stepping stone into your networks and systems.
Within those buckets, there are probably a hundred different aspects that present risk to your organization. After defining those buckets of risk, you need to dig into the next level and figure out not just what presents risk, but also how to evaluate and quantify that risk. What data do you need to evaluate the financial viability of a vendor? How can you assess the operational competency of vendors? And finally, what can you do to stay on top of the security risk presented by vendors? We aren’t going to tackle financial or operational risk categories, but we’ll dig into the IT security aspects below.
The first hoop most vendors have to jump through is self-assessment. As a vendor to a number of larger organizations, we are very familiar with the huge Excel spreadsheet or web app built to assess our security controls. Most of the questions revolve around your organization’s policies, controls, response, and remediation capabilities.
The path of least resistance for this self-assessment is usually a list of standard controls. Many organizations start with ISO 27002, COBIT, and PCI-DSS. Understand that relevance is key here. For example, if a vendor is only providing your organization with nuts and bolts, their email doesn’t present a very significant risk. So you likely want a separate self-assessment tool for each risk category, as we’ll discuss below.
It’s pretty easy to lie on a spreadsheet or web application. And vendors do exactly that. But you don’t have the resources to check everything, so you need to apply a measure of “trust, but verify” here. Just remember that it’s resource-intensive to evaluate every answer, so focus on what’s important, based on the risk definitions above.
Just a few years ago, if you wanted to assess the security risk of a vendor, you needed to either have an on-site visit or pay for a penetration test to really see what an attacker could do to partners. That required a lot of negotiation and coordination with the vendor, which meant it could only be used for your most critical vendors. And half the time they’d tell you to go pound sand, pointing to the extensive self-assessment you forced them to fill out.
But now, with the introduction of external threat intelligence services, and techniques you can implement yourself, you can get a sense of how much of a security mess your vendors actually are. Here are a few types of relevant data sources:
Botnets: Botnets are public by definition, because they use compromised devices to communicate with each other. So if a botnet is penetrated you can see who is connecting to it at the time, and get a pretty good idea of which organizations have compromised devices. That’s exactly how a number of services tell you that certain networks are compromised without ever looking at the networks in question.
Spam: If you have a network that is blasting out a bunch of spam, that indicates an issue. It’s straightforward to set up a number of dummy email accounts to gather spam, and see which networks are used to blast millions of messages a day. If a vendor owns one of those networks, that’s a disheartening indication of their security prowess.
Stolen credentials: There are a bunch of forums where stolen credentials are traded, and if a specific vendor shows up with tons of their accounts and passwords for sale, that means their security probably leaves a bit to be desired.
Malware distribution/infected hosts: Another indication of security failure is Internet-facing devices which are compromised and then used to either host phishing sites or distribute malware, or both. If a vendor’s Internet site is infected and distributing malware, they likely have no idea what they are doing.
Public Breaches: We’ll discuss this later, but if your vendor is a public company or deals with consumers, they have to disclose breaches to their customers. Although you’ve likely gotten kind of numb to yet another breach notification, if it mentions a key vendor, that’s a concern. We’ll discuss what to do when a vendor is breached later in this series.
Security Best Practices: There are also other tells that a vendor knows a bit about security. Do they encrypt all traffic to/from their public sites? Do they authenticate their email with technologies like SPF or DKIM? Do they use secure DNS requests? To be clear, these aren’t conclusive indicators, but they can certainly give you a clue to how serious a vendor is about security.
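The signal types above can be rolled up into a rough composite risk score per vendor. Here is a minimal sketch of that idea; the signal names, weights, and cap are entirely illustrative placeholders, not values from any real threat intelligence feed.

```python
# Toy vendor risk scorer combining external threat-intel signals.
# Signal names and weights are made-up examples, not from a real feed.
SIGNAL_WEIGHTS = {
    "botnet_checkins": 5.0,
    "spam_volume": 2.0,
    "stolen_credentials": 4.0,
    "malware_hosts": 5.0,
    "public_breaches": 3.0,
}

def risk_score(signals: dict) -> float:
    """Weighted sum of observed signal counts, capped at 100."""
    raw = sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
              for name, count in signals.items())
    return min(raw, 100.0)

acme = {"botnet_checkins": 3, "spam_volume": 10, "public_breaches": 1}
print(risk_score(acme))  # 3*5 + 10*2 + 1*3 = 38.0
```

Whether you build something like this yourself or buy a score from a provider, the key is that the weights reflect your own risk definitions rather than a vendor’s opaque formula.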
So how do you gather all of this information? You can certainly do it yourself. Set up a bunch of honeypots and dedicate some internal resources to mining through the data. If you have tens of thousands of vendors and are heavily regulated, you might do exactly this. Otherwise you’ll likely rely on an external information provider to perform this analysis for you.
We covered some aspects of these services in our Ecosystem Risk Management Paper, but we’ll quickly summarize here. You need to figure out whether you want the provider to deliver a score and ranking of your vendors, or the raw data on which vendors have issues (whether with botnets, malware distribution, etc.) so you can perform your own analysis and draw your own conclusions.
To make this kind of program feasible, without requiring another 25 bodies, let’s discuss risk tiering. Larger organizations may have thousands of vendors. It’s hard to consistently perform deep risk analysis of every vendor you do business with. But you also cannot afford to give only a cursory look to the key vendors which present significant risk. So you group vendors into separate risk tiers.
We’re simple folks, so we find any more than 3 or 4 tiers unwieldy. One of your first actions, after defining your IT security risk, is to nail down and build consensus on how to tier vendors by risk. Then your analyses and assessments will be based on the risk tier, and not some arbitrary decision on how deeply to look at each vendor. You could use tiers such as: critical, important, and basic. You could call them “unimportant” vendors but that might damage their self-esteem. The names don’t matter – you just need a set of tiers to group them into.
Critical vendors will get the full boat. A self-assessment, a means to externally evaluate their security posture, and possibly a site visit. You’ll scrutinize their self-assessments and have alerts triggered when something changes with them. We’ll go into your options for dealing with vendor risk in the next post, but for now suffice it to say you’ll be all over your critical vendors, to make sure they are secure and have a plan to address any deficiencies.
Important vendors may warrant a cursory look at the self-assessment and the external evaluation. The security bar might need to be lower for these folks, because they present less risk to your organization. Basic vendors send in their self-assessment, and maybe you perform a simple vulnerability scan on their external web properties, just to check some box on an auditor’s checklist. That’s about all you’ll have time for with these folks.
Could you have more risk tiers? Absolutely. But the amount of work increases exponentially with each additional tier. That’s why we favor only using a handful, knowing that from a risk management standpoint the best bang for your buck will be from focusing on your critical vendors.
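The tiering logic itself can be simple. Here is a minimal sketch, assuming a composite risk score plus one qualitative factor (whether the vendor touches sensitive data); the thresholds are arbitrary placeholders that would come from your own risk definitions.

```python
# Illustrative tier assignment. The score thresholds and the
# data_access factor are example inputs, not a prescribed model.
def assign_tier(score: float, data_access: bool) -> str:
    """Map a vendor's risk score and data access to a tier name."""
    if data_access and score >= 50:
        return "critical"
    if data_access or score >= 25:
        return "important"
    return "basic"

print(assign_tier(80, True))   # critical
print(assign_tier(10, True))   # important: touches data, low score
print(assign_tier(5, False))   # basic
```

The point is not the specific rules, but that tier assignment is documented, consistent, and repeatable rather than an arbitrary per-vendor judgment call.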
Tracking over Time
Obviously security is highly dynamic, so what is secure today might not be tomorrow. Likewise, what was a mess a month ago may not be so bad right now. Yet most vendor risk assessments provide a single point-in-time view, representing what things looked like at that moment. Such an assessment has limited value, because every organization can have the proverbial bad day, and inevitably some data sources provide false positives.
You want to evaluate each partner over time, and track their progress. Have they shown fewer infected hosts over time? How long does it take them to remediate devices participating in botnets? Has a vendor that traditionally did security well suddenly started blasting spam and joining a bunch of botnets? Part of defining your vendor (IT) risk management program is figuring out which of the quantitative risk metrics most closely represent real risk to your organization, and need to be tracked and managed over time.
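A simple way to operationalize this is to compare a vendor’s latest observation against their own trailing baseline. This is a toy sketch; the 2x threshold is a made-up example, and a real program would tune it per metric.

```python
from statistics import mean

# Toy trend check: flag when this period's metric (e.g. infected-host
# count) is well above the vendor's trailing average. The factor of
# 2.0 is an illustrative default, not a recommended threshold.
def trending_worse(history, latest, factor=2.0):
    """True if the latest observation exceeds factor * trailing mean."""
    baseline = mean(history) if history else 0
    return baseline > 0 and latest > factor * baseline

print(trending_worse([2, 3, 2, 1], 9))  # mean=2, 9 > 4 -> True
print(trending_worse([2, 3, 2, 1], 3))  # 3 <= 4 -> False
```

Comparing a vendor to their own history also absorbs the occasional false positive or “bad day” that makes point-in-time assessments misleading.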
Alerting is also a key aspect of executing on the program. If a critical vendor shows up on a botnet, do you drop everything and address it with the partner? That question provides a nice segue to our next post, which will discuss ongoing monitoring and communication for your vendor (IT) risk management program.
Posted at Friday 22nd April 2016 5:46 pm
(1) Comments •
By Adrian Lane
Starting with the 2008 RSA conference, Rich and Chris Hoff presented each year on the then-current state of cloud services, and predicted where they felt cloud computing was going. This year Mike helped Rich conclude the series with some new predictions, but more importantly they went back to assess the accuracy of previous prognostications. My takeaway is that their predictions for what cloud services would do, and the value they would provide, were pretty well spot on. But in most cases, when a specific tool or product was identified as being critical, they totally missed the mark. Wildly.
Trying to follow cloud services – Azure, AWS, and GCP, to name a few – is like running toward a rapidly-expanding explosion blast radius. Capabilities grow so fast in depth and breadth that you cannot keep pace. Part of our latest change to the Friday Summary is a weekly tools discussion to help you with the best of what we see. And ask IT folks or any development team: tools are a critical part of getting the job done. Adoption rates of Docker show how critical tools are to productivity. Keep in mind that we are not making predictions here – we are keenly aware that we don’t know what tools will be used a year from now, much less three. But we do know many old-school platforms don’t work in more agile situations. That’s why it’s essential to share some of the cool stuff we see, so you are aware of what’s available, and can be more productive.
You can subscribe to only the Friday Summary.
Top Posts for the Week
Tool of the Week
This week’s tool is DynamoDB, a NoSQL database service native to Amazon AWS. A couple years ago, when we were architecting our own services on Amazon, we began by comparing RDS and PostgreSQL, and even considered MySQL. After some initial prototyping work we realized that a relational – or quasi-relational, for some MySQL variants – platform really did not offer us any advantages, but came with some limitations, most notably around the use of JSON data elements and the need to store very small records very quickly. We settled on DynamoDB. Call it confirmation bias, but the more we used it the more we appreciated its capabilities, especially around security.
While a full discussion of DynamoDB’s security capabilities – much less a full platform review – is way beyond the scope of this post, the following are some security features we found attractive:
- Query Filters: The ability to use filter expressions to simulate masking and label-based controls over what data is returned to the user. The Dynamo query can be consistent for all users, but filter the returned values after it runs. Filters can work by comparing elements returned from the query, but there is nothing stopping you from filtering on other values associated with the user running the query. And as with relational platforms, you can build your own functions to modify data or behavior depending upon the results.
- Fine-grained Authorization: Access policies can be used to restrict access to specific functions, and limit the impact of those functions to specific tables by user. As with query filters, you can limit results a specific user sees, or take into account things like geolocation to dynamically alter which data a user can access, using a set of access policies defined outside the application.
- IAM: The AWS platform offers a solid set of identity and access management capabilities, and group/role based authorization as you’d expect, which all integrates with Dynamo. But it also offers temporary credentials for delegation of roles. They also offer hooks to leverage Google and Facebook identities to control mobile access to the database. Again, we can choose to punt on some IAM responsibilities for speed of deployment, or we can choose to ratchet down access, depending on how the user authenticated.
- JSON Support: You can both store and query JSON documents in DynamoDB. Programmers reading this will understand why this is important, as it enables you to store everything from data to code snippets, in a semi-structured way. For security or DevOps folks, consider storing different versions of initialization and configuration scripts for different AMIs, or a list of defining characteristics (e.g., known email addresses, MAC addresses, IP addresses, mobile device IDs, corporate and personal accounts, etc.) for specific users.
- Logging: As with most NoSQL platforms, logging is integrated. You can set what to log conditionally, and even alert based on log conditions. We are just now prototyping log parsing with Lambda functions, streaming the results to S3 for cheap storage. This should be an easy way to enrich log data as well.
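The query-filter idea above – run one consistent query for everyone, then mask or drop values based on who is asking – can be illustrated with a toy example. To be clear, this is plain Python simulating the pattern, not the DynamoDB API itself; the records and field names are hypothetical.

```python
# Toy illustration of the filter-after-query pattern (NOT DynamoDB's
# actual API): one query serves all users, then fields the caller
# doesn't own are masked before the results are returned.
RECORDS = [
    {"id": 1, "owner": "alice", "ssn": "123-45-6789", "note": "ok"},
    {"id": 2, "owner": "bob",   "ssn": "987-65-4321", "note": "ok"},
]

def filtered_query(user):
    """Return all records, masking sensitive fields of other owners."""
    results = []
    for rec in RECORDS:
        rec = dict(rec)  # copy so the source data is untouched
        if rec["owner"] != user:
            rec["ssn"] = "***-**-****"
        results.append(rec)
    return results

print(filtered_query("alice")[1]["ssn"])  # masked: ***-**-****
```

In DynamoDB the same effect comes from filter expressions evaluated server-side after the query, keyed off attributes of the caller rather than application code.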
DynamoDB has security feature parity with relational databases. As someone who has used relational platforms for the better part of 20 years, I can say that most of the features I want are available in one form or another on the major relational platforms, or can be simulated. The real difference is the ease and speed at which I can leverage these functions.
Securosis Blog Posts this Week
Other Securosis News and Quotes
Training and Events
- We are running two classes at Black Hat USA:
Posted at Friday 22nd April 2016 5:30 am
(0) Comments •
Last Friday I spent some time in a discussion with senior members of Apple’s engineering and security teams. I knew most of the technical content but they really clarified Apple’s security approach, much of which they have never explicitly stated, even on background. Most of that is fodder for my next post, but I wanted to focus on one particular technical feature I have never seen clearly documented before, which both highlights Apple’s approach to security, and shows that iMessage is more secure than I thought.
It turns out you can’t add devices to an iCloud account without triggering an alert because that analysis happens on your device, and doesn’t rely (totally) on a push notification from the server. Apple put the security logic in each device, even though the system still needs a central authority. Basically, they designed the system to not trust them.
iMessage is one of the more highly-rated secure messaging systems available to consumers, at least according to the Electronic Frontier Foundation. That doesn’t mean it’s perfect or without flaws, but it is an extremely secure system, especially when you consider that its security is basically invisible to end users (who just use it like any other unencrypted text messaging system) and in active use on something like a billion devices.
I won’t dig into the deep details of iMessage (which you can read about in Apple’s iOS Security Guide), and I highly recommend you look at a recent research paper by Matthew Green and associates at Johns Hopkins University which exposed some design flaws.
Here’s a simplified overview of how iMessage security works:
- Each device tied to your iCloud account generates its own public/private key pair, and sends the public key to an Apple directory server. The private key never leaves the device, and is protected by the device’s Data Protection encryption scheme (the one getting all the attention lately).
- When you send an iMessage, your device checks Apple’s directory server for the public keys of all the recipients (across all their devices) based on their Apple ID (iCloud user ID) and phone number.
- Your phone encrypts a copy of the message to each recipient device, using its public key. I currently have five or six devices tied to my iCloud account, which means if you send me a message, your phone actually creates five or six copies, each encrypted with the public key for one device.
- For you non-security readers, a public/private keypair means that if you encrypt something with the public key, it can only be decrypted with the private key (and vice-versa). I never share my private key, so I can make my public key… very public. Then people can encrypt things which only I can read using my public key, knowing nobody else has my private keys.
- Apple’s Push Notification Service (APN) then sends each message to its destination device.
- If you have multiple devices, you also encrypt and send copies to all your own devices, so each shows what you sent in the thread.
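The per-device fan-out described above can be sketched in a few lines. This is a toy model only: the directory contents are invented, and the “encryption” is a placeholder stub standing in for the real per-device asymmetric crypto.

```python
# Toy model of iMessage's per-device fan-out. The directory entries
# are hypothetical and encrypt() is a placeholder, NOT real crypto.
directory = {  # account -> public keys, one per registered device
    "rich@example.com": ["pub-iphone", "pub-ipad", "pub-mac"],
    "mike@example.com": ["pub-iphone"],
}

def encrypt(pubkey, plaintext):
    return f"enc[{pubkey}]({plaintext})"  # stand-in for asymmetric crypto

def send_imessage(sender, recipient, text):
    copies = []
    # One encrypted copy per recipient device, plus copies for the
    # sender's own devices so the thread syncs everywhere.
    for account in (recipient, sender):
        for pub in directory[account]:
            copies.append(encrypt(pub, text))
    return copies

msgs = send_imessage("mike@example.com", "rich@example.com", "hi")
print(len(msgs))  # 3 recipient devices + 1 sender device = 4 copies
```

The fan-out is why private keys never need to leave any device: every device gets its own independently encrypted copy.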
This is a simplification but it means:
- Every message is encrypted from end to end.
- Messages are encrypted using keys tied to your devices, which cannot be removed (okay, there is probably a way, especially on Macs, but not easily).
- Messages are encrypted multiple times, for each destination device belonging to the recipients (and sender), so private keys are never shared between devices.
There is actually a lot more going on, with multiple encryption and signing operations, but that’s the core. According to that Johns Hopkins paper there are exploitable weaknesses in the system (the known ones are patched), but nothing trivial, and Apple continues to harden things. Keep in mind that Apple focuses on protecting us from criminals rather than governments (despite current events). It’s just that at some point those two priorities always converge due to the nature of security.
It turns out that one obvious weakness I have seen mentioned in some blog posts and presentations isn’t actually a weakness at all, thanks to a design decision.
iMessage is a centralized system with a central directory server. If someone could compromise that server, they could add “phantom devices” to tap conversations (or completely reroute them to a new destination). To limit this Apple sends you a notification every time a device is added to your iCloud account.
I always thought Apple’s server detected a new entry and then pushed out a notification, which would mean that if they were deeply compromised (okay, forced by a government) to alter their system, the notification could be faked, but that isn’t how it works. Your device checks its own registry of keys, and pops up an alert if it sees a new one tied to your account.
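The device-side check amounts to a simple set difference: each device keeps its own record of known keys and compares it against what the directory currently lists. A minimal sketch of that logic, with invented key names:

```python
# Sketch of the device-side detection described above: the device,
# not the server, decides when to alert. Key names are hypothetical.
def new_device_keys(known_keys, directory_keys):
    """Return keys the directory lists that this device has never seen."""
    return sorted(set(directory_keys) - set(known_keys))

known = {"pub-iphone", "pub-ipad"}
listed = {"pub-iphone", "pub-ipad", "pub-phantom"}
print(new_device_keys(known, listed))  # ['pub-phantom'] -> show alert
```

Because every device runs this check independently, a compromised directory server cannot silently add a phantom device: it would have to suppress the check on every device the user owns.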
According to the Johns Hopkins paper, they managed to block the push notifications on a local network which prevented checking the directory and creating the alert. That’s easy to fix, and I expect a fix in a future update (but I have no confirmation).
Once in place that will make it impossible to place a ‘tap’ using a phantom device without at least someone in the conversation receiving an alert. The way the current system works, you also cannot add a phantom recipient because your own devices keep checking for new recipients on your account.
Both those could change if Apple were, say, forced to change their fundamental architecture and code on both the server and device sides. This isn’t something criminals could do, and under current law (CALEA) the US government cannot force Apple to make this kind of change because it involves fundamental changes to the operation of the system.
That is a design decision I like. Apple could have easily decided to push the notifications from the server, and used that as the root authority for both keys and registered devices, but instead they chose to have the devices themselves detect new devices based on new key registrations (which is why the alerts pop up on everything you own when you add or re-add a device). This balances the need for a central authority (to keep the system usable) against security and privacy by putting the logic in the hardware in your pocket (or desk, or tote bag, or… whatever).
I believe FaceTime uses a similar mechanism. iCloud Keychain and Keychain Backup use a different but similar mechanism that relies as much as possible on your device. The rest of iCloud is far more open, but I also expect that to change over time.
Overall it’s a solid balance of convenience and security. Especially when you consider there are a billion Apple devices out there. iMessage doesn’t eliminate the need for true zero-knowledge messaging systems, but it is extremely secure, especially when you consider that it’s basically a transparent replacement for text messaging.
Posted at Tuesday 19th April 2016 7:37 pm
(3) Comments •
Mike, Adrian, and I are just back from a big planning session for what we are calling “Securosis 2.0”. Everything is lining up nicely, and now we mostly just need to get the website updated. We are fully gutting the current design and architecture, and moving everything into AWS. The prototyping is complete and next week I get to build out the deployment pipeline, because we are going with a completely immutable design.
One nice twist is that the public side is all read-only, while we have a totally different infrastructure for the admin side. Both share a common database (MariaDB on RDS) and file store (S3). We are estimating about a 10X cost savings compared to our current high-security hosting. As we get closer I’ll start sharing more implementation tips based on our experience. This is quite different from our Trinity platform, which is completely bespoke, whereas in this case we have to work with an existing content management system and wrangle it into a cloud native deployment.
If you want to subscribe directly to the Friday Summary only list, just click here.
Top Posts for the Week
Tool of the Week
Last week we set the stage with Jenkins and I hinted that this week we would start on some security-specific tools. It’s time to talk about Gauntlt, one of the best ways to integrate testing into your deployment pipeline. It is a must-have addition to any continuous deployment/delivery process.
Gauntlt allows you to hook your security tools into your pipeline for automated testing. For example you can define a simple test to find all open ports using nmap, then match those ports to the approved list for that particular application component/server. If a test fails you can fail the build and send the details back to your issue tracker for the relevant developer or admin to fix. Attacks (tests) are written in an easy-to-parse format.
It’s an extremely powerful way to integrate automated security testing into the development and deployment process, using the same tools and hooks as development and operations themselves.
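The port-allowlist check described above is simple enough to sketch outside Gauntlt. This is a hedged illustration, assuming nmap’s grepable (`-oG`) output format; the approved ports and the sample scan line are placeholders.

```python
import re

# Sketch of the allowlist check: parse open ports out of an nmap -oG
# "Ports:" line and flag anything not approved for this component.
# The approved set and sample line are illustrative only.
APPROVED = {22, 443}

def open_ports(grepable_line):
    """Extract open port numbers from an nmap -oG output line."""
    return {int(m.group(1))
            for m in re.finditer(r"(\d+)/open/", grepable_line)}

line = "Host: 10.0.0.5 () Ports: 22/open/tcp//ssh///, 8080/open/tcp//http///"
unexpected = open_ports(line) - APPROVED
print(sorted(unexpected))  # [8080] -> fail the build, file a ticket
```

In a real pipeline the Gauntlt attack file wraps this kind of check declaratively, and a non-empty result exits non-zero so the build fails.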
Securosis Blog Posts this Week
We were all out this week for our planning session, so no posts.
Other Securosis News and Quotes
Training and Events
- We are running two classes at Black Hat USA:
Posted at Friday 15th April 2016 5:55 am
(0) Comments •