By Mike Rothman
A few weeks ago I spoke about dealing with the inevitable changes of life and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or decisions could result in very major consequences, like being ostracized from your community, or even death in some areas of the world.
In my culture in the US, it’s expected that a majority of people will meander through their lives with their 2.2 kids, their dog, and their white picket fence, which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.
I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60’s, a US Air Force helicopter pilot who retired and has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day was very interesting. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests and applied and got into a few programs.
Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path.
Now that is courage. But he said something to me as we were about to unload from the lift for the last run of the day: “Mike, this isn’t work for me. I happen to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring, because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes.
Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.
Editors note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. -rich
Photo credit: “Courage” from bfick
It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.
Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
SIEM Kung Fu
Building a Threat Intelligence Program
Recently Published Papers
Incite 4 U
Evolution visually: Wade Baker posted a really awesome piece tracking the number of sessions and titles at the RSA Conference over the past 25 years. The growth in sessions is astounding (25% CAGR), up to almost 500 in 2015. Even more interesting is how the titles have changed. It’s the RSA Conference, so it’s not surprising that crypto would be prominent the first 10 years. Over the last 5? Cloud and cyber. Not surprising, but still very interesting facts. RSAC is no longer just a trade show. It’s a whole thing, and I’m looking forward to seeing the next iteration in a few weeks. And come swing by the DRB Thursday morning and say hello. I’m pretty sure the title of the Disaster Recovery Breakfast won’t change. – MR
Embrace and Extend: The SSL/TLS cert market is a multi-billion dollar market – with slow and steady growth in the sale of certificates for websites and devices over the last decade. For the most part, certificate services are undifferentiated. Mid-to-large enterprises often manage thousands of them, which expire on a regular basis, making subscription revenue a compelling story for the handful of firms that provide them. But last week’s announcement that Amazon AWS will provide free certificates must have sent shivers through the market, including the security providers who manage certs or monitor for expired certificates. AWS will include this in their basic service, as long as you run your site in AWS. I expect Microsoft Azure and Google’s cloud to follow suit in order to maintain feature/pricing parity. Certs may not be the best business to be in, longer-term. – AL
Investing in the future: I don’t normally link to vendor blogs, but this post by Chuck Robbins, Cisco’s CEO, is pretty interesting. He echoes a bunch of things we’ve been talking about, including how the security industry is people-constrained, and we need to address that. He also mentions a bunch of security issues, so maybe security is now highly visible at Cisco. Even better, Chuck announced a $10MM scholarship program to “educate, train and reskill the job force to be the security professionals needed to fill this vast talent shortage”. This is great to see. We need to continue to invest in humans, and maybe this will kick-start some other companies to invest similarly. – MR
Geek Monkey: David Mortman pointed me to a recent post about automated failure testing on Netflix’s Tech Blog. A particularly difficult-to-find bug gave the team pause about how they tested protocols. Embracing both the “find failure faster” mentality and the core Simian Army ideal of reliability testing through injecting chaos, they are looking at intelligent ways to inject small faults within the code execution path. Leveraging a very interesting set of concepts from a tool called Molly (PDF), they inject different results into non-deterministic code paths. That sounds exceedingly geeky, I know, but in simpler terms they are essentially fuzz testing inside code, using intelligently selected values to see how protocols respond under stress. Expect a lot more of this approach in years to come, as we push more code security testing earlier in the process. – AL
Posted at Wednesday 3rd February 2016 12:40 pm
By Adrian Lane
It has been a while since we had an acquisition in the database security space, but today Trustwave announced it acquired Application Security Inc. – commonly called “AppSec” by many who know the company.
About 10 years ago I wrote my first competitive analysis paper during my employment with IPLocks, of our principal competitor: another little-known database security company called Application Security, Inc. Every quarter for four years, I updated those competitive analysis sheets to keep pace with AppSec’s product enhancements and competitive tactics in sales engagements. Little did I know I would continue to examine AppSec’s capabilities on a quarterly basis after having joined Securosis – but rather than solely looking at competitive positioning, I have been gearing my analysis toward how features map to customer inquiries, and tracking customer experiences during proof-of-concept engagements. Of all the products I have tracked, I have been following AppSec the longest. It feels odd to be writing this for a general audience, but this deal is pretty straightforward, and it needed to happen.
Application Security was one of the first database security vendors, and while they were considered a leader in the 2004 timeframe, their products have not been competitive for several years. AppSec still has one of the best database assessment products on the market (AppDetectivePRO), and one of the better – possibly the best – database vulnerability research teams backing it. But Database Activity Monitoring (DAM) is now the key driver in that space, and AppSec’s DAM product (dbProtect) has not kept pace with customer demand in terms of performance, integration, ease-of-use, or out-of-the-box functionality. A “blinders on” focus can be both admirable and necessary for very small start-ups to deliver innovative technologies to markets that don’t understand their new technology or value proposition, but as markets mature vendors must respond to customers and competitors. In AppSec’s early days, very few people understood why database security was important. But while the rest of the industry matured and worked to build enterprise-worthy solutions, AppSec turned a deaf ear to criticism from would-be customers and analysts. Today the platform has reasonable quality, but is not much more than an ‘also-ran’ in a very competitive field.
That said, I think this is a very good purchase for Trustwave. It means several things for Trustwave customers:
- Trustwave has filled a compliance gap in its product portfolio – specifically for PCI. Trustwave is focused on PCI-DSS, and data and database security are central to PCI compliance. Web and network security have been part of their product suite, but database security has not. Keep in mind that DAM and assessment are not specifically prescribed for PCI compliance like WAF is; but the vast majority of customers I speak with use DAM to audit activity, discovery to show what data stores are being used, and assessment to prove that security controls are in place. Trustwave should have acquired this technology a while ago.
- The acquisition fits Trustwave’s model of buying decent technology companies at low prices, then selling a subset of their technology to existing customers where they already know demand exists. That could explain why they waited so long – balancing customer requirements against their ability to negotiate a bargain price. Trustwave knows what their customers need to pass PCI better than anyone else, so they will succeed with this technology in ways AppSec never could.
- This puts Trustwave on a more even footing for customers who care more about security and don’t just need to check a compliance box, and gives Trustwave a partial response to Imperva’s monitoring and WAF capabilities.
- I think Trustwave is correct that AppSec’s platform can help with their managed services offering – Monitoring and Assessment as a Service appeals to smaller enterprises and mid-market firms who don’t want to own or manage database security platforms.
What does this mean for AppSec customers? It is difficult to say – I have not spoken with anyone from Trustwave about this acquisition, and I am unable to judge their commitment to putting engineering effort behind the AppSec products. And I cannot tell whether they intend to keep the research team which has been keeping the assessment component current. Trustwave tweeted during the official announcement that “.@Trustwave will continue to develop and support @AppSecInc products, DbProtect and AppDetectivePRO”, but that could be limited to features compliance buyers demand, without closing the performance and data collection gaps that are problematic for DAM customers. I will blog more on this as I get more information, but expect them to provide what’s required to meet compliance and no more.
And lastly, for those keeping score at home, AppSec is the 7th Database Activity Monitoring acquisition – after Lumigent (BeyondTrust), IPLocks (Fortinet), Guardium (IBM), Secerno (Oracle), Tizor (IBM via Netezza), and Sentrigo (McAfee); leaving Imperva and GreenSQL as the last independent DAM vendors.
Posted at Monday 11th November 2013 9:06 pm
By Adrian Lane
We are pleased to announce the availability of a new research paper, Understanding and Selecting Database Security Platforms. This paper covers most facets of database security today. We started to refresh our original Database Activity Monitoring paper in October 2011, but stopped short when our research showed that platform evolution had stopped converging – products had instead diverged again to embrace independent visions of database security and splintering customer requirements. We decided our original DAM research was becoming obsolete. Use cases have evolved and vendors have added dozens of new capabilities – they have covered the majority of database security requirements, and expanded into other areas.
These changes are so significant that we needed to seriously revisit our use cases and market drivers, and delve into the different ways preventative and detective data security technologies have been bundled with DAM to create far more comprehensive solutions. We have worked hard to fairly represent the different visions of how database security fits within enterprise IT, and to show the different value propositions offered by these variations. These fundamental changes have altered the technical makeup of products so much that we needed new vocabulary to describe these products. The new paper is called “Understanding and Selecting Database Security Platforms” (DSP) to reflect these major product and market changes.
We want to thank our sponsors for the Database Security Platform paper: Application Security Inc, GreenSQL, Imperva, and McAfee. Without sponsors we would not be able to provide our research for free, so we appreciate deeply that several vendors chose to participate in this effort and endorse our research positions.
You can download the DSP paper.
Posted at Wednesday 30th May 2012 3:01 pm
By Adrian Lane
In the original Understanding and Selecting a Database Activity Monitoring Solution paper we discussed a number of Advanced Features for analysis and enforcement that have since largely become part of the standard feature set for DSP products. We covered monitoring, vulnerability assessment, and blocking, as the minimum feature set required for a Data Security Platform, and we find these in just about every product on the market. Today’s post will cover extensions of those core features, focusing on new methods of data analysis and protection, along with several operational capabilities needed for enterprise deployments. A key area where DSP extends DAM is in novel security features to protect databases and extend protection across other applications and data storage repositories.
In other words, these are some of the big differentiating features that affect which products you look at if you want anything beyond the basics, but they aren’t all in wide use.
Analysis and Protection
- Query Whitelisting: Query ‘whitelisting’ is where the DSP platform, working as an in-line reverse proxy for the database, only permits known SQL queries to pass through to the database. This is a form of blocking, as we discussed in the base architecture section, but traditional blocking techniques rely on query parameter and attribute analysis. This technique has two significant advantages. First, detection is based on the structure of the query – matching the format of the WHERE clauses – to determine whether the query matches the approved list. Second is how the list of approved queries is generated. In most cases the DSP platform maps out the entire SQL grammar – in essence a list of every possible supported query – into a binary search tree for very fast comparison. Alternatively, by monitoring application activity in baselining mode, the DSP platform can automatically mark which queries are permitted – of course the user can edit this list as needed. Any query not on the whitelist is logged and discarded – and never reaches the database. With this method of blocking, false positives are very low and the majority of SQL injection attacks are automatically blocked. The downside is that the list of acceptable queries must be updated with each application change – otherwise legitimate requests are blocked.
- Dynamic Data Masking: Masking is a method of altering data so that the original data is obfuscated but the aggregate value is maintained. Essentially we substitute out individual bits of sensitive data and replace them with random values that look like the originals. For example we can substitute a list of customer names in a database with a random selection of names from a phone book. Several DSP platforms provide on-the-fly masking for sensitive data. Others detect and substitute sensitive information prior to insertion. There are several variations, each offering different security and performance benefits. This is different from the dedicated static data masking tools used to develop test and development databases from production systems.
- Application Activity Monitoring: Databases rarely exist in isolation – more often they are extensions of applications, but we tend to look at them as isolated components. Application Activity Monitoring adds the ability to watch application activity – not only the database queries that result from it. This information can be correlated between the application and the database to gain a clear picture of just how data is used at both levels, and to identify anomalies which indicate a security or compliance failure. There are two variations currently available on the market. The first is Web Application Firewalls, which protect applications from SQL injection, scripting, and other attacks on the application and/or database. WAFs are commonly used to monitor application traffic, but can be deployed in-line or out-of-band to block or reset connections, respectively. Some WAFs can integrate with DSPs to correlate activity between the two. The other form is monitoring of application specific events, such as SAP transaction codes. Some of these commands are evaluated by the application, using application logic in the database. In either case inspection of these events is performed in a single location, with alerts on odd behavior.
- File Activity Monitoring: Like DAM, FAM monitors and records all activity within designated file repositories at the user level and alerts on policy violations. Rather than DELETE queries, FAM records file opens, saves, deletions, and copies. For both security and compliance, this means you no longer care whether data is structured or unstructured – you can define a consistent set of policies around data, not just database, usage. You can read more about FAM in Understanding and Selecting a File Activity Monitoring Solution.
- Query Rewrites: Another useful technique for protecting data and databases from malicious queries is query rewriting. Deployed through a reverse database proxy, incoming queries are evaluated for common attributes and query structure. If a query looks suspicious, or violates security policy, it is substituted with a similar authorized query. For example, a query that includes a column of Social Security numbers may have that column omitted from the results by removing that portion of the FROM clause. Queries that include the highly suspect WHERE 1=1 clause may simply return the value 1. Rewriting queries protects application continuity, as the queries are not simply discarded – they return a subset of the requested data, so false positives don’t cause the application to hang or crash.
- Connection-Pooled User Identification: One of the problems with connection pooling – whereby an application uses a single shared database connection for all users – is loss of the ability to track which actions are taken by which users at the database level. Connection pooling is common and essential for application performance, but if all queries originate from the same account, granular security monitoring is difficult. This feature uses a variety of techniques to correlate every query back to an application user for better auditing at the database level.
- Database Discovery: Databases have a habit of popping up all over the place without administrators being aware. Everything from virtual copies of production databases showing up in test environments, to Microsoft Access databases embedded in applications. These databases are commonly not secured to any standard, often have default configurations, and provide targets of opportunity for attackers. Database discovery works by scanning networks looking for databases communicating on standard database ports. Discovery tools may snapshot all current databases or alert admins when new undocumented databases appear. In some cases they can automatically initiate a vulnerability scan.
- Content Discovery: As much as we like to think we know our databases, we don’t always know what’s inside them. DSP solutions offer content discovery features to identify the use of things like Social Security numbers, even if they aren’t located where you expect. Discovery tools crawl through registered databases, looking for content and metadata that match policies, and generate alerts for sensitive content in unapproved locations. For example, you could create a policy to identify credit card numbers in any database and generate a report for PCI compliance. The tools can run on a scheduled basis so you can perform ongoing assessments, rather than combing through everything by hand every time an auditor comes knocking. Most start with a scan of column and table metadata, then follow with an analysis of the first n rows of each table, rather than trying to scan everything.
- Dynamic Content Analysis: Some tools allow you to act on the discovery results. Instead of manually identifying every field with Social Security numbers and building a different protection policy for each location, you create a single policy that alerts every time an administrator runs a SELECT query on any field discovered to contain one or more SSNs. As systems grow and change over time, discovery continually identifies fields containing protected content and automatically applies the policy. We are also seeing DSP tools that monitor the results of live queries for sensitive data. Policies are then freed from being tied to specific fields, and can generate alerts or perform enforcement actions based on the result set. For example, a policy could generate an alert any time a query result contains a credit card number, no matter which columns were referenced in the query.
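The whitelisting approach described above can be sketched in a few lines: reduce each query to its structural skeleton (literals replaced by placeholders), learn the approved set during baselining, then reject anything unfamiliar. This is a minimal illustration of the idea, not how any particular vendor implements it:

```python
import re

def normalize(sql: str) -> str:
    """Reduce a SQL statement to its structure: lowercase, replace
    literals with placeholders, collapse whitespace. Queries differing
    only in parameter values then compare equal."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)           # string literals
    s = re.sub(r"\b\d+(\.\d+)?\b", "?", s)   # numeric literals
    return re.sub(r"\s+", " ", s)

class QueryWhitelist:
    """During baselining, observed queries are approved; afterwards,
    anything off-list would be logged and discarded by the proxy."""
    def __init__(self):
        self.approved = set()
        self.baselining = True

    def observe(self, sql: str) -> None:
        if self.baselining:
            self.approved.add(normalize(sql))

    def permitted(self, sql: str) -> bool:
        return normalize(sql) in self.approved
```

Note how a classic injection (an `OR 1=1` appended to a known query) changes the skeleton and is rejected, while the same query with different parameter values still passes.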
Next we will discuss administration and policy management for DSP.
Posted at Wednesday 4th April 2012 10:30 pm
By Adrian Lane
In our previous post on DSP components we outlined the evolution of Database Activity Monitoring into Database Security Platforms. One of its central aspects is the evolution of event collection mechanisms from native audit, to monitoring network activity, to agent-based activity monitoring. These are all database-specific information sources. The evolution of DAM has been framed by these different methods of data collection. That’s important, because what you can do is highly dependent on the data you can collect. For example, the big reason agents are the dominant collection model is that you need them to monitor administrators – network monitoring can’t do that (and is quite difficult in distributed environments).
The development of DAM into DSP also entails examination of a broader set of application-related events. By augmenting the data collection agents we can examine other applications in addition to databases – even including file activity. This means that it has become possible to monitor SAP and Oracle application events – in real time. It’s possible to monitor user activity in a Microsoft SharePoint environment, regardless of how data is stored. We can even monitor file-based non-relational databases. We can perform OS, application, and database assessments through the same system.
A slight increase in the scope of data collection means much broader application-layer support. Not that you necessarily need it – sometimes you want a narrow database focus, while other times you will need to cast a wider net. We will describe all the options to help you decide which best meets your needs.
Let’s take a look at some of the core data collection methods used by customers today:
Local OS/Protocol Stack Agents: A software ‘agent’ is installed on the database server to capture SQL statements as they are sent to the databases. The events captured are returned to the remote Database Security Platform. Events may optionally be inspected locally by the agent for real-time analysis and response. The agents are either deployed into the host’s network protocol stack or embedded into the operating system, to capture communications to and from the database. They see all external SQL queries sent to the database, including their parameters, as well as query results. Most critically, they should capture administrative activity from the console that does not come through normal network connections. Some agents provide an option to block malicious activity – either by dropping the query rather than transmitting it to the database, or by resetting the suspect user’s database connection.
Most agents embed into the OS in order to gain full session visibility, and so require a system reboot during installation. Early implementations struggled with reliability and platform support problems, causing system hangs, but these issues are now fortunately rare. Current implementations tend to be reliable, with low overhead and good visibility into database activity. Agents are a basic requirement for any DSP solution, as they are a relatively low-impact way of capturing all SQL statements – including those originating from the console and arriving via encrypted network connections.
Performance impact these days is very limited, but you will still want to test before deploying into production.
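As a toy illustration of the local inspect-and-respond option described above, an agent might run each captured statement through a policy check before forwarding it to the collector. The patterns here are hypothetical placeholders, not any vendor's actual rules:

```python
import re

# Hypothetical block rules an administrator might configure.
BLOCK_PATTERNS = [
    r";\s*drop\s+table",   # stacked query dropping a table
    r"\bxp_cmdshell\b",    # SQL Server OS command execution
]

def inspect(sql: str) -> str:
    """Return 'block' to drop the query before it reaches the
    database, or 'forward' to pass it through and log it."""
    lowered = sql.lower()
    for pattern in BLOCK_PATTERNS:
        if re.search(pattern, lowered):
            return "block"
    return "forward"
```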
Network Monitoring: An exceptionally low-impact method of monitoring SQL statements sent to the database. By monitoring the subnet (via network mirror ports or taps) statements intended for a database platform are ‘sniffed’ directly from the network. This method captures the original statement, the parameters, the returned status code, and any data returned as part of the query operation. All collected events are returned to a server for analysis. Network monitoring has the least impact on the database platform and remains popular for monitoring less critical databases, where capturing console activity is not required.
Lately the line between network monitoring capabilities and local agents has blurred. Network monitoring is now commonly deployed via a local agent monitoring network traffic on the database server itself, thereby enabling monitoring of encrypted traffic. Some of these ‘network’ monitors still miss console activity – specifically privileged user activity. On a positive note, installation as a user process does not require a system reboot or cause adverse system-wide side effects if the monitor crashes unexpectedly. Users still need to verify that the monitor is collecting database response codes, and should determine exactly which local events are captured, during the evaluation process.
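A real network monitor decodes each database's wire protocol; as a rough sketch of the idea only, here is a pass that pulls SQL-looking statements out of an already-captured payload, with all protocol decoding omitted:

```python
import re

def extract_statements(payload: bytes) -> list:
    """Best-effort extraction of SQL text from a raw captured payload.
    Real products parse the TNS/TDS/MySQL wire formats instead of
    regex-matching decoded bytes."""
    text = payload.decode("utf-8", errors="ignore")
    pattern = r"\b(?:select|insert|update|delete)\b[^;\x00]*"
    return [m.group(0).strip() for m in re.finditer(pattern, text, re.I)]
```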
Memory Scanning: Memory scanners read the active memory structures of a database engine, monitoring new queries as they are processed. Deployed as an agent on the database platform, the memory scanning agent activates at pre-determined intervals to scan for SQL statements. Most memory scanners immediately analyze queries for policy violations – even blocking malicious queries – before returning results to a central management server. There are numerous advantages to memory scanning, as these tools see every database operation, including all stored procedure execution. Additionally, they do not interfere with database operations.
You’ll need to be careful when selecting a memory scanning product – the quality of the various products varies. Most vendors only support memory scanning on select Oracle platforms – and do not support IBM, Microsoft, or Sybase. Some vendors don’t capture query variables – only the query structure – limiting the usefulness of their data. And some vendors still struggle with performance, occasionally missing queries. But other memory scanners are excellent enterprise-ready options for monitoring events and enforcing policy.
Database Audit Logs: Database Audit Logs are still commonly used to collect database events. Most databases have native auditing features built in; they can be configured to generate an audit trail that includes system events, transactional events, user events, and other data definitions not available from any other sources. The stream of data is typically sent to one or more locations assigned by the database platform, either in a file or within the database itself. Logging can be implemented through an agent, or logs can be queried remotely from the DSP platform using SQL.
Audit logs are preferred by some organizations because they provide a series of database events from the perspective of the database. The audit trail reconciles database rollbacks, errors, and uncommitted statements – producing an accurate representation of changes made. But the downsides are equally serious. Historically, audit performance was horrible. While the database vendors have improved audit performance and capabilities, and DSP vendors provide good advice for tuning audit trails, bias against native auditing persists. And frankly, it’s easy to mess up audit configurations. Additionally, the audit trail is not really intended to collect SELECT statements – viewing data – it is focused on changes to data and the database system. Finally, as the audit trail is stored and managed on the database platform, it competes heavily for database resources – much more than other data collection methods. But given the accuracy of this data, and its ability to capture internal database events not available to network and OS agent options, audit remains a viable – if not essential – event collection option.
One advantage of using a DSP tool in conjunction with native logs is that it is easier to securely monitor administrator activity. Admins can normally disable or modify audit logs, but a DSP tool may mitigate this risk.
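Since many audit trails can be queried over a normal SQL connection, remote collection can be as simple as a scheduled query. The schema below (`audit_trail` and its column names) is hypothetical – every vendor names these differently – and an in-memory SQLite database stands in for the real platform so the sketch runs anywhere:

```python
import sqlite3

def privileged_events(conn, actions=("GRANT", "ALTER", "DROP")):
    """Fetch privileged-change events from a native audit table,
    the way a DSP collector might poll it remotely over SQL."""
    placeholders = ",".join("?" * len(actions))
    cur = conn.execute(
        "SELECT event_time, username, action, object_name "
        "FROM audit_trail "
        f"WHERE action IN ({placeholders}) "
        "ORDER BY event_time DESC",
        actions,
    )
    return cur.fetchall()

# Stand-in audit trail with one routine read and one privileged change.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_trail "
             "(event_time TEXT, username TEXT, action TEXT, object_name TEXT)")
conn.executemany(
    "INSERT INTO audit_trail VALUES (?, ?, ?, ?)",
    [("2012-04-01 10:00", "app_user", "SELECT", "orders"),
     ("2012-04-01 10:05", "dba",      "GRANT",  "orders")])
```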
Discovery and Assessment Sources
Network Scans: Most DSP platforms offer database discovery capabilities, either through passive network monitoring for SQL activity or through active TCP scans of open database ports. Additionally, most customers use remote credentialed scanning of internal database structures for data discovery, user entitlement reporting, and configuration assessment. None of these capabilities are new, but remote scanning with read-only user credentials is the standard data collection method for preventative security controls.
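The active-scan variant of database discovery boils down to probing well-known listener ports. A stripped-down sketch – the port map is a small illustrative sample, and real scanners cover far more ports plus credentialed checks:

```python
import socket

# Common default listener ports -- a small illustrative sample.
DB_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 1433: "SQL Server", 1521: "Oracle"}

def discover_databases(host, ports=None, timeout=0.5):
    """TCP-connect scan of likely database ports on a single host;
    returns (port, product) pairs that accepted a connection."""
    found = []
    for port, product in (ports or DB_PORTS).items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port answered
                found.append((port, product))
    return found
```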
There are many more methods of gathering data and events, but we’re focusing on the most commonly used. If you are interested in more depth on the available options, our blog post on Database Activity Monitoring & Event Collection Options provides much greater detail. For those of you who follow our stuff on a regular basis, there’s not a lot of new information there.
Expanded Collection Sources
A couple of new features broaden the focus of DAM. Here’s what’s new:
File Activity Monitoring: One of the most intriguing recent changes in event monitoring has been the collection of file activity. File Activity Monitoring (FAM) collects all file activity (read, create, edit, delete, etc.) from local file systems and network file shares, analyzes the activity, and – just like DAM – alerts on policy violations. FAM is deployed through a local agent, collecting user actions as they are sent to the operating system. File monitors cross reference requests against Identity and Access Management (e.g., LDAP and Active Directory) to look up user identities. Policies for security and compliance can then be implemented on a group or per-user basis.
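A toy sketch of the collection mechanism, assuming a polling model for simplicity – real FAM agents hook the operating system (e.g., inotify or filter drivers) and resolve the acting user against LDAP/AD, both of which are omitted here:

```python
import os

class FileActivityMonitor:
    """Minimal polling sketch of a FAM collector. Real agents intercept
    calls at the OS layer rather than diffing snapshots, and attach
    user identity from the directory service."""

    def __init__(self, root):
        self.root = root
        self.snapshot = self._scan()

    def _scan(self):
        state = {}
        for dirpath, _, files in os.walk(self.root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    state[path] = os.stat(path).st_mtime
                except FileNotFoundError:
                    pass  # file deleted mid-scan
        return state

    def poll(self):
        """Return (event, path) tuples for activity since the last poll."""
        current, events = self._scan(), []
        for path, mtime in current.items():
            if path not in self.snapshot:
                events.append(("create", path))
            elif mtime != self.snapshot[path]:
                events.append(("edit", path))
        for path in self.snapshot:
            if path not in current:
                events.append(("delete", path))
        self.snapshot = current
        return events
```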
This evolution is important for two reasons. The first is that document and data management systems are moving away from strictly relational databases as the storage engine of choice. Microsoft SharePoint, mentioned above, is a hybrid of file management and relational data storage. FAM provides a means to monitor document usage and alert on policy violations. Some customers need to address compliance and security issues consistently, and don’t want to differentiate based on the idiosyncrasies of underlying storage engines, so FAM event collection offers consistent data usage monitoring.
Another interesting aspect of FAM is that most of the databases used for Big Data are non-relational file-based data stores. Data elements are self-describing and self-indexing files. FAM provides the basic capabilities of file event collection and analysis, and we anticipate the extension of these capabilities to cover non-relational databases. While no DSP vendor offers true NoSQL monitoring today, the necessary capabilities are available in FAM solutions.
Application Monitoring: Databases are used to store application data and persist application state. It’s almost impossible to find a database not serving an application, and equally difficult to find an application that does not use a database. As a result monitoring the database is often considered sufficient to understand application activity. However, most of you in IT know database monitoring is actually inadequate for this purpose. Applications use hundreds of database queries to support generic forms, connect to databases with generic service accounts, and/or use native application code to call embedded stored procedures rather than direct SQL queries. Their activity may be too generic, or inaccessible to traditional Database Activity Monitoring solutions. We now see agents designed and deployed specifically to collect application events, rather than database events. For example SAP transaction codes can be decoded, associated with a specific application user, and then analyzed for policy violations. As with FAM, much of the value comes from better linking of user identity to activities. But extending scope to embrace the application layer directly provides better visibility into application usage and enables more granular policy enforcement.
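As a sketch of the idea – not any vendor's implementation – here is what an application-layer policy check might look like once transaction codes are decoded and tied to an application user. The event format and policy model are assumptions; the transaction codes shown are real SAP codes used only as examples.

```python
# SE16 (table browser) and SU01 (user maintenance) are genuinely
# sensitive SAP transaction codes; the rest of this is illustrative.
SENSITIVE_TCODES = {"SE16": "direct table access", "SU01": "user maintenance"}

def check_app_event(event, allowed_roles=("basis_admin",)):
    """Alert when a sensitive transaction code is run by a user
    whose application role is not on the allow list."""
    tcode = event.get("tcode")
    if tcode in SENSITIVE_TCODES and event.get("role") not in allowed_roles:
        return {"alert": True,
                "reason": f"{event['app_user']} ran {tcode} "
                          f"({SENSITIVE_TCODES[tcode]})"}
    return {"alert": False}
```

The point is that the policy evaluates application-level identity and semantics, which a generic service-account SQL stream cannot provide.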
This post has focused on event collection for monitoring activity. In our next section we will delve into greater detail on how these advancements are put to use: Policy Enforcement.
Posted at Wednesday 7th March 2012 10:28 pm
(0) Comments •
By Adrian Lane
As I stated in the intro, Database Security Platform (DSP, to save us writing time and piss off the anti-acronym crowd) differs from DAM in a couple ways. Let’s jump right in with a definition of DSP, and then highlight the critical differences between DAM and DSP.
Our old definition for Database Activity Monitoring has been modified as follows:
Database Security Platforms, at a minimum, assess database security, capture and record all database activity in real time or near real time (including administrator activity); across multiple database types and platforms; and alert and block on policy violations.
This distinguishes Database Security Platforms from Database Activity Monitoring in four key ways:
- Database security platforms support both relational and non-relational databases.
- All Database Security Platforms include security assessment capabilities.
- Database Security Platforms must have blocking capabilities, although they aren’t always used.
- Database Security Platforms often include additional protection features, such as masking or application security, which aren’t necessarily included in Database Activity Monitors.
We are building a new definition due to the dramatic changes in the market. Almost no tools are limited to merely activity monitoring any more, and we see an incredible array of (different) major features being added to these products. They are truly becoming a platform for multiple database security functions, just as antivirus morphed into Endpoint Protection Platforms by adding everything from whitelisting to intrusion prevention and data loss prevention.
Here is some additional detail:
- The ability to remotely audit all user permissions and configuration settings. Connecting to a remote database with user level credentials, scanning the configuration settings, then comparing captured data against an established baseline. This includes all external initialization files as well as all internal configuration settings, and may include additional vulnerability tests.
- The ability to independently monitor and audit all database activity including administrator activity, transactions, and data (SELECT) requests. For relational platforms this includes DML, DDL, DCL, and sometimes TCL activity. For non-relational systems this includes ownership, indexing, permissions and content changes. In all cases read access is recorded, along with the metadata associated with the action (user identity, time, source IP, application, etc.).
- The ability to store this activity securely outside the database.
- The ability to aggregate and correlate activity from multiple, heterogeneous Database Management Systems (DBMS). These tools work with multiple relational (e.g., Oracle, Microsoft, and IBM) and quasi-relational (ISAM, Teradata, and document management) platforms.
- The ability to enforce separation of duties on database administrators. Auditing activity must include monitoring of DBA activity, and prevent database administrators from tampering with logs and activity records – or at least make it nearly impossible.
- The ability to protect data and databases – both alerting on policy violations and taking preventative measures against database attacks. Tools don’t just record activity – they provide real-time monitoring, analysis, and rule-based response. For example, you can create a rule that masks query results when a remote SELECT command on a credit card column returns more than one row.
- The ability to collect activity and data from multiple sources. DSP collects events from the network, OS layer, internal database structures, memory scanning, and native audit layer support. Users can tailor deployments to their performance and compliance requirements, and collect data from the sources best suited to their needs. DAM tools have traditionally offered event aggregation, but DSP requires correlation capabilities as well.
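The masking rule described in the protection bullet above can be sketched roughly as a post-query result filter. The column name, row structure, and masking format are all assumptions for illustration:

```python
import re

# Mask the middle six digits of a 16-digit card number,
# preserving BIN and last four — a common masking convention.
CARD_RE = re.compile(r"\b(\d{6})\d{6}(\d{4})\b")

def mask_card(value):
    return CARD_RE.sub(r"\1******\2", value)

def apply_masking_policy(rows, column="cc_number", remote=True):
    """Rule sketch: if a remote SELECT on a credit card column returns
    more than one row, mask the values before they reach the client."""
    if not (remote and len(rows) > 1):
        return rows
    return [{**row, column: mask_card(row[column])} for row in rows]
```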
DSP is, in essence, a superset of DAM applied to a broader range of database types and platforms.
Let’s cover the highlights in more detail:
- Databases: It’s no longer only about big relational platforms with highly structured data – non-relational platforms are now in scope too. Unstructured data repositories, document management systems, quasi-relational storage structures, and tagged-index files are all being covered. So the number of query languages being analyzed continues to grow.
- Assessment: “Database Vulnerability Assessment” is offered by nearly every Database Activity Monitoring vendor, but it is seldom sold separately. These assessment scans are similar to general platform assessment scanners but focus on databases – leveraging database credentials to scan internal structures and metadata. The tools have evolved to scan not only for known vulnerabilities and security best practices, but to include a full scan of user accounts and permissions. Assessment is the most basic preventative security measure and a core database protection feature.
- Blocking: Every database security platform provider can alert on suspicious activity, and the majority can block suspect activity. Blocking is a common customer requirement – it is only applied to a very small fraction of databases, but has nonetheless become a must-have feature. Blocking requires the agent or security platform to be deployed ‘inline’ in order to intercept and block incoming requests before they execute.
- Protection: Over and above blocking, we see traditional monitoring products evolving protection capabilities focused more on data and less on database containers. While Web Application Firewalls to protect from SQL injection attacks have been bundled with DAM for some time, we now also see several types of query result filtering.
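A highly simplified sketch of the blocking logic described above – the inline agent or proxy evaluates each statement before it reaches the database. The two rules shown (DDL from non-admin accounts, and a classic tautology injection) stand in for a real rule language and are purely illustrative:

```python
import re

# Toy policy rules — a real DSP rule engine is far richer than two regexes.
DDL_RE = re.compile(r"^\s*(DROP|ALTER|TRUNCATE)\b", re.IGNORECASE)
INJECTION_RE = re.compile(r"'\s*OR\s+'1'\s*=\s*'1", re.IGNORECASE)

def should_block(statement, user, admins=frozenset({"dba"})):
    """Return (block?, reason) for a statement before it executes."""
    if DDL_RE.match(statement) and user not in admins:
        return True, "DDL from non-admin account"
    if INJECTION_RE.search(statement):
        return True, "suspected SQL injection"
    return False, ""
```

Because the decision happens before execution, this only works when the collection point sits inline – which is exactly why blocking constrains deployment architecture.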
One of the most interesting aspects of this evolution is how few architectural changes are needed to provide these new capabilities. DSP still looks a lot like DAM, but functions quite differently. We will get into architecture later in this series. Next we will go into detail on the features that define DSP and illustrate how they all work together.
Posted at Monday 6th February 2012 11:05 pm
By Adrian Lane
We love the Totally Transparent Research process. Times like this – where we hit upon new trends, uncover unexpected customer use cases, or discover something going on behind the scenes – are when our open model really shows its value. We started a Database Activity Monitoring 2.0 series last October and suddenly halted because our research showed that platform evolution has changed from convergence to independent visions of database security, with customer requirements splintering.
These changes are so significant that we need to publicly discuss them so you can understand why we are suddenly making a significant departure from the way we describe a solution we have been talking about for the past 6+ years. Especially since Rich, back in his Gartner days, coined the term “Database Activity Monitoring” in the first place. What’s going on behind the scenes should help you understand how these fundamental changes alter the technical makeup of products and require new vocabulary to describe what we see.
With that, welcome to the reboot of DAM 2.0. We renamed this series Understanding and Selecting Database Security Platforms to reflect massive changes in products and the market. We will fully define why this is the case as we progress through this series, but for now suffice it to say that the market has simply expanded beyond the bounds of the Database Activity Monitoring definition.
DAM is now only a subset of the Database Security Platform market. For once this isn’t some analyst firm making up a new term to snag some headlines – as we go through the functions and features you’ll see that real products on the market today go far beyond mere monitoring. The technology trends, different bundles of security products, and use cases we will present, are best reflected by the term “Database Security Platform”, which most accurately reflects the state of the market today.
This series will consist of 6 distinct parts, some of which appeared in our original Database Activity Monitoring paper.
- Defining DSP: Our longstanding definition for DAM is broad enough to include many of the changes, but will be slightly updated to incorporate the addition of new data collection and analysis options. Ultimately the core definition does not change much, as we took into account two anticipated trends when we initially created it, but a couple subtle changes encompass a lot more real estate in the data center.
- Available Features: Different products enter the DSP market from different angles, so we think it best to list out all the possible major features. We will break these out into core components vs. additional features to help focus on the important ones.
- Data Collection: The minimum feature set for DAM included database queries, database events, configuration data, audit trails, and permission management for several years. The continuing progression of new data and event sources, from both relational and non-relational data sources, extends the reach of the security platform to include many new application types. We will discuss the implications in detail.
- Policy Enforcement: The addition of hybrid data and database security protection bundled into a single product. Masking, redaction, dynamically altered query results, and even tokenization build on existing blocking and connection reset options to offer better granularity of security controls. We will discuss the technologies and how they are bundled to solve different problems.
- Platforms: The platform bundles, and these different combinations of capabilities, best demonstrate the change from DAM to DSP. There are bundles that focus on data security, compliance policy administration, application security, and database operations. We will spend time discussing these different visions and how they are being positioned for customers.
- Use Cases & Market Drivers: What companies are looking to secure mirrors their adoption of new platforms, such as collaboration platforms (SharePoint), cloud resources, and unstructured data repositories. Compliance, operations management, performance monitoring, and data security requirements follow the adoption of these new platforms, which has driven the adaptation and evolution of DAM into DSP. We will examine these use cases and how the DSP platforms are positioned to address demand.
A huge proportion of the original paper was influenced by the user and vendor communities (I can confirm this – I commented on every post during development, a year before I joined Securosis – Adrian). As with that first version, we strongly encourage user and vendor participation during this series. It does change the resulting paper, for the better, and really helps the community understand what’s great and what needs improvement. All pertinent comments will be open for public review, including any discussion on Twitter, which we will reflect here. We think you will enjoy this series, so we look forward to your participation!
Next up: Defining DSP!
Posted at Monday 30th January 2012 8:45 pm
One of the great things about running around teaching classes is all the feedback and questions we get from people actively working on all sorts of different initiatives. With the CCSK (cloud security) class, we find that a ton of people are grappling with these issues in active projects, with others in various stages of deep planning.
We don’t want to lose this info, so we will blog some of the more interesting questions and answers we get in the field. I’ll skip general impressions and trends today to focus on some specific questions people in last week’s class in Washington, DC, were grappling with:
- We currently use XXX Database Activity Monitoring appliance, is there any way to keep using it in Amazon EC2?
This is a tough one because it depends completely on your vendor. With the exception of Oracle (last time I checked – this might have changed), all the major Database Activity Monitoring vendors support server agents as well as inline or passive appliances. Adrian covered most of the major issues between the two in his Database Activity Monitoring: Software vs. Appliance paper. The main question for cloud (especially public cloud) deployments is whether the agent will work in a virtual machine/instance. Most agents use special kernel hooks that need to be validated as compatible with your provider’s virtual machine hypervisor. In other words: yes, you can do it, but I can’t promise it will work with your current DAM product and cloud provider. If your cloud service supports multiple network interfaces per instance, you can also consider deploying a virtual DAM appliance to monitor traffic that way, but I’d be careful with this approach and don’t generally recommend it. Finally, there are more options for internal/private cloud, where you can even route the traffic back to a dedicated appliance if necessary – but watch performance if you do.
- How can we monitor users connecting to cloud services over SSL?
This is an easy problem to solve – you just need a web gateway with SSL decoding capabilities. In practice, this means the gateway essentially performs a man in the middle attack against your users. To work, you install the gateway appliance’s certificate as a trusted root on all your endpoints. This doesn’t work for remote users who aren’t going through your gateway. This is a fairly standard approach for both web content security and Data Loss Prevention, but those of you just using URL filtering may not be familiar with it.
- Can I use identity management to keep users out of my cloud services if they aren’t on the corporate network?
Absolutely. If you use federated identity (probably SAML), you can configure things so users can only log into the cloud service if they are logged into your network. For example, you can configure Active Directory to use SAML extensions, then require SAML-based authentication for your cloud service. The SAML token/assertion will only be made when the user logs into the local network, so they can’t ever log in from another location. You can screw up this configuration by allowing persistent assertions (I’m sure Gunnar will correct my probably-wrong IAM vernacular). This approach will also work for VPN access (don’t forget to disable split tunnels if you want to monitor activity).
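To illustrate the persistent-assertion pitfall, here is a rough stdlib sketch of a freshness check on a SAML assertion's Conditions element. Real service providers also verify the XML signature and audience restriction, both omitted here, and the five-minute window is an arbitrary assumption:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def assertion_is_fresh(assertion_xml, now=None, max_age=timedelta(minutes=5)):
    """Reject assertions whose NotOnOrAfter window is already past
    or is generous enough to allow persistent reuse."""
    now = now or datetime.utcnow()
    root = ET.fromstring(assertion_xml)
    cond = root.find("saml:Conditions", NS)
    not_after = datetime.strptime(cond.get("NotOnOrAfter"), "%Y-%m-%dT%H:%M:%SZ")
    return now < not_after <= now + max_age
```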
- What’s the CSA STAR project?
STAR (Security, Trust & Assurance Registry) is a Cloud Security Alliance program where cloud providers perform and submit self-assessments of their security practices.
- How can we encrypt big data sets without changing our applications?
This isn’t a cloud-specific problem, but does come up a lot in the encryption section. First, I suggest you check out our paper on encryption: Understanding and Selecting a Database Encryption or Tokenization Solution. The best cloud option is usually volume encryption for IaaS. You may also be able to use some other form of transparent encryption, depending on the various specifics of your database and application. Some proxy-based in-the-cloud encryption solutions are starting to appear.
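Since tokenization comes up alongside transparent encryption, here is a toy sketch of a token vault that swaps sensitive values for same-length random tokens, so application schemas stay untouched. A real vault persists its mapping in a hardened store rather than an in-memory dict, and handles collisions; both are glossed over here:

```python
import secrets

class TokenVault:
    """Tokenization sketch: replace card numbers with same-length random
    digit strings so downstream applications keep working unmodified.
    The in-memory dict stands in for a hardened, persistent vault."""

    def __init__(self):
        self._forward, self._reverse = {}, {}

    def tokenize(self, value):
        if value not in self._forward:
            token = "".join(secrets.choice("0123456789")
                            for _ in range(len(value)))
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, token):
        return self._reverse[token]
```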
That’s it from this class… we had a ton of other questions, but these stood out. As we teach more we’ll keep posting more, and I should get input from other instructors as they start teaching their own classes.
Posted at Monday 22nd August 2011 7:31 pm
By Adrian Lane
For Database Activity Monitoring, virtual appliances are a response to hardware appliances not fitting into virtualization models – management, hardware consolidation, resource and network abstraction, and even power savings all work against dedicated boxes. Infrastructure as a Service (IaaS) disrupts the hardware model. So DAM vendors pack their application stacks into virtual machine images and sell those. It’s a quick win for them, as very few changes are needed, and they escape the limitations of hardware. A virtual appliance is ‘built’ and configured like a hardware appliance, but delivered without the hardware. That means all the software – both third party and vendor created – contained within the hardware appliances is now wrapped in a virtual machine image. This image is run and managed by a Virtual Machine Manager (VMware, Xen, Hyper-V, etc.), but otherwise functions the same as a physical appliance.
In terms of benefits, virtual appliances are basically the opposite of hardware appliances. Like the inhabitants of mirror universes in Star Trek, the participants look alike but act very differently. Sure, they share some similarities – such as ease of deployment and lack of hardware dependencies – but many aspects are quite different from software or hardware based DAM.
Advantages over physical hardware include:
- Scale: Taking advantage of the virtual architecture, it’s trivial to spin up new appliances to meet demand. Adding new instances is a simple VMM operation. Multiple instances still collect and process events, and send alerts and event data to a central appliance for processing. You still have to deploy software agents, and manage connections and credentials, of course.
- Cloud & Virtual Compatibility: A major issue with hardware appliances is their poor fit in cloud and virtual environments. Virtual instances, on the other hand, can be configured and deployed in virtual networks to both monitor and block suspicious activity.
- Management: Virtual DAM can be managed just like any other virtual machine, within the same operational management framework and tools. Adding resources to the virtual instance is much easier than upgrading hardware. Patching DAM images is easier, quicker, and less disruptive. And it’s easy to move virtual appliances to account for changes in the virtual network topology.
- Performance: This is in stark contrast to hardware appliance performance. Latency and performance are both cited by customers as issues. Not running on dedicated hardware has a cost – resources are neither dedicated nor tuned for DAM workloads. Event processing performance is in line with software, which is not a concern. The more serious issue is disk latency and event transfer speeds, both of which are common complaints. Deployment of virtual DAM is no different than most virtual machines – as always, you must consider storage connection latency and throughput. DAM is particularly susceptible to latency – it is designed for real-time monitoring – so it’s important to monitor I/O performance and virtual bottlenecks, and adjust accordingly.
- Elasticity: In practice the VMM is far more elastic than the application – virtual DAM appliances are very easy to replicate, but don’t take full advantage of added resources without reconfiguration. Added memory and processing power help, but as with software, virtual appliances require configuration to match customer environments.
- Cost: Cost is not necessarily either an advantage or a problem, but it is a serious consideration when moving from hardware to a virtual model. Surprisingly, I find that customers using virtual environments have more – albeit smaller – databases. And thus they have more virtual appliances backing those databases. Ultimately, cost depends entirely on the vendor’s licensing model. If you’re paying on a per-appliance or per-database model costs go up. To reduce costs either consolidate database environments or renegotiate pricing.
I did not expect to hear about deconsolidation of database images when speaking with customers. But customer references demonstrate that virtual appliances are added to supplement existing hardware deployments – either to fill in capacity or to address virtual networking issues for enterprise customers. Interestingly, there is no trend of phasing either out in favor of the other, but customers stick with the hybrid approach. If you have user or vendor feedback, please comment.
Next I will discuss data collection techniques. These are important for a few reasons – most importantly because every DAM deployment relies on a software agent somewhere to collect events. It’s the principal data collection option – so the agent affects performance, management, and separation of duties.
Posted at Tuesday 3rd May 2011 2:00 pm
By Adrian Lane
“It’s anything you want it to be – it’s software!” – Adrian.
Database Activity Monitoring software is deployed differently than DAM appliances. Whereas appliances are usually two-tier event collector / manager combinations which divide responsibilities, software deployments are as diverse as customer environments. It might be stand-alone servers installed in multiple geographic locations, loosely coupled confederations each performing different types of monitoring, hub & spoke systems, everything on a single database server, all the way up to N-tier enterprise deployments. It’s more about how the software is configured and how resources are allocated by the customer to address their specific requirements. Most customers use a central management server communicating directly with software agents which collect events. That said, the management server configuration varies from customer to customer, and evolves over time.
Most customers divide the management server functions across multiple machines when they need to increase capacity, as requirements grow. Distributing event analysis, storage, management, and reporting across multiple machines enables tuning each machine to its particular task; and provides additional failover capabilities. Large enterprise environments dedicate several servers to analyzing events, linking those with other servers dedicated to relational database storage. This latter point – use of relational database storage – is one of the few major differences between software and hardware (appliance) embodiments, and the focus of the most marketing FUD (Fear, Uncertainty, and Doubt) in this category. Some IT folks consider relational storage a benefit, others a detriment, and some a bit of both; so it’s important to understand the tradeoffs. In a nutshell, relational storage requires more resources to house and manage data; but in exchange provides much better analysis, integration, deployment, and management capabilities. Understanding the differences in deployment architecture and use of relational storage are key to appreciating software’s advantages.
Advantages of software over appliances include:
- Flexible Deployment: Add resources and tune your platforms specifically to your database environment, taking into account the geographic and logical layout of your network. Whether it’s thousands of small databases or one very large database – one location or thousands – it’s simply a matter of configuration. Software-based DAM offers a half-dozen different deployment architectures, with variations on each to support different environments. If you choose wrong simply reconfigure or add additional resources, rather than needing to buy new appliances.
- Scalability & Modular Architecture: Software DAM scales in two ways: additional hardware resources and “divide & conquer”. DAM installations scale with processor and memory upgrades, or you can move the installation to a larger new machine to support processing more events. But customers more often choose to scale by partitioning the DAM software deployment across multiple servers – generally placing the DAM engine on one machine, and the relational database on another. This effectively doubles capacity, and each platform can be tuned for its function. This model scales further with multiple event processing engines on the front end, letting the database handle concurrent insertions, or by linking multiple DAM installations via back end database. Each software vendor offers a modular architecture, enabling you to address resource constraints with very good granularity.
- Relational Storage: Most appliances use flat files to store event data, while software DAM uses relational storage. Flat files are extraordinarily fast at writing new events to disk, supporting higher data capture rates than equivalent software installations. But the additional overhead of the relational platform is not wasted – it provides concurrency, normalization, indexing, backup, partitioning, data encryption, and other services. Insertion rates are lower, while complex reports and forensic analyses are faster. In practice, software installations can directly handle more data than DAM appliances without resorting to third-party tools.
- Operations: As Securosis just went through a deployment analysis exercise, we found that operations played a surprisingly large part in our decision-making process. Software-based DAM looks and behaves like the applications your operations staff already manages. It also enables you to choose which relational platform to store events on – whether IBM, Oracle, MS SQL Server, MySQL, Derby, or whatever you have. You can deploy on the OS (Linux, HP/UX, Solaris, Windows) and hardware (HP, IBM, Oracle, Dell, etc.) you prefer and already own. There is no need to re-train IT operations staff because management fits within existing processes and systems. You can deploy, tune, and refine the DAM installation as needed, with much greater flexibility to fit your model. Obviously customers who don’t want to manage extra software prefer appliances, but they are dependent on vendors or third party providers for support and tuning, and need to provide VPN access to production networks to enable regular maintenance.
- Cost: In practice, enterprise customers realize lower costs with software. Companies that have the leverage to buy hardware at discounts and/or own software site licenses can scale DAM across the organization at much lower total cost. Software vendors offer tiered pricing and site licenses once customers reach a certain database threshold. Cost per DAM installation goes down, unlike appliance pricing which is always basically linear. And the flexibility of software allows more efficient deployment of resources. Site licenses provide cost containment for large enterprises that roll out DAM across the entire organization. Midmarket customers typically don’t realize this advantage – at least not to the same extent – but ultimately software costs less than appliances for enterprises.
- Integration: Theoretically, appliance and software vendors all offer integration with third party services and tools. All the Database Activity Monitoring deployment choices – software, hardware, and virtual appliances – offer integration with workflow, trouble-ticket, log management, and access control systems. Some also provide integration with third-party policy management and reporting services. In practice the software model offers additional integration points that provide more customer options. Most of these additional capabilities are thanks to the underlying relational databases – leveraging additional tools and procedural interfaces. As a result, software DAM deployments provide more options for supporting business analytics, SIEM, storage, load balancing, and redundancy.
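The relational-storage tradeoff described in the list above is easy to sketch with an embedded database standing in for the event store – inserts cost more than appending to a flat file, but indexed forensic queries become trivial. The schema is an illustrative assumption:

```python
import sqlite3

def open_event_store(path=":memory:"):
    """Relational event store sketch: an index on db_user makes
    per-user forensic queries cheap, at some insert cost."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS events (
                    ts TEXT, db_user TEXT, action TEXT, obj TEXT)""")
    db.execute("CREATE INDEX IF NOT EXISTS idx_user ON events (db_user)")
    return db

def record(db, ts, user, action, obj):
    db.execute("INSERT INTO events VALUES (?, ?, ?, ?)", (ts, user, action, obj))

def activity_for_user(db, user):
    return db.execute(
        "SELECT ts, action, obj FROM events WHERE db_user = ?",
        (user,)).fetchall()
```

A flat-file collector would simply append each event line, which is faster to write but forces a full scan (or an external tool) for every report.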
As I mentioned in the previous post, most of these advantages are not visible during the initial deployment phases or Proof of Concept (PoC). Over the product lifespan, however, these benefits really pay off, and are often essential to enterprise customers. Still, it’s not all a bed of roses:
- Time to Install & Configure: Every DAM instance must be installed and configured prior to deployment. Hardware-based appliances come pre-configured, and virtual appliances are deployed from snapshots and pre-configured images. You can create installer scripts and images to reduce repetitious installs, but it’s clearly more work to get software up and running.
- Security: In theory, software security should be equivalent to appliance security, and most software-based DAM installations are just as secure. In practice, however, appliance vendors provide better security up front. Software vendors provide guidance and best practices, but given the diversity of their customers’ deployment models, they cannot fully configure security through post-install automation scripts. It’s up to the IT operations and management team to close the gap between the deployment model and their own security guidelines. Both software and hardware provide a logical separation of policies, credentials, access to events, and DAM management. For organizations which need a physical – as opposed to logical – separation of roles across multiple installations, software scales at lower cost.
- Patching: Software, hardware, and virtual appliances all need to be patched and updated. Appliances are maintained by the vendor. With software you get to do all the patching.
- Hardware: Just because you did not buy an appliance does not mean you don’t need to buy hardware. Some organizations have spares and can easily provision DAM on inventory they already have, but most need to requisition new equipment. Worse – as I know from practical experience – customers test software deployments (PoC) on whatever old garbage is in the closet that nobody can use for real work, while appliance vendors FedEx sleek new boxes with much more oomph. In this kind of rigged comparison, appliances of course perform much better. Just remember that you need to do capacity planning, set a budget, and climb the approval chain for both the DAM product and its hardware. The good news is most organizations are used to this, but it’s still a hassle.
Most DAM vendors see themselves as software vendors – even the ones which bundle their code into hardware as their primary distribution model. They do write software, especially the complex agents that collect events, but it’s not the same. Make no mistake: there are significant differences between pure software and virtual appliances, as will become readily apparent when we discuss Virtual Appliances, next.
Posted at Thursday 28th April 2011 8:13 pm
(0) Comments •
By Adrian Lane
One thing I don’t miss from my vendor days in the Database Activity Monitoring market is the competitive infighting. Sure, I loved to do the competitive analyses to see how each vendor viewed itself, and how they were all trying to differentiate their products. I did not enjoy going into a customer shop after a competitor “poisoned the well” with misleading statements, evangelical pitches touting the right way to tackle a problem, or flat-out lies. Being second into a customer account meant having to deal with the dozen land mines left in their minds, and explaining those issues just to get back to even. The common land mines were about performance, lack of impact on IT systems, and platform support. The next vendor in line countered with claims about architectures that did not scale, difficulties in deployment, inability to collect important events, and the management complexity of every other product on the market. Customers often cannot determine who’s lying until after they purchase something and see whether it does what the vendor claimed, so this game continues until the market reaches a certain level of maturity.
With Database Activity Monitoring, the appliance vs. software debate is still raging. It’s not front and center in most product marketing materials. It’s not core to solving most security challenges. It is positioned as an advantage behind the scenes, especially during bake-offs between vendors, to undermine competitors. The criticism is based not on the way events are processed, the UI, or event storage – but simply on the deployment model. Hardware is better than software. Software is better than hardware. This virtual hardware appliance is just as good as software. And so on.
This is an area where I can help customers understand the tradeoffs of the different models. Today I am kicking off a short series to discuss tradeoffs between appliance, software, and virtual appliance implementations of Database Activity Monitoring systems. I’ll research the current state of the DAM market and highlight the areas you need to focus on to determine which is right for you. I’ll also share some personal experiences that illustrate the difference between the theoretical and the practical. The series will be broken into four parts:
- Hardware: Discussion of hardware appliances dedicated to Database Activity Monitoring. I’ll cover the system architecture, common deployment models, and setup. Then we’ll delve into the major benefits and constraints of appliances including performance, scalability, architecture, and disaster recovery.
- Software: Contrasting DAM appliances with software architecture and deployment models, then covering pros and cons including installation and configuration, flexibility, scalability, and performance.
- Virtual Appliances: Virtualization and cloud models demand adaptation for many security technologies, and DAM is no different. Here I will discuss why virtual appliances are necessary – contrasting them with hardware-based appliances – and cover the practical considerations that crop up.
- Data Collection and Management: A brief discussion of how data collection and management affect DAM. I will focus on areas that come up in competitive situations and tend to confuse buying decisions.
I have been an active participant in these discussions over the last decade, and I worked for a DAM software provider. As a result I need to acknowledge, up front, my historical bias in favor of software. I have publicly stated my preference for software in the past, based upon my experiences as a CIO and author of DAM technology. As an analyst, however, I have come to recognize that there is no single ‘best’ technology. My own experiences sometimes differ from customer reality, and I understand that every customer has its own preferred way of doing things.
But make no mistake – the deployment model matters! That said, there is no single ‘best’ model. Hardware, software, and virtual appliances each have advantages and disadvantages. What works for each customer depends on its specific needs. And just like vendors, customers will have their own biases. What’s important is what is ‘better’ for the consumer. I will provide a list of pros and cons to help you decide what will work best. I will point out my own preferences (bias), and as always you are welcome to call ‘BS’ on anything in this series you don’t accept.
Perhaps more than any other series I have ever written at Securosis, I want to encourage feedback from the security and IT practitioner community. Why? Because I have witnessed too many software solutions that don’t scale as advertised. I am aware of several hardware deployments that cost the customer almost 4X the original bid. I am aware of software – my own firm was guilty – so inflexible we were booted from the customer site. I know these issues still occur, so my goal is to help you wade through the competitive puffery. I encourage you to share what you have seen, what you prefer, and why – it helps the community.
Posted at Monday 11th April 2011 9:00 pm
(0) Comments •
By Adrian Lane
McAfee announced this morning its intention to acquire Sentrigo, a Database Activity Monitoring company. McAfee has had a partnership with Sentrigo for a couple years, and both companies have cooperatively sold the Sentrigo solution and developed high-level integration with McAfee’s security management software. McAfee’s existing enterprise customer base has shown interest in Database Activity Monitoring, and DAM is no longer as much of an evangelical sale as it used to be. Sentrigo is a small firm and integration of the two companies should go smoothly.
Despite persistent rumors of larger firms looking to buy in this space, I am surprised that McAfee finally acquired Sentrigo. McAfee, Symantec, and EMC are the names that kept popping up as interested parties, but Sentrigo wasn’t the target discussed. Still, this looks like a good fit because the core product is very strong, and it fills a need in McAfee’s product line. The aspects of Sentrigo that are a bit scruffy or lack maturity are the areas McAfee would want to tailor anyway: workflow, UI, reporting, and integration.
I have known the Sentrigo team for a long time. Not many people know that I tried to license Sentrigo’s memory scanning technology – back in 2006 while I was at IPLocks. Several customers used the IPLocks memory scanning option, but the scanning code we licensed from BMC simply wasn’t designed for security. I heard that Sentrigo architected their solution correctly and wanted to use it. Alas, they were uninterested in cooperating with a competitor for some odd reason, but I have maintained good relations with their management team since. And I like the product because it offers a (now) unique option for scraping SQL right out of the database memory space.
But there is a lot more to this acquisition than just memory scraping agents. Here are some of the key points you need to know about:
Key Points about the Acquisition
- McAfee is acquiring a Database Activity Monitoring (DAM) technology to fill out their database security capabilities. McAfee obviously covers the endpoints, network, and content security pieces, but was missing some important pieces for datacenter application security. The acquisition advances their capabilities for database security and compliance, filling one of the key gaps.
- Database Activity Monitoring has been a growing requirement in the market, with buying decisions driven equally by compliance requirements and response to escalating use of SQL injection attacks. Interest in DAM was previously to address insider threats and Sarbanes-Oxley, but market drivers are shifting to blocking external attacks and compensating controls for PCI.
- Sentrigo will be wrapped into the Risk and Compliance business unit of McAfee, and I expect deeper integration with McAfee’s ePolicy Orchestrator.
- Selling price has not been disclosed.
- Sentrigo is one of the only DAM vendors to build cloud-specific products (beyond a simple virtual appliance). The real deal – not cloudwashing.
What the Acquisition Does for McAfee
- McAfee responded to Oracle’s acquisition of Secerno, and can now offer a competitive product for activity monitoring as well as virtual patching of heterogeneous databases (e.g., Oracle, IBM, etc.).
- While it’s not well known, Sentrigo also offers database vulnerability assessment. Preventative security checks, patch verification, and reports are critical for both security and compliance.
- One of the reasons I like the Sentrigo technology is that it embeds into the database engine. For some deployment models, including virtualized environments and cloud deployments, you don’t need to worry about the underlying environment supporting your monitoring functions. Most DAM vendors offer security sensors that move with the database in these environments, but are embedded at the OS layer rather than the database layer. As with transparent database encryption, Sentrigo’s model is a bit easier to maintain.
What This Means for the DAM Market
- Once again, we have a big name technology company investing in DAM. Despite the economic downturn, the market has continued to grow. We no longer estimate the market size, as it’s too difficult to find real numbers from the big vendors, but we know it passed $100M a while back.
- We are left with two major independent firms that offer DAM: Imperva and Application Security Inc. Lumigent, GreenSQL, and a couple other firms remain on the periphery. I continue to hear acquisition interest, and several firms still need this type of technology.
- Sentrigo was a late entry into the market. As with all startups, it took them a while to fill out the product line and get the basic features/functions required by enterprise customers. They have reached that point, and with the McAfee brand, there is now another serious competitor to match up against Application Security Inc., Fortinet, IBM/Guardium, Imperva, Nitro, and Oracle/Secerno.
What This Means for Users
- Sentrigo’s customer base is not all that large – I estimate fewer than 200 customers worldwide, with the average installation covering 10 or so databases. I highly doubt there will be any technology disruption for existing customers. I also highly doubt this product will become shelfware in McAfee’s portfolio, as McAfee has internally recognized the need for DAM for quite a while, and has been selling the technology already.
- Any existing McAfee customers using alternate solutions will be pressured to switch over to Sentrigo, and I imagine will be offered significant discounts to do so. Sentrigo’s DAM vision – for both functionality and deployment models – is quite different than its competitors, which will make it harder for McAfee to convince customers to switch.
- The huge upside is the possibility of additional resources for Sentrigo development. Slavik Markovich’s team has been the epitome of a bootstrapping start-up, running a lean organization for many years now. They deserve congratulations for making it this far on less than $20M in VC funds. They have been slowly and systematically adding enterprise features such as user management and reporting, broadening platform support, and finally adding vulnerability assessment scanning. The product is still a little rough around the edges, and lacks some maturity in UI and capabilities compared to Imperva, Guardium, and AppSec – those products have been fleshing out their capabilities for years longer.
In a nutshell, I can say – having done two formal in-depth reviews of the product – that Sentrigo’s core technology is well architected. What’s more important is that their data collectors – their market differentiators – are implemented very well. It’s an incredible accomplishment to scan the active memory of a transactional database system without causing a major performance impact. McAfee has some work to do with the product, but they are getting a solid product for their money … however much that may be.
Congratulations to both Sentrigo and McAfee!
Posted at Wednesday 23rd March 2011 4:37 pm
(0) Comments •
When Mike was reviewing the latest Pragmatic Data Security post, he nailed me for being too apologetic about telling people they need to spend money on data-security-specific tools. (The line isn’t in the published post.)
Just so you don’t think Mike treats me any nicer in private than he does in public, here’s what he said:
Don’t apologize for the fact that data discovery needs tools. It is what it is. They can be like almost everyone else and do nothing, or they can get some tools to do the job. Now helping to determine which tools they need (which you do later in the post) is a good thing. I just don’t like the apologetic tone.
As someone who is often a proponent for tools that aren’t in the typical security arsenal, I’ve found myself apologizing for telling people to spend money. Partially, it’s because it isn’t my money… and I think analysts all too often forget that real people have budget constraints. Partially it’s because certain users complain or look at me like I’m an idiot for recommending something like DLP.
I have a new answer next time someone asks me if there’s a free tool to replace whatever data security tool I recommend:
Did you build your own Linux box running ipfw to protect your network, or did you buy a firewall?
The important part is that I only recommend these purchases when they will provide you with clear value in terms of improving your security over alternatives. Yep, this is going to stay a tough sell until some regulation or PCI-like standard requires them.
Thus I’m saying, here and now, that if you need to protect data you likely need DLP (the real thing, not merely a feature of some other product) and Database Activity Monitoring. I haven’t found any reasonable alternatives that provide the same value.
There. I said it. No more apologies – if you have the need, spend the money. Just make sure you really have the need, and the tool you are looking at really delivers the value, since not all solutions are created equal.
Posted at Monday 1st February 2010 5:54 pm
(3) Comments •
In the Discovery phase we figure out where the heck our sensitive information is, how it’s being used, and how well it’s protected. If performed manually, or with too broad an approach, Discovery can be quite difficult and time consuming. In the pragmatic approach we stick with a very narrow scope and leverage automation for greater efficiency. A mid-sized organization can see immediate benefits in a matter of weeks to months, and usually finish a comprehensive review (including all endpoints) within a year or less.
Discover: The Process
Before we get into the process, be aware that your job will be infinitely harder if you don’t have a reasonably up to date directory infrastructure. If you can’t figure out your users, groups, and roles, it will be much harder to identify misuse of data or build enforcement policies. Take the time to clean up your directory before you start scanning and filtering for content. Also, the odds are very high that you will find something that requires disciplinary action. Make sure you have a process in place to handle policy violations, and work with HR and Legal before you start finding things that will get someone fired (trust me, those odds are pretty darn high).
You have a couple choices for where to start – depending on your goals, you can begin with applications/databases, storage repositories (including endpoints), or the network. If you are dealing with something like PCI, stored data is usually the best place to start, since avoiding unencrypted card numbers on storage is an explicit requirement. For HIPAA, you might want to start on the network since most of the violations in organizations I talk to relate to policy violations over email/web/FTP due to bad business processes. For each area, here’s how you do it:
- Storage and Endpoints: Unless you have a heck of a lot of bodies, you will need a Data Loss Prevention tool with content discovery capabilities (I mention a few alternatives in the Tools section, but DLP is your best choice). Build a policy based on the content definition you built in the first phase. Remember, stick to a single data/content type to start. Unless you are in a smaller organization and plan on scanning everything, you need to identify your initial target range – typically major repositories or endpoints grouped by business unit. Don’t pick something too broad or you might end up with too many results to do anything with. Also, you’ll need some sort of access to the server – either by installing an agent or through access to a file share. Once you get your first results, tune your policy as needed and start expanding your scope to scan more systems.
- Network: Again, a DLP tool is your friend here, although unlike with content discovery you have more options to leverage other tools for some sort of basic analysis. They won’t be nearly as effective, and I really suggest using the right tool for the job. Put your network tool in monitoring mode and build a policy to generate alerts using the same data definition we talked about when scanning storage. You might focus on just a few key channels to start – such as email, web, and FTP – with a narrow IP range/subnet if you are in a larger organization. This will give you a good idea of how your data is being used, identify bad business processes (like unencrypted FTP to a partner), and show which users or departments are the worst abusers. Based on your initial results, you’ll tune your policy as needed. Right now our goal is to figure out where we have problems – we will get to fixing them in a different phase.
- Applications & Databases: Your goal is to determine which applications and databases have sensitive data, and you have a few different approaches to choose from. This is the part of the process where a manual effort can be somewhat effective, although it’s not as comprehensive as using automated tools. Simply reach out to different business units, especially the application support and database management teams, to create an inventory. Don’t ask them which systems have sensitive data – ask them for an inventory of all systems. The odds are very high your data is stored in places you don’t expect, so to check these systems, perform a flat file dump and scan the output with a pattern matching tool. If you have the budget, I suggest using a database discovery tool – preferably one with built-in content discovery (there aren’t many on the market, as we’ll mention in the Tools section). Depending on the tool you use, it will either sniff the network for database connections and then identify those systems, or scan based on IP ranges. If the tool includes content discovery, you’ll usually give it some level of administrative access to scan the internal database structures.
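As a rough illustration of the IP-range scanning approach mentioned above, here is a minimal Python sketch that probes hosts for the default listener ports of common database platforms. The port list, timeout, and function names are my own illustrative assumptions – a real database discovery product also fingerprints the service banner, and may sniff the network for connections instead.

```python
import socket

# Common default listener ports for popular database platforms
DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "IBM DB2",
}

def probe_host(host, timeout=0.5):
    """Return (port, platform) pairs that accepted a TCP connection."""
    found = []
    for port, platform in DB_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, platform))
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

def scan_range(hosts):
    """Build an inventory: host -> list of likely database services."""
    return {h: probe_host(h) for h in hosts}
```

An open port is only a hint that a database is listening, which is why the content discovery step – actually looking inside the structures – still matters.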
I just presented a lot of options, but remember we are taking the pragmatic approach. I don’t expect you to try all this at once – pick one area, with a narrow scope, knowing you will expand later. Focus on wherever you think you might have the greatest initial impact, or where you have known problems. I’m not an idealist – some of this is hard work and takes time, but it isn’t an endless process and you will have a positive impact.
We aren’t necessarily done once we figure out where the data is – for approved repositories, I really recommend you also re-check their security. Run at least a basic vulnerability scan, and for bigger repositories I recommend a focused penetration test. (Of course, if you already know it’s insecure you probably don’t need to beat the dead horse with another check). Later, in the Secure phase, we’ll need to lock down the approved repositories so it’s important to know which security holes to plug.
Discover: Technologies
Unlike the Define phase, here we have a plethora of options. I’ll break this into two parts: recommended tools that are best for the job, and ancillary tools in case you don’t have budget for anything new. Since we’re focused on the process in this series, I’ll skip definitions and descriptions of the technologies – most of which you can find in our Research Library.
- Data Loss Prevention (DLP): This is the best tool for storage, network, and endpoint discovery. Nothing else is nearly as effective.
- Database Discovery: While there are only a few tools on the market, they are extremely helpful for finding all the unexpected databases that tend to be floating around most organizations. Some offer content discovery, but it’s usually limited to regular expressions/keywords (which is often totally fine for looking within a database).
- Database Activity Monitoring (DAM): A couple of the tools include content discovery (some also include database discovery). I only recommend DAM in the discover phase if you also intend to use it later for database monitoring – otherwise it’s not the right investment.
- IDS/IPS/Deep Packet Inspection: There are a bunch of different deep packet inspection network tools – including UTM, Web Application Firewalls, and web gateways – that now include basic regular expression pattern matching for “poor man’s” DLP functionality. They only help with data that fits a pattern, they don’t include any workflow, and they usually have a ton of false positives. If the tool can’t crack open file attachments/transfers it probably won’t be very helpful.
- Electronic Discovery, Search, and Data Classification: Most of these tools perform some level of pattern matching or indexing that can help with discovery. They tend to have much higher false positive rates than DLP (and usually cost more if you’re buying new), but if you already have one and budgets are tight they can help.
- Email Security Gateways: Most of the email security gateways on the market can scan for content, but they are obviously limited to only email, and aren’t necessarily well suited to the discovery process.
- FOSS Discovery Tools: There are a couple of free/open source content discovery tools, mostly projects from higher education institutions that built their own tools to weed out improper use of Social Security numbers due to a regulatory change a few years back.
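Several of the tools above boil down to regular-expression pattern matching, which on its own generates the “ton of false positives” noted earlier. A minimal sketch of one way to cut the noise for card numbers: pair the pattern with a Luhn checksum, a standard validity test for card numbers. The regex and length limits here are illustrative, not any particular product’s implementation.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces/hyphens
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits):
    """Luhn checksum: true only for plausibly real card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Regex match first, then checksum to discard random digit runs."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Even with the checksum, a 16-digit invoice number can still pass, which is why purpose-built DLP with fingerprinting and workflow remains the better tool for the job.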
Discover: Case Study
Frank from Billy Bob’s Bait Shop and Sushi Outlet decides to use a DLP tool to help figure out where any unencrypted credit card numbers might be stored. He decides to go with a full suite DLP tool since he knows he needs to scan his network, storage, servers in the retail outlets, and employee systems.
Before turning on the tool, he contacts Legal and HR to set up a process in case they find any employees illegally using these numbers, as opposed to the accidental or business-process leaks he also expects to manage. Although his directory servers are a little messy due to all the short-term employees endemic to retail operations, he’s confident his core Active Directory server is relatively up to date, especially where systems/servers are concerned.
Since he’s using a DLP tool, he develops a three-tier policy to base his discovery scans on:
- Using the one database with stored unencrypted numbers, he creates a database fingerprinting policy to alert on exact matches from that database (his DLP tool uses hashes, not the original values, so it isn’t creating a new security exposure). These are critical alerts.
- His next policy uses database fingerprints of all customer names from the customer database, combined with a regular expression for generic credit card numbers. If a customer name appears with something that matches a credit card number (based on the regex pattern) it generates a medium alert.
- His lowest priority policy uses the default “PCI” category built into his DLP tool, which is predominantly basic pattern matching.
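To illustrate the exact-match idea behind Frank’s first policy, here is a rough sketch of hash-based fingerprinting: the policy stores only digests of the known card numbers, so the DLP system never holds the original values. The salt, regex, and normalization are my own illustrative assumptions, not how any specific DLP vendor implements database fingerprinting.

```python
import hashlib
import re

SALT = b"site-specific-salt"  # illustrative; keeps digests non-portable

def fingerprint(values):
    """Hash each known sensitive value; the policy stores only digests."""
    return {hashlib.sha256(SALT + v.encode()).hexdigest() for v in values}

DIGIT_RUN = re.compile(r"\d{13,16}")

def match_exact(text, prints):
    """Alert only on candidates whose hash appears in the fingerprint set."""
    hits = []
    # Naive normalization: strip separators before matching digit runs
    for m in DIGIT_RUN.finditer(re.sub(r"[ -]", "", text)):
        if hashlib.sha256(SALT + m.group().encode()).hexdigest() in prints:
            hits.append(m.group())
    return hits
```

Exact matches against the fingerprinted database justify Frank’s “critical” severity, while the regex-plus-customer-name policy catches numbers the fingerprint set doesn’t know about.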
He breaks his project down into three phases, to run during overlapping periods:
- Using those three policies, he turns on network monitoring for email, web, and FTP.
- He begins scanning his storage repositories, starting in the data center. Once he finishes those, he will expand the scans into systems in the retail outlets. He expects his data center scan to go relatively quickly, but is planning on 6-12 months to cover the retail outlets.
- He is testing endpoint discovery in the lab, but since their workstation management is a bit messy he isn’t planning on trying to install agents and beginning scans until the second year of the project.
It took Frank about two months to coordinate with other business/IT units before starting the project. Installing DLP on the network only took a few hours because everything ran through one main gateway, and he wasn’t worried about installing any proxy/blocking technology.
Frank immediately saw network results, and found one serious business process problem where unencrypted numbers were included in files being FTPed to a business partner. The rest of his incidents involved individual accidents, and for the most part they weren’t losing credit card numbers over the monitored channels.
The content discovery portion took a bit longer since there wasn’t a consistent administrative account he could use to access and scan all the servers. Even though they are a relatively small operation, it took about 2 months of full time scanning to get through the data center due to all the manual coordination involved. They found a large number of old spreadsheets with credit card numbers in various directories, and a few in flat files – especially database dumps from development.
The retail outlets actually took less time than he expected. Most of the servers, except at the largest regional locations, were remotely managed and well inventoried. He found that 20% of them were running on an older credit card transaction system that stored unencrypted credit card numbers.
Remember, this is a 1,000 person organization… if you work someplace with five or ten times the employees and infrastructure, your process will take longer. Don’t assume it will take five or ten times longer, though – it all depends on scope, infrastructure, and a variety of other factors.
Posted at Monday 1st February 2010 5:39 pm
(0) Comments •
By Adrian Lane
During several recent briefings, chats with customers, and discussions with existing clients, the topic of data collection methods for Database Activity Monitoring has come up. While Rich provided a good overview for the general buyer of DAM products in his white paper, he did not go into great depth. I was nonetheless surprised that some of the people I discussed the pros and cons of various platforms with were unaware of the breadth of data collection options available. More shocking was a technical briefing with a vendor in the DAM space who did not appear to be aware of the limitations of their own technology choices … or at least would not admit to them. Regardless, I thought it might be beneficial to examine the available options in a little more detail, and talk about some of the pros and cons here.
Database Audit Logs
Summary: Database Audit Logs are, much like they sound, a log of database events that have already happened. The stream of data is typically sent to one or more files created by the database platform, and may reside at the operating system level or may be contained within the database itself. These audit logs contain a mixture of system resource recordings, transactional events, user events, system events, and other data definitions that are not available from other sources. The audit logs are a superset of activity. Collection can be implemented through an agent, or the logs can be queried from the database using normal communication protocols.
Strengths: Best source for accurate data, and the best at ascertaining the state of both data and the database. Breadth of information captured is by far the most complete: all statements are captured, along with trigger and stored procedure execution, batch jobs, system events, and other resource based data. Logs can capture many system events and DML statements that are not always visible through other collection methods. This should be considered one of the two essential methods of data collection for any DAM solution.
Weaknesses: On some platforms the bind variables are not available, meaning that some of the query parameters are not stored with the original query, limiting the value of statement collection. This can be overcome by cross-referencing the transaction logs or, in some cases, the system tables for this information, but at a cost. Select statements are not available, and from a security standpoint, this is a major problem. Performance of the logging function itself can be prohibitive. Older versions of all the database platforms that offered native auditing did so at a very high cost in disk and CPU utilization – upwards of 50% on some platforms. While this has been mitigated to a more manageable percentage, if auditing is not properly set up, or too much information is requested from high-transaction-rate machines, overhead can still creep over 15%. Not all system events are available.
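As a toy illustration of the query-based collection model, the sketch below polls an audit table incrementally over a normal SQL connection, fetching only events newer than the last poll. An in-memory SQLite table stands in for a platform’s native audit view (Oracle’s DBA_AUDIT_TRAIL, for example); the table and column names here are invented for the example.

```python
import sqlite3

# Stand-in audit trail; a real collector queries the platform's native
# audit views over the normal client protocol.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE audit_trail (
    event_time TEXT, username TEXT, action TEXT, object_name TEXT)""")
conn.executemany(
    "INSERT INTO audit_trail VALUES (?,?,?,?)",
    [("2011-04-01 10:00:00", "app_user", "UPDATE", "ORDERS"),
     ("2011-04-01 10:05:00", "dba",      "DROP",   "TEMP_TBL")])

def collect_since(conn, last_seen):
    """Incremental pull: fetch only events newer than the last poll."""
    cur = conn.execute(
        "SELECT event_time, username, action, object_name "
        "FROM audit_trail WHERE event_time > ? ORDER BY event_time",
        (last_seen,))
    return cur.fetchall()

events = collect_since(conn, "2011-04-01 10:01:00")
```

The polling interval is the tradeoff: frequent polls add load to the very database you are monitoring, which is one reason the overhead figures above matter.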
Network Monitoring
Summary: This type of monitoring offers a way to collect SQL statements sent to the database. By monitoring the subnet via network mirror ports or taps, statements intended for a database platform can be ‘sniffed’ directly from the network. This method will capture the original statement, the parameters, and the returned status code, as well as any data returned as part of the query operation. This is typically an appliance-based solution.
Strengths: No performance impact on the database host, combined with the ability to collect SQL statements. On legacy hardware, or where service level agreements prohibit any additional load on the database server, this is an excellent option. It is also a simple and efficient method of collecting failed login activity. Solid, albeit niche, applicability.
Weaknesses: Misses console activity – specifically privileged user activity – against the database. As this is almost always a security and compliance requirement, it is a fundamental failing of this data collection method. Sniffers are typically blind to encrypted sessions, although session encryption is still a seldom-used feature within most enterprises, and not typically a limiting factor. Misses scheduled jobs that originate inside the database. To save disk space, most do not collect the returned data, and some products do a poor job of matching failed status codes to the triggering SQL statements. “You don’t know what you don’t know”: in cases where network traffic is missed, misread, or dropped, there is no record of the activity. This contrasts with native database auditing, where some information may be missing, but the activity itself is always recorded.
OS / Protocol Stack Monitoring
Summary: This is available via agent software that captures statements sent to the database, and the corresponding responses. The agents are deployed either in the network protocol stack, or embedded into the operating system, to capture communications to and from the database. They see an external SQL query sent to the database, along with the associated parameters. These implementations tend to be reliable and low-overhead, with good visibility into database activity. This should be considered a basic requirement for any DAM solution.
Strengths: This is a low-impact way of capturing SQL statements and parameters sent to the database. What’s more, depending upon how they are implemented, agents may also see all console activity, thus addressing the primary weakness of network monitoring and a typical compliance requirement. They tend to, but do not always, see encrypted sessions as they are ‘above’ the encryption layer.
Weaknesses: In rare cases, activity that occurs through management or OS interfaces is not collected, as the port and/or communication protocol varies and may not be monitored or understood by the agent.
System Tables
Summary: All database platforms store their configuration and state information within database structures. These structures are rich in information about who is using the database, permissions, resource usage, and other metadata. This monitoring can be implemented as an agent, or the information can be collected by a remote query.
Strengths: Useful for assessment, and for cross-referencing status and user information collected by other forms of monitoring.
Weaknesses: Lacks much of the transactional information typically needed; full query data is not available. The information tends to be volatile, and offers little insight into the specific operations being performed against the database. Not effective for monitoring on its own, but useful in a supporting role.
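The remote-query flavor of this collection can be illustrated with SQLite's catalog table; the schema and table names are invented for the example, and on other platforms the equivalent would be views such as `pg_catalog` or `sys.*`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("CREATE INDEX idx_orders_total ON orders (total)")

# Remote-query style collection: read metadata from the engine's own
# catalog. Note what we get is schema state, not transactions.
catalog = conn.execute(
    "SELECT type, name FROM sqlite_master WHERE name NOT LIKE 'sqlite_%'"
).fetchall()
```

The result describes objects and structure, which is exactly why this source supports assessment and cross-referencing but cannot tell you what statements were run.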
Stored Procedures & Triggers
Summary: This is the original method for database monitoring: using the database's native stored procedures and triggers to capture activity, and even to enforce policies.
Strengths: Non-transactional event monitoring and policy enforcement. Even today, triggers for some types of policy enforcement can be implemented at very low cost to database performance, and offer preventative controls for security and compliance. For example, triggers that make rudimentary checks during the login process to enforce policies about which applications and users can access the database, at which time of day, can be highly effective. And as login events are generally infrequent, the overhead is inconsequential.
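The login-time policy check described above can be sketched with a SQLite trigger; the `logins` table, trigger name, and allowed application list are all hypothetical stand-ins for a real platform's login event hook.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logins (username TEXT, app TEXT)")

# Hypothetical policy: only these client applications may connect.
conn.execute("""
    CREATE TRIGGER enforce_login_policy
    BEFORE INSERT ON logins
    WHEN NEW.app NOT IN ('erp_frontend', 'batch_loader')
    BEGIN
        SELECT RAISE(ABORT, 'application not permitted');
    END
""")

conn.execute("INSERT INTO logins VALUES ('bob', 'erp_frontend')")  # allowed

try:
    conn.execute("INSERT INTO logins VALUES ('eve', 'sqlplus')")   # blocked
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
```

Because the check fires only on the infrequent login event, it is a preventative control with negligible overhead, in contrast to triggers placed in the transaction path.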
Weaknesses: Triggers, especially those that attempt to alter transactional processes, carry a huge performance cost when placed in line with transaction processing. Stored procedure and trigger execution in line with routine business processing not only increases latency and processing overhead, but can also destabilize applications that use the database. The more policies enforced, the worse performance gets. For general monitoring, this method of data collection has been all but abandoned.
Database Transaction Logs
Summary: Database transaction logs are often confused with audit logs, but they are very different things, used for different reasons. The transaction logs, sometimes called 'redo' logs, are intended to be used by the database to ensure transactional consistency and data accuracy in the event of a hardware or power failure. For example, on the Oracle database platform, the transaction log records the statement first, and when instructed by a 'commit' or similar statement, writes the intended alterations into the database. Once this operation has completed successfully, the completed operation is recorded in the audit log.
Should something happen before this data is fully committed to the database, the transaction log contains sufficient information to roll the database backward and/or forward to a consistent state. For this reason, the transaction log records database state before and after the statement was executed. And due to the nature of their role in database operation, these log files are highly volatile. Monitoring is typically accomplished by a software agent, and requires that the data be offloaded to an external processor for policy enforcement, consolidation and reporting.
Strengths: Before and after values are highly desirable, especially in terms of compliance.
Weaknesses: The format of these files is not always published, and not guaranteed. The burden of reading them accurately and consistently falls on the vendor, which is why this method of data collection is usually only available for Oracle and SQL Server. Transaction logs can generate an incredible quantity of data, which needs to be filtered by policies to be manageable. Despite being designed to ensure consistency of the database, transaction logs are not the best way to understand database state. For example, if a user session is terminated unexpectedly, the data is rolled back, meaning previously collected statements are now invalid and do not represent true database state. Also, on some platforms there are offline and online copies, and which copies are read impacts the quality and completeness of the analysis.
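The rollback weakness can be modeled in a few lines: a monitor replaying the redo stream must discard statements from any session that aborts or is terminated, or its picture of database state is wrong. The record layout below is a toy invention, not any vendor's actual log format.

```python
from dataclasses import dataclass

@dataclass
class LogRecord:
    session: int
    op: str        # 'stmt', 'commit', or 'abort'
    sql: str = ""

def committed_statements(records):
    """Replay a redo stream, keeping only statements whose session later
    committed; statements from aborted sessions never took effect."""
    pending, kept = {}, []
    for r in records:
        if r.op == "stmt":
            pending.setdefault(r.session, []).append(r.sql)
        elif r.op == "commit":
            kept.extend(pending.pop(r.session, []))
        elif r.op == "abort":
            pending.pop(r.session, None)  # discard: rolled back
    return kept

stream = [
    LogRecord(1, "stmt", "UPDATE accounts SET balance = 0"),
    LogRecord(2, "stmt", "DELETE FROM audit_trail"),
    LogRecord(1, "commit"),
    LogRecord(2, "abort"),  # session terminated unexpectedly
]
durable = committed_statements(stream)
```

A monitor that reported session 2's DELETE as fact would be describing a change that never happened.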
Memory Scanning
Summary: Memory scanners are agent-based software that reads the active memory structures of the database engine. On some pre-determined interval, the memory scanning agent examines all statements that have been sent to the database.
Strengths: Memory scanning is good for collecting SQL statements from the database. It can collect the original SQL statements as well as all of the variables associated with the statement. Scanners can also understand complex queries and, in some cases, the resource usage associated with the query. In some implementations, they can also examine those statements, compare the activity against security policy, and send an alert. This is desirable when near-real-time notification is needed, and the delay introduced by network latency when sending the collected information to an appliance or external application is unacceptable. Memory scanning is also highly advantageous on older database platforms where native auditing imposes a harsh performance penalty, providing much of the same data at a much lower overall cost in resources.
Weaknesses: The collection of statements is performed on a periodic scan of memory. The agent 'wakes up' according to a timer, with typical intervals of 5-500 milliseconds; the shorter the interval, the more CPU-intensive the scanning. As the memory structure is conceptually circular, the database will overwrite older statements that still reside in memory but have completed. This means that on machines under heavy load, statements can be overwritten before the memory scan commences, and so be missed. The faster the execution of the statement, the more likely this is to happen.
Memory scanning can also be somewhat fragile: if the vendor changes the memory structures when updating the version of the database, the scanner often breaks. In that case it may miss statements, or it may find garbage at the memory location where it expects a SQL statement.
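The overwrite problem above can be demonstrated with a toy model: a small circular statement cache, a writer under load, and a scanner whose wake-up interval is longer than the buffer can hold. The class and slot count are invented for illustration.

```python
class StatementCache:
    """Toy model of the engine's circular SQL-statement buffer."""
    def __init__(self, slots: int):
        self.slots = [None] * slots
        self.writes = 0

    def record(self, sql: str):
        # New statements overwrite the oldest slot, round-robin.
        self.slots[self.writes % len(self.slots)] = sql
        self.writes += 1

    def scan(self):
        return [s for s in self.slots if s is not None]

cache = StatementCache(slots=4)
seen = set()
interval = 6  # scanner wakes up every 6 statements: slower than the buffer turns over

for i in range(1, 13):
    cache.record(f"stmt-{i}")
    if i % interval == 0:
        seen.update(cache.scan())  # periodic memory scan

issued = {f"stmt-{i}" for i in range(1, 13)}
missed = issued - seen  # statements overwritten before any scan saw them
```

With 12 statements issued, the scanner only ever observes the 4 most recent at each wake-up, so a third of the workload is silently lost, exactly the heavy-load failure mode described.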
Vendor-Specific Interfaces
Summary: Most of the database vendors have unpublished codes and interfaces that turn on various functions and make data available. In the past, these options were intended for debugging, and performance was poor enough that they could not be used in production database environments. As compliance and audit have become big drivers in the database security and DAM space, the database vendors are making these features 'production' options. The audit data is more complete, the performance is better, and the collection and storage of the data are more efficient. Today, these data streams look a lot like enhanced auditing capabilities, and can be tuned to provide a filtered subset while offering both better performance and lower storage requirements. Some vendors offer this today, and some have in the past, but it is not widely available, and not for all database platforms. Many of the options remain unpublished, but I expect to see more of them made public over time and used by DAM and systems management vendors.
Strengths: The data streams tend to be complete, and the data collected is the most accurate source of information on database transactions and the state of the database.
Weaknesses: Not fully public, not fully documented, or in some cases, only available to select development partners.
I may have left one or two out, but these are the important ones to consider. Now, what you do with this data is another long discussion. How you process it, how fast you process it, how you check it against security and compliance policies, how it is stored, and how alerts are generated offer many more ways to differentiate vendors than simply the data they make available to you. Those discussions are for another time.
Posted at Monday 3rd November 2008 9:35 pm