With the holidays upon us, and the weather in Phoenix at that optimal temperature of 50F warmer than wherever people come from, the migration has begun. The snowbirds are back in Phoenix. And all my relatives want to visit. All pretty much at the same time. As I write this I am recovering from 20 consecutive days of four different groups of friends and relatives staying at my home. Overlapping, I might add. And it was glorious – it was great to see each and every one of them – but I heaved a great sigh of relief when the last party got onto a plane and flew back home. I think I have baked, roasted, toasted, and barbecued every type of food I know how to cook. I’ve been a tour guide across the state – twice over – showing off every interesting place within a three-hour drive. Today’s summary is a toast to all of you who survived Thanksgiving – I am thankful for many things, and I am also thankful this holiday is only once a year.


Paul Ford has a thoughtful piece called I Dreamed of a Perfect Database. It nails down the evolutionary track of many of us who have long straddled the database and software development worlds. As our needs changed there were grass-roots projects to make new types of databases – that did, well, whatever the heck we needed them to do. Cheaper and faster. More data, or maybe more types of data, or maybe a single specific type of data with functions optimized for it. There were some that performed analytics or cube analysis, and some that offered lightning-fast in-memory lookups or graph data. We got self-healing, self-organizing, self-indexing clouds of data nodes, with whatever the heck we wanted sitting on top of them. When the Internet boom hit, Oracle was the database of choice. During this last cloudburst we got 250 flavors of NoSQL. But Paul’s dream is getting closer to reality. When you assemble Hadoop with a stack of add-on modules, notably Apache Spark, you get pretty close. Want SQL? OK, you can have it. Or MapReduce. Deep analytics or memory-resident lookups. You want it, you can have it.
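To see what "SQL or MapReduce over the same data" means in miniature, here is a toy sketch in plain Python. This is not Spark: sqlite3 stands in for the SQL engine, and Python's map/reduce built-ins stand in for the MapReduce layer. The dataset and column names are invented for illustration; the point is that both query models produce the same answer from the same records.

```python
import sqlite3
from functools import reduce

# Toy dataset: (user, bytes) event records.
events = [("alice", 100), ("bob", 50), ("alice", 25), ("carol", 75), ("bob", 10)]

# --- The SQL view of the world (sqlite3 standing in for a SQL-on-Hadoop engine) ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, bytes INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)", events)
sql_totals = dict(conn.execute("SELECT user, SUM(bytes) FROM events GROUP BY user"))

# --- The same aggregation expressed as map/reduce-style functional code ---
def mapper(record):
    user, nbytes = record
    return (user, nbytes)  # emit key/value pairs

def reducer(acc, pair):
    user, nbytes = pair
    acc[user] = acc.get(user, 0) + nbytes  # sum per key
    return acc

mr_totals = reduce(reducer, map(mapper, events), {})

assert sql_totals == mr_totals  # same data, same answer, two query models
print(mr_totals)  # {'alice': 125, 'bob': 60, 'carol': 75}
```

In a real Spark stack the same duality holds at scale: the SQL query and the functional pipeline compile down to the same distributed execution over the same files.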

The point is that the demands on databases will always be in flux. Performance and scalability have always been core needs, but how we want to use data will continue to morph. The current generation of databases, typically built on Hadoop, is a veritable Swiss Army knife, to be formed and customized as we need it. There has never been a better time to be a database developer!


If you run a bug bounty program you know there is a lot more to it than most people consider when they start. Randy Westergren’s post on his experience with the United Airlines Bug Bounty Program offers some insight into what can happen. For example, when multiple researchers find the same critical flaw, the researchers who do not get paid can, and likely will, go public. Sure, this is bad behavior by the researcher, and your legal team can try to stop it, but you need to plan for this situation before it comes up. Second, it amazes me that what in-house developers consider a suitably fast release date for vulnerability fixes is often totally unacceptable to the research community. Both parties are developers by nature, but to one three months is lightning fast, while the other considers it criminally dangerous. You’ll need to set expectations going in. United Airlines was communicative, but in today’s world six months to patch is an eternity. Virtual patching techniques, API gateways, and Continuous Deployment allow many organizations to deal with these issues far more quickly. A bug bounty program is a great way to leverage the community of experts to help you find flaws your in-house team might never discover, but you need to have this effort fully planned out before you start.
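The idea behind virtual patching is simple: block the exploit pattern at a gateway or middleware layer while the slow backend fix works through its release cycle. Here is a minimal sketch as a WSGI middleware. The vulnerable parameter name and pattern are entirely hypothetical (this is not the actual United flaw); a real deployment would use a WAF or API gateway rule rather than hand-rolled middleware.

```python
import re

# Hypothetical known-bad request pattern: an insecure direct object reference
# via a "recordLocator" query parameter. Invented for illustration only.
BLOCK_PATTERN = re.compile(r"recordLocator=[A-Z0-9]{6}")

def virtual_patch(app):
    """WSGI middleware that rejects requests matching a known-bad pattern,
    shielding the vulnerable backend until the real fix ships."""
    def patched(environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if BLOCK_PATTERN.search(query):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked by virtual patch"]
        return app(environ, start_response)
    return patched

# Demo: wrap a trivial app and exercise both paths.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

guarded = virtual_patch(app)
statuses = []
def capture(status, headers):
    statuses.append(status)

guarded({"QUERY_STRING": "recordLocator=ABC123"}, capture)  # blocked
guarded({"QUERY_STRING": "flight=PHX"}, capture)            # passed through
print(statuses)  # ['403 Forbidden', '200 OK']
```

The rule can ship the same day the report is triaged, which is why these techniques close the gap between researcher expectations and enterprise release cycles.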

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Other Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts
