Putting all your eggs in one basket is always a little disconcerting. Anyone who works with risk is always wary of reducing options. So I am never surprised when clients ask about alternative cloud providers and try to design cloud-agnostic applications.

Personally I take a different view. Designing cloud-agnostic applications is like building an entirely self-sufficient home because you don’t want to be locked into the local utilities, weather conditions, or environment. Sure, you could try, but the tradeoffs would be immense. Especially cost. The key for any such project is to understand the risk of lock-in, and then select appropriate techniques to minimize the risk while still providing the most benefit from the platform you are using.

The only way to really get the cost savings and performance advantages of the cloud is to design specifically for the cloud you are working on. For example, use their load balancers and auto-scaling groups rather than designing your own. (Don’t worry, I’ll get to containers in a second.) If you are building or bringing all your own software to the cloud platform, at a certain point, why move to the cloud at all? Practically speaking, you will likely reduce your agility, resiliency, and economic benefits.

I am talking in generic terms, but I have designed and reviewed some of these deployments, so this isn’t just analyst handwaving. For example, one common scenario is data transfer for batch analysis. The cloud-agnostic way is to set up a file server at your cloud provider, SFTP the data in, and then send it off to analysis servers. The file server becomes a major weak point (if it goes down, so does everything), and it likely uses the cloud provider’s most expensive storage (block volumes). And all the analysis servers probably need to be running all the time (the file server certainly does), also racking up charges.

The cloud-native approach is to transfer the data directly to object storage (e.g., Amazon S3), which is typically the cheapest storage option and highly resilient. Amazon even has an option to automatically move that data into its ridiculously cheap Glacier long-term storage when you are done. Then you can use a tool like Lambda to launch analysis servers (using spot instance pricing, which can shave off another 40% or more) and link everything together with a cloud message queue, where you only pay when you actually pump data through.

Everything spins up when data appears and shuts down when it’s finished; you can load as many simultaneous jobs as you want but still pay nearly nothing when you have no active jobs.
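To make that concrete, here’s a minimal sketch of the event-driven glue in Python, assuming a Lambda function subscribed to S3 “object created” notifications (the queue URL and bucket names are hypothetical, and the actual AWS calls are shown as comments so the sketch stays self-contained):

```python
# A Lambda handler wired to S3 object-created events. When a data file lands
# in the bucket, we pull its location out of the standard S3 event structure
# and hand a job message to the queue that feeds the spot-priced analysis
# workers. Nothing here runs, or costs anything, until data actually arrives.
#
# The real wiring would look something like this (QueueUrl is hypothetical):
# import json
# import boto3
# sqs = boto3.client("sqs")
# JOB_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/analysis-jobs"

def handler(event, context):
    jobs = []
    for record in event["Records"]:  # standard S3 event notification shape
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        job = {"bucket": bucket, "key": key}
        # sqs.send_message(QueueUrl=JOB_QUEUE_URL, MessageBody=json.dumps(job))
        jobs.append(job)
    return jobs
```

The workers on the other end just poll the queue, process, and terminate, which is what makes the “pay nearly nothing when idle” math work.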

That’s only one example.

But I get it – sometimes you really do need to plan for at least some degree of portability. Here’s my personal approach.

I tend to go all-in on native cloud features (these days almost always on AWS). I design apps using everything Amazon offers, including SQS, SNS, KMS, Aurora, DynamoDB, etc. However…

My core application logic is nearly always self-contained, and I make sure I understand the dependency points. Take my data processing example: the actual processing logic is cloud-agnostic. Only the file transfer and event-driven mechanisms aren’t. Worst case, I could transfer to another service. Yes, there would be overhead, but no more than designing for and running on multiple providers. Even if I used native data analysis services, I’d just ensure I’m good at documenting my logic and code so I could redo it someplace else if needed.
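Here’s a sketch of what “self-contained core logic” means in practice (the names are mine, not a real library): the processing function knows nothing about any cloud provider, and only a thin, replaceable adapter at the edge does.

```python
from typing import Callable, Iterable, List

# Cloud-agnostic core: pure logic, no provider imports. This is the part
# you could lift to another platform unchanged.
def summarize(records: Iterable[dict]) -> dict:
    total = 0
    count = 0
    for r in records:
        total += r["value"]
        count += 1
    return {"count": count, "total": total}

# Thin, replaceable edge: the only code that knows where data comes from.
# On AWS this might fetch from S3 via boto3; elsewhere it could read a
# local file or an HTTP endpoint. Swapping providers means rewriting only
# this seam, not the logic above.
def run_job(fetch: Callable[[], List[dict]]) -> dict:
    return summarize(fetch())
```

So `run_job(lambda: [{"value": 3}, {"value": 4}])` returns `{"count": 2, "total": 7}` regardless of which cloud the fetch adapter talks to. That seam is the documented dependency point.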

But what about containers? In some cases they really can help with portability, but even with containers you will likely still lock into some of your cloud provider’s proprietary features. For example, it’s just about suicidal to run your database inside containers. And containers need to run on top of something anyway. And certain capabilities simply work better in your provider’s native implementation than in a container.

Be smart in your design. Know your lock-in points. Have plans to move if you need to. Microservices (or mini-services) are a great design pattern for knowing your dependency points. But in the end, if you aren’t using nearly every little tweak your cloud provider offers, you are probably spending more, breaking more often, and moving slower than the competition that does.

I can’t move my house, but as long as I hit a certain square footage, my furniture fits just fine.
