
The final post in this series on cloud best practices ties everything together.  The cloud gives you tremendous flexibility: you've planned for scalability and performance, brought a level of automation to deploying new servers, and done proper capacity planning.  The umbrella that all of this lives under is:

Best Practice 5: Managing Environments

Cloud environments enable some very enticing possibilities, such as more variable infrastructure investment, freedom from long-term lock-in to particular servers, and the ability to create temporary environments as needed without huge overhead costs.  That said, the key to getting the most out of the cloud is proper analysis and planning. By following the best practices outlined in the first four parts of this series, you'll have more than a few bases covered.

This best practice is focused on environments, which I've pluralized intentionally.  The topics to date have focused more on production environments, since that's the touchpoint for your customers.  Other environments necessary for any software development project include Development, Test, Staging and perhaps UAT.  Depending on your needs, at least two of those environments are necessary, perhaps all four, or even more.

Development and test environments are needed throughout the development cycle and can be easily deployed on the cloud.  This is a great use of cloud technology, as the environments can be easily resized (e.g., moving the database server from an 8-core machine to a 16-core machine) and reconfigured as needed.
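To make the resizing idea concrete, vertical scaling often amounts to stepping a server up to the next size in a family of instance tiers.  The tier list and function below are a minimal, hypothetical sketch, not any cloud provider's actual sizing API:

```python
# Hypothetical instance-size tiers, expressed as core counts.
# Real providers have their own instance families and names.
TIERS = [2, 4, 8, 16, 32]

def next_tier(current_cores: int) -> int:
    """Return the smallest tier larger than the current core count,
    e.g. scaling a database server from 8 cores up to 16."""
    for cores in TIERS:
        if cores > current_cores:
            return cores
    raise ValueError("already at the largest tier")
```

In practice, the value returned by a function like this would feed into a provider API call (stop the instance, change its type, restart it), but the selection logic itself is this simple.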

Staging and UAT environments may only be needed periodically, while development is between code freeze and deployment.  This is another great opportunity to take advantage of the cloud's inherent pay-for-use model.  Using a predesigned configuration or a tool like Chef, you can provision Staging, UAT and similar environments on a schedule so that valuable hardware doesn't sit idle.
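The scheduling decision itself can be a very small piece of logic.  The sketch below assumes a hypothetical policy (Staging runs weekdays, 08:00 to 18:00); the window and environment name are illustrative, not a feature of Chef or any provider:

```python
from datetime import datetime, time

# Hypothetical uptime window for the Staging environment:
# weekdays only, 08:00 to 18:00.
STAGING_WINDOW = (time(8, 0), time(18, 0))
WEEKDAYS = range(0, 5)  # Monday=0 .. Friday=4

def staging_should_run(now: datetime) -> bool:
    """Return True if the Staging environment should be up right now."""
    start, end = STAGING_WINDOW
    return now.weekday() in WEEKDAYS and start <= now.time() < end
```

A cron job could evaluate this every few minutes and call Chef or the provider's API to provision or tear down the environment accordingly, so you only pay for the hours Staging is actually in use.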

One additional use for the cloud is to test integration with legacy and third-party systems without affecting other parallel efforts in development.  By provisioning a temporary development server that a few developers can work on to test a tricky integration, you can isolate issues caused by early integration work and save core development from some potential disruption.  This wasn’t always an option in traditional environments and it may not always be necessary, but it’s nice to have the option available at — literally — a moment’s notice.
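Temporary environments like these are easy to spin up and just as easy to forget about, so it helps to pair them with a teardown rule.  As a hedged sketch, assuming a hypothetical convention of recording each temporary server's creation time and retiring it after a fixed time-to-live:

```python
from datetime import datetime, timedelta

# Hypothetical policy: temporary integration-test servers live one week.
TTL = timedelta(days=7)

def expired(servers: dict, now: datetime) -> list:
    """Given a mapping of server name -> creation time, return the
    names of temporary servers that have outlived their TTL."""
    return [name for name, created in servers.items() if now - created > TTL]
```

A nightly job could feed this the current inventory and terminate whatever it returns, keeping the convenience of moment's-notice environments from turning into a pile of forgotten, billable servers.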

While it's been said that every cloud has a silver lining, that presupposes the silver lining is something found in spite of the cloud.  Without proper planning and preparation, the cloud is likely to feel more like a thunderstorm into which you've inadvertently flown a small aircraft.  Not a lot of fun.  When you don't properly plan for the cloud, you'll quickly discover that responses to changes in demand amount to very quick decisions to do the wrong thing.  It would be like deciding to compete in a Formula 1 race without ever having driven the car, without a proper pit crew, and without practice laps.

I hope you’ve enjoyed this series on best practices for deploying eCommerce solutions in the cloud.  Possibilities and capabilities available in the cloud change quickly so I’ll continue to blog on cloud-based topics in the future based on your comments and any thoughts and questions you have.  Always feel free to contact me to continue the discussion!

To see the original complete presentation given with AWS, view the webinar here and tell me what you think in the comments below.