Welcome to part four of the series on Best Practices in deploying eCommerce solutions to the cloud. In the prior three posts, I’ve discussed Horizontal vs. Vertical Scalability, Performance as a Business Requirement, and Automating Scalability. This post will focus on the importance of proper planning for system capacity.
Best Practice 4: Capacity Planning
In the days before cloud enablement, capacity planning meant making sure you could respond to expected peak demand without over-investing in idle hardware. A fine line existed between erring on the side of savings and erring on the side of extra capacity. While the cloud eases the burden of capacity planning, it certainly doesn’t remove this step from the planning process.
Deploying in the cloud buys you the luxury of scaling up and down with demand and paying only for the computing power you actually need. That’s a great luxury compared to the recent past. However, when you’re planning a cloud deployment you still have to understand each layer, each component, and how the environment will actually scale up or down. To take advantage of the cloud’s scalability you have to be able to respond quickly; otherwise, you might as well deploy in a traditional colocated facility. Asking eight questions will help prepare your cloud-based infrastructure to scale quickly in response to demand:
- What are the layers / tiers of the application?
- Can a configuration be prepared to scale at each layer?
- What manual configuration, if any, is required to scale?
- Is it possible to automate deployment of additional architectural components in each layer with a tool such as Chef?
- What are the key metrics that should be monitored for each component?
- What are the times of peak demand?
- What is demand on average?
- Has the marketing plan been shared with IT to help predict periods of increased demand?
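To make the monitoring and automation questions concrete, here’s a minimal sketch of the kind of per-tier scaling decision an automation tool would make from monitored metrics. The metric names and thresholds are my own illustrative assumptions, not prescriptions; in practice you would tune them per tier from observed behavior.

```python
def scaling_decision(cpu_percent, request_rate, max_requests_per_node,
                     node_count, scale_up_cpu=70.0, scale_down_cpu=30.0):
    """Return 'scale_up', 'scale_down', or 'hold' for a single tier.

    cpu_percent and request_rate are the tier's current monitored metrics;
    max_requests_per_node is its measured per-node ceiling. The 70/30 CPU
    thresholds are placeholders for illustration only.
    """
    tier_ceiling = max_requests_per_node * node_count
    # Scale up if CPU is hot or the tier is near its request-handling ceiling.
    if cpu_percent >= scale_up_cpu or request_rate >= 0.8 * tier_ceiling:
        return "scale_up"
    # Scale down only when both signals are comfortably low,
    # and always keep at least one node running.
    if (cpu_percent <= scale_down_cpu and node_count > 1
            and request_rate <= 0.4 * tier_ceiling):
        return "scale_down"
    return "hold"
```

A rule like this is only useful if the deployment step it triggers is itself automated (with Chef or a similar tool), which is exactly why the questions above ask about manual configuration at each layer.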
Once you know the answers to that minimal set of questions, I think it’s a good idea to err on the side of caution and deploy a bit of extra capacity. Why? Because it acts as a shock absorber when unplanned demand occurs, buying a little time to respond without risking the ill effects of poor response time. (If you need to refer back, part two of this series has some good data on the damage poor response time can do.)
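As a back-of-the-envelope illustration of that shock absorber, baseline capacity can be expressed as expected peak demand plus a safety margin. The function and numbers below are mine, purely for illustration; the 20% headroom figure is an assumption, not a recommendation.

```python
import math

def baseline_capacity(peak_rps, capacity_per_node_rps, headroom=0.2):
    """Nodes needed to serve expected peak plus a headroom buffer.

    headroom=0.2 provisions 20% above expected peak as a shock absorber
    for unplanned demand. Always keeps at least one node.
    """
    required_rps = peak_rps * (1 + headroom)
    return max(1, math.ceil(required_rps / capacity_per_node_rps))
```

For example, an expected peak of 1,000 requests per second on nodes that each handle 250 would call for five nodes rather than the bare-minimum four.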
I’ve had several inquiries over the past few years about how to leverage the cloud if an application is currently hosted in a traditional environment. There are several ways to take advantage of the inherent flexibility of the cloud in parallel with an existing environment:
- Offload portions of your application to cloud-based services and repurpose existing infrastructure for core services. For example, if the application server tier of an existing environment is integrated with company infrastructure through a VPN, move the web server tier to the cloud and repurpose the newly freed servers as application servers to better respond to increased demand.
- Create test environments on demand for performance testing, or for environments that support User Acceptance Testing.
Above all, the most important thing is planning ahead. I’ve heard people say that capacity planning “isn’t necessary” when deploying to the cloud, but that’s only true if you don’t care about estimating the minimum required environment and how to scale from there. It’s also necessary if you care about estimating cost, which I’m sure the average CFO does.
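On the cost point: once you know average and peak demand, even a crude model makes the monthly bill estimable. The rate structure below (a flat hourly rate, baseline nodes running all month, extra nodes only during peak hours) is a simplifying assumption for illustration, not any provider’s actual pricing.

```python
def monthly_cost_estimate(baseline_nodes, peak_nodes, peak_hours_per_month,
                          hourly_rate, hours_per_month=730):
    """Rough monthly spend: baseline runs all month, burst nodes only at peak.

    hourly_rate is a placeholder flat per-node rate; real cloud pricing
    varies by instance type, region, and purchasing model.
    """
    baseline_cost = baseline_nodes * hours_per_month * hourly_rate
    burst_cost = (peak_nodes - baseline_nodes) * peak_hours_per_month * hourly_rate
    return baseline_cost + burst_cost
```

A model this simple is still enough to show a CFO the difference between paying for peak capacity around the clock and paying for it only when demand requires it.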
The cloud makes a great deal more flexibility, scalability and efficiency possible, but it still needs to be managed effectively. So when you’ve got the cloud in mind, make sure your head’s not in the clouds and take the steps to prepare just as if you were planning a traditional infrastructure environment. Your CIO, CFO — and more importantly, the customer — will thank you for it.
To see the original complete presentation given with AWS, view the webinar here and tell me what you think in the comments below.