
Pooling the Resources: The Dynamics of Data Center Consolidation

Bill Kleyman
05/01/2013

The Federal Data Center Consolidation Initiative was enacted in 2010 with the aim of closing 40 percent of the government’s 2,900 data centers by 2015, saving up to $5 billion in the process. Here data center expert Bill Kleyman examines the keys to success for the government in hitting its goals, along with the challenges facing such a task.

Data center consolidation is never an easy task. We're not just consolidating servers -- we're also re-architecting workloads, applications, and the physical components beneath them.

The traditional design revolved around a one-to-one mentality, where applications and services were dedicated to a single machine.

Now, with virtualization and private cloud technologies, server deployment and resource sharing have undergone a complete paradigm shift.

Related: Consolidating the Future: Challenges Facing Federal Data Centers

With so many data centers - each potentially serving a unique and important process - one of the first and most important steps will be the planning process.

This may sound generic, but completely understanding what each rack and server is hosting will be the only way to efficiently move those workloads to a consolidated system.

Simply building the consolidated infrastructure first and then planning the migration around it will likely result in poor resource utilization, or even misallocation.

Creating an environment where more services run within a single data center requires careful architecture around resources.

Related: Keys to Consolidation: Achieving Data Center Excellence

This will range from optimal cooling, power efficiency, and rack management to the types of systems deployed within the racks. With nearly 1,200 data centers to work with, project managers and data center engineers need a well-documented understanding of the following:

1. What is currently being hosted in the data center and what resources are being used?

2. Does it make sense to migrate the workload via a physical-to-virtual (P2V) method? (If yes - how difficult is the migration? If no - what is the workload's upgrade/migration path?)

3. What resources are required within the new data center to support consolidated, high-density functions?
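The three questions above can be captured in a simple inventory model. Here's a minimal sketch in Python - the field names, headroom factor, and sample figures are hypothetical illustrations, not part of any official FDCCI tooling:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    """One workload slated for consolidation assessment (hypothetical fields)."""
    name: str
    cpu_cores_used: int
    ram_gb_used: int
    p2v_candidate: bool   # can it be migrated physical-to-virtual?
    migration_notes: str  # difficulty, or upgrade path if P2V is ruled out

def required_capacity(workloads, headroom=1.25):
    """Sum resource needs for the target data center, with growth headroom."""
    cores = sum(w.cpu_cores_used for w in workloads) * headroom
    ram = sum(w.ram_gb_used for w in workloads) * headroom
    return cores, ram

inventory = [
    Workload("payroll-db", 8, 32, True, "straightforward P2V"),
    Workload("legacy-erp", 16, 64, False, "OS too old; rehost on a new VM"),
]
print(required_capacity(inventory))  # (30.0, 120.0)
```

The point of keeping migration notes alongside raw resource numbers is that question 2 - whether P2V even makes sense - often determines the answer to question 3.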

The move by 2015 is feasible. The challenge will be to efficiently consolidate 1,186 data centers into significantly fewer. Remember, the goal here isn't only to consolidate and shut down inefficient data centers. Program managers also want to reduce management costs and build a platform that can scale for the future.

The hardware that goes into a data center is very similar across the private and government sectors. Each will require racks, cooling, power, and security.


From there, the biggest difference is the workload these data centers carry. One of the main challenges faced by numerous organizations is sizing.

When consolidating systems, administrators need to deploy more efficient and agile platforms. This means deploying high-density computing with intelligent multi-tenancy capabilities.

The modern data center is heavily virtualized and runs numerous different virtualization technologies: application, desktop, and even security virtualization.

That means the traditional approach of assigning one machine to one resource no longer applies.

For example, VDI will require more resources than simply publishing an application. Likewise, a remote access server may need more resources than a virtual server used for licensing.
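A rough way to see why these workload types can't share one sizing rule is to give each a per-instance resource profile and aggregate the mix. The figures below are illustrative assumptions for the sketch, not vendor guidance:

```python
# Hypothetical per-instance resource profiles for common workload types.
PROFILES = {
    "vdi_desktop":    {"vcpu": 2, "ram_gb": 4, "iops": 25},
    "published_app":  {"vcpu": 1, "ram_gb": 2, "iops": 5},
    "remote_access":  {"vcpu": 4, "ram_gb": 8, "iops": 15},
    "license_server": {"vcpu": 1, "ram_gb": 2, "iops": 2},
}

def host_demand(counts):
    """Aggregate demand for a workload mix, e.g. {'vdi_desktop': 100}."""
    total = {"vcpu": 0, "ram_gb": 0, "iops": 0}
    for kind, n in counts.items():
        for resource, per_instance in PROFILES[kind].items():
            total[resource] += per_instance * n
    return total

print(host_demand({"vdi_desktop": 100, "license_server": 1}))
# {'vcpu': 201, 'ram_gb': 402, 'iops': 2502}
```

Note how 100 desktops dominate the demand while the license server barely registers - exactly the asymmetry the one-machine-per-resource model fails to capture.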

At a management level, administrators must understand where and how resources are being allocated. This means the assignment of appropriate technologies as they relate to the IT function.

In the case of VDI, you may need to assign a converged storage system that's capable of offloading heavy IOPS requirements onto an SSD or flash array.
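A back-of-the-envelope calculation shows why VDI pushes storage toward flash. The per-desktop IOPS figures, boot-storm multiplier, and per-disk throughput below are rough assumptions for illustration only:

```python
# Back-of-the-envelope VDI storage sizing (assumed figures, not a standard).
desktops = 500
steady_iops_per_desktop = 10  # steady-state IOPS per desktop (assumption)
boot_multiplier = 5           # boot/login storms can multiply IOPS several-fold

steady = desktops * steady_iops_per_desktop  # 5,000 IOPS steady state
peak = steady * boot_multiplier              # 25,000 IOPS during a boot storm

# A 10K RPM SAS spindle delivers on the order of 140 IOPS; a single SSD
# can deliver tens of thousands.
sas_disks_needed = -(-peak // 140)  # ceiling division

print(steady, peak, sas_disks_needed)  # 5000 25000 179
```

Servicing the boot-storm peak with spinning disks alone would take well over a hundred spindles, which is why offloading that burst to an SSD or flash tier is usually the economical choice.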

Remember, resources are always finite and often very expensive. So with consolidation there needs to be a very clear understanding around the workload that's being consolidated.

Improper resource provisioning can be extremely costly. If an entire blade chassis is dedicated to virtual desktops - and it's undersized - there will be serious performance and productivity issues.

Whether you're consolidating five data centers or 1,200, the underlying systems need to be planned out and sized very well. When both data center and infrastructure sizing are done right, administrators have the ability to support users and still maintain a consolidated platform.

Many of the issues raised in this article will be discussed at IDGA's Data Center Consolidation summit, later this month. For full details, go to www.DCCEvent.com

