Energy Logic 2.0: New Strategies for Cutting Data Center Energy Costs and Boosting Capacity
When Energy Logic was introduced in 2007, data center energy efficiency was just emerging as a serious issue. Increases in data center density and capacity were driving up energy bills while concerns over global warming had spurred a U.S. EPA report on data center energy consumption. The industry responded with a number of tactical approaches, but no cohesive strategy for optimizing efficiency.
Energy Logic filled that gap. It bucked the conventional wisdom of the time, which focused on the energy usage of data center support systems, most notably cooling, while virtually ignoring the efficiency of the IT systems that consume more than half of data center energy and drive the demand for cooling and other support systems. (This oversight is perpetuated by the current reliance on PUE, which is discussed later in this paper in the section PUE Analysis.)
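To make the PUE blind spot concrete: PUE divides total facility energy by IT energy, so an improvement in IT efficiency can make PUE look *worse* even as total consumption falls. The following sketch uses purely illustrative numbers (not figures from this paper) to show that effect.

```python
# PUE = total facility energy / IT equipment energy.
# Illustrative numbers only; not taken from the Energy Logic model.
def pue(it_kw: float, support_kw: float) -> float:
    """Power Usage Effectiveness for a given IT and support load."""
    return (it_kw + support_kw) / it_kw

# Before: inefficient IT load with proportionally smaller overhead share.
before = pue(it_kw=600.0, support_kw=400.0)   # 1000 kW total

# After: consolidation halves IT energy; fixed overhead shrinks less.
after = pue(it_kw=300.0, support_kw=250.0)    # 550 kW total

# Total energy fell from 1000 kW to 550 kW, yet PUE rose
# (from about 1.67 to about 1.83) -- the metric penalizes
# exactly the IT-side savings Energy Logic targets.
print(round(before, 3), round(after, 3))
```

This is why a strategy judged on PUE alone can overlook the IT systems that consume more than half of data center energy.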
Instead, Energy Logic took an "inside-out" approach that drives improvement in IT efficiency as well as the efficiency of support systems. Through this more strategic approach, Energy Logic was able to leverage the cascade effect that occurs when lower energy consumption at the component and device level is magnified by reduced demand on support systems. Savings from the original Energy Logic were calculated by constructing a detailed statistical model of a 5,000 square foot (464.5 square meter) data center housing 210 server racks with an average rack density of 2.8 kW. Each Energy Logic strategy was then applied to the model to calculate its impact on energy consumption.
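A minimal sketch of the cascade effect in a facility like the one modeled above (210 racks averaging 2.8 kW). The support-system multiplier here is an assumed illustrative value, not a figure from the Energy Logic model.

```python
# Modeled facility from the text: 210 racks at an average 2.8 kW each.
RACKS = 210
AVG_RACK_KW = 2.8
it_load_kw = RACKS * AVG_RACK_KW          # total IT load: 588 kW

# Assumption for illustration: every watt of IT load drives an
# additional 0.9 W of support-system load (cooling, power
# conversion, lighting). This ratio is hypothetical.
SUPPORT_PER_IT_WATT = 0.9

def facility_load(it_kw: float) -> float:
    """Total facility load given the IT load, under the assumed ratio."""
    return it_kw * (1 + SUPPORT_PER_IT_WATT)

baseline = facility_load(it_load_kw)

# A 10% IT-side saving cascades into the support systems too,
# so the facility-level saving is larger than the IT-level saving.
improved = facility_load(it_load_kw * 0.9)
print(round(baseline - improved, 1))      # -> 111.7 (kW saved vs. 58.8 at the IT level)
```

The point of the sketch: a 58.8 kW reduction at the IT level becomes a 111.7 kW reduction at the facility level under the assumed ratio, which is the magnification Energy Logic calls the cascade effect.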