This opinion piece offers a series of best practices showing how real-time monitoring can lower data center power usage and total cost of ownership (TCO). From the column:
How Real-Time Monitoring Lowers Data Center Power and Operating Costs
Christine Burke is Senior Vice President of Worldwide Sales for RF Code.
Data centers cost a fortune to build and operate. Building them can cost about $7 million per megawatt, depending on where the center is located. Power, which accounts for 70-80 percent of ongoing operational costs, averages $0.0658 per kWh in the US, with prices ranging from a high of $0.1475 in Massachusetts to a low of $0.0473 in Montana. Add to that growing demand, thanks to things like blockchain, artificial intelligence, and the Internet of Things, that increases cost by at least 10 percent each year, according to Gartner. And those figures assume everything is working as planned; they don't account for operational inefficiency, equipment that is damaged or needs replacement, the costs of associated downtime and repairs, or the impact of lost data or lost business.
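To put the quoted average rate in context, a back-of-the-envelope calculation can be sketched as below. The 1 MW facility load is an illustrative assumption, not a figure from the column; only the per-kWh rate comes from the text.

```python
# Back-of-the-envelope annual power cost for a hypothetical 1 MW facility,
# using the average US rate quoted above. The constant load is an
# illustrative assumption for the sketch.
AVG_RATE_USD_PER_KWH = 0.0658   # US average, per the column
FACILITY_LOAD_KW = 1_000        # 1 MW of combined IT and cooling load (assumed)
HOURS_PER_YEAR = 24 * 365

annual_kwh = FACILITY_LOAD_KW * HOURS_PER_YEAR
annual_cost = annual_kwh * AVG_RATE_USD_PER_KWH
print(f"Annual energy: {annual_kwh:,.0f} kWh, cost: ${annual_cost:,.0f}")
# → Annual energy: 8,760,000 kWh, cost: $576,408
```

At the Massachusetts rate ($0.1475), the same load would cost more than double, which is why siting and energy efficiency dominate the operating-cost conversation.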
The C-suite cares about the bottom line, which is no surprise with this level of financial investment. When executives think about data centers, the concern is keeping costs in check while ensuring the data center provides the organization with real value. As the statistics above indicate, there is one key area all companies need to consider when thinking about costs: the energy used to power the center and cool the equipment. Add to that anything that can negatively impact operations, from end-of-life decommissioning to accidents, leaks, environmental issues, or inefficient power utilization caused by overcooling. These drain resources and add to operational costs, making many executives nervous about data centers as overhead.
Today's data centers rarely look the same for long. They are hives of activity, with equipment being updated, replaced, and removed as computing needs evolve and storage and server technology continues to improve. Each time a piece of equipment is touched, whether repaired, moved, or upgraded, it can change the data center's cost structure. This makes it impossible to keep up using manual processes to inventory assets and evaluate operational status.
While it's not possible for any organization that owns or operates a data center to eliminate these costs, it is possible to optimize the center to run as efficiently as possible, so the organization pays the least in energy costs, removes or replaces assets before they fail or become obsolete, and addresses any equipment or environmental problem immediately, before it has a chance to damage equipment or cause further trouble. Otherwise, costs can spiral out of control.
Best Practice: Real-time Monitoring
The best practice for keeping a handle on costs, tracking assets, and achieving this optimization is to monitor equipment in real time throughout its lifetime, from deployment through operations and maintenance to the moment the asset is decommissioned. This controls costs, ensures optimal performance of equipment and other resources, and, for many companies, helps demonstrate compliance with relevant standards such as the Payment Card Industry Data Security Standard (PCI DSS), the Health Insurance Portability and Accountability Act (HIPAA), or Service Organization Control (SOC) reports, which address the security and availability of data.
Typical data center costs impacted by real-time monitoring include:
- Energy use for temperature and humidity control
- Purchase of new equipment to replace obsolete assets
- Purchase of new equipment to replace equipment damaged by leaks, failure, or changes in environmental conditions
- Downtime caused by nonworking equipment
- Purchase of new equipment to expand capacity when existing equipment is not optimized and fully utilized
Real-time monitoring gives data center managers the ability to understand each of these issues and how they impact operations and cost.
As the biggest share of operating costs, energy use is a key area where organizations can optimize to cut costs. Monitoring metrics like power use and fan speeds helps data center managers understand how the center operates and where to make adjustments. Real-time updates about environmental conditions enable identification of potential issues before they escalate into fires, overheated equipment, or water or coolant leaks. And there is no longer a need to overcool the data center for fear of damage from overheating; with real-time sensors tracking temperature and equipment performance, data center operators are notified immediately of drastic temperature changes.
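The alerting logic described above can be sketched minimally as follows. The sensor feed, rack names, and `check_readings` helper are all hypothetical; a real monitoring platform would supply its own API. The 27 °C threshold is an illustrative choice roughly aligned with the upper end of ASHRAE's recommended inlet-temperature range, not a vendor default.

```python
# Minimal sketch of threshold-based temperature alerting.
# All names and data here are illustrative, not a real product API.
from dataclasses import dataclass

@dataclass
class Reading:
    rack_id: str
    temp_c: float  # inlet temperature in degrees Celsius

# Illustrative alert threshold, near the top of ASHRAE's recommended range.
ALERT_THRESHOLD_C = 27.0

def check_readings(readings):
    """Return the IDs of racks whose inlet temperature exceeds the threshold."""
    return [r.rack_id for r in readings if r.temp_c > ALERT_THRESHOLD_C]

readings = [
    Reading("rack-01", 24.5),
    Reading("rack-02", 29.1),  # over threshold: should trigger an alert
    Reading("rack-03", 26.8),
]
print(check_readings(readings))  # → ['rack-02']
```

In practice, the same readings that drive alerts also feed cooling setpoints, which is what lets operators raise temperatures safely instead of overcooling.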
Monitoring equipment in real time enables companies to plan for the lifecycle of their assets, including removing or replacing equipment before it becomes obsolete, and to schedule replacements so as to minimize operational disruptions and additional labor costs. While equipment keeps getting more powerful and more efficient, new equipment does not always reduce power consumption, so it's critical to understand how the equipment in the data center, new and old, impacts power and cooling demands. Another key money saver is the ability real-time monitoring gives companies to identify at-risk assets before they are damaged and cause more significant problems, such as downtime, lost data, and the need to repair or replace equipment.
Downtime is a significant cost for companies and in some circumstances can lead to loss of data or customers. Replacing equipment in a timely manner and addressing potential threats are the key ways to prevent unexpected downtime.
Many data center operators are not aware of the efficiency or utilization rate of their current equipment, which can be trouble when there is a need to expand. Rather than optimizing existing equipment and getting maximum value from their investments, companies purchase new equipment to expand capacity, adding to space, equipment, and power costs. Real-time monitoring and understanding of existing equipment and its utilization reduces or eliminates this kind of overspending.
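The utilization analysis described above can be sketched as below. The server names, utilization figures, and the 20 percent cutoff are all assumptions for illustration; real utilization data would come from the monitoring platform.

```python
# Illustrative sketch: flag servers whose average utilization is low enough
# that consolidating their workloads could defer a new capacity purchase.
# The cutoff and the sample data are assumptions, not real measurements.
LOW_UTILIZATION_PCT = 20.0

avg_utilization = {   # average CPU utilization over a reporting period, percent
    "srv-a": 12.0,
    "srv-b": 63.5,
    "srv-c": 8.4,
}

consolidation_candidates = sorted(
    name for name, pct in avg_utilization.items() if pct < LOW_UTILIZATION_PCT
)
print(consolidation_candidates)  # → ['srv-a', 'srv-c']
```

Workloads on the flagged machines could be migrated onto better-utilized hardware, freeing capacity that would otherwise be bought new.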
Real-time monitoring matters because it automates what is otherwise a clunky, time-consuming, often inaccurate, and quickly outdated manual process. It enables companies to track the exact location, age, performance, maintenance history, and condition of data center assets, and to gain critical insight into the environmental conditions that impact operations.
The end result is financial savings – from reduced power, lower storage costs, more efficient use of energy resources, more efficient use of data center staff, and the ability to identify and address costly potential problems when or before they occur.
Whether an organization manages data in-house, across different buildings within a complex, across different cities, states, or countries, or uses private cloud or colocation to reduce costs, it must manage its data centers to ensure they run efficiently in terms of both costs and resources.
So much of an organization's business resides in the data center. To protect the company and its investment, it is critical to have real-time, continuous inspection and analysis of data center assets, seeing both the details and the big picture. With the high price tag of running a data center, any cost savings can give companies a real competitive advantage.
Find the full Industry Perspective at Data Center Knowledge.