Read Zen Kishimoto’s Interview of Sherman Ikemoto


Below is an interview with Sherman Ikemoto, sharing his perspective on the data center technology industry.

I met and talked with Sherman Ikemoto four years ago at the mini Future Facilities (FF) conference in 2011. Before that, my understanding was that FF was a computational fluid dynamics (CFD) company focused on the data center market. It is well known that cooling takes a significant portion of energy (roughly 33%) in data center operations, so focusing on CFD made a lot of sense then and still makes sense now.

At that conference, FF announced its new product suite, 6SigmaDCX, to expand its CFD focus into comprehensive modeling of a data center with attention to other factors like IT utilization, capacity, and efficiency. After all, a complete picture of a data center cannot be obtained by paying attention to only one factor, even if it is a major one.

Sherman Ikemoto, Director, Future Facilities Ltd.

The following is a summary of my discussion with Sherman, which was kept at a high level. There are numerous whitepapers and media reports on their 6SigmaDCX and ACE assessment. I plan to dig into their technologies and write about them in later blogs.

What is FF?

Sherman explained that FF is an engineering software company that develops the 6SigmaDCX platform for design and operational planning of data centers. Specifically, 6SigmaDCX addresses the engineering challenge of supporting a dynamic IT configuration with a fixed data center infrastructure.

The data center market has a few subcategories like Data Center Infrastructure Management (DCIM), Data Center Services Optimization (DCSO), and IT Service Management (ITSM; yes, IT plays a big role in a data center). Their relationships may be described as below.

ITSM (by Wiki) is defined as

the entirety of activities – directed by policies, organized and structured in processes and supporting procedures – that are performed by an organization or part of an organization to plan, deliver, operate and control IT services offered to customers.

DCSO (by 451 research) is to:

extend the capabilities of DCIM to manage both physical and virtual assets within the datacenter and across geographically dispersed facilities. DCSO components include core DCIM features plus datacenter service management (DCIM integrated with ITSM); energy optimization (automated server power management, transactive power management, etc.); datacenter business planning; and service-based costing.

DCIM (by Wiki):

a category of solutions which were created to extend the traditional data center management function to include all of the physical assets and resources found in the Facilities and IT domains.

I asked Sherman whether FF falls into one of these categories. His answer was that the established subcategories of the data center market (ITSM, DCSO, DCIM) address the process and data gaps between the IT and facility subsystems. He added that only 6SigmaDCX addresses the engineering gap between the two subsystems (i.e., IT and OT).

He told me FF provides engineering simulation for design and operational planning of data centers. That categorization is a mouthful, but market research companies like Gartner have yet to define a new term to describe this category. Sherman did not have a good term either. Without a short and catchy term, it is hard to promote a new category, which Sherman agreed with. Any takers for a new term? Maybe ESDOP?

Integration with other types of tools

FF does not work alone but integrates with other tools like those for DCIM. DCIM tools cover several different areas like monitoring and asset management. FF integrates with DCIM solution companies like ABB, Nlyte, RF Code, and No Limits.

What do FF tools do?

Their tools collectively serve as a data center planning platform that allows engineering simulation of a real data center. Because of this, “what if” scenarios can be tested at various points in the data center’s lifetime before changes are physically made. Such scenarios may include what happens when a new set of IT equipment is added, when some cooling equipment is moved from one location to another, or when new space is added. In short, it is vital to know what may happen before any physical changes are made.
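To make the idea concrete, here is a minimal, hypothetical sketch (in Python, not FF’s software) of the bookkeeping behind a “what if” check: before adding new IT equipment, compare the projected load against the room’s power and cooling headroom. The function name, parameters, and numbers are all made up for illustration; tools like 6SigmaDCX go much further, using full CFD simulation of airflow and temperature rather than simple budgets.

```python
# Toy "what if" check: does a planned IT addition fit within the remaining
# power and cooling headroom of a room? (Illustrative only; real planning
# tools use CFD simulation, not fixed budgets.)

def what_if_addition(it_load_kw, cooling_capacity_kw, power_capacity_kw,
                     new_equipment_kw):
    """Return whether the planned addition fits, plus the remaining headroom."""
    projected_load = it_load_kw + new_equipment_kw
    # Simplifying assumption: every IT watt becomes heat the cooling must remove.
    fits_cooling = projected_load <= cooling_capacity_kw
    fits_power = projected_load <= power_capacity_kw
    return {
        "fits": fits_cooling and fits_power,
        "cooling_headroom_kw": cooling_capacity_kw - projected_load,
        "power_headroom_kw": power_capacity_kw - projected_load,
    }

# Example: a room fed with 600 kW and cooled for 550 kW, currently running 480 kW of IT.
print(what_if_addition(it_load_kw=480, cooling_capacity_kw=550,
                       power_capacity_kw=600, new_equipment_kw=90))
# -> {'fits': False, 'cooling_headroom_kw': -20, 'power_headroom_kw': 30}
```

Even this trivial example shows why planning ahead matters: the power budget would accept the new equipment, but the cooling budget would not.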

ACE metrics

FF proposed to consider data center operations from three pillars: availability, capacity, and efficiency (ACE). The efficiency pillar is represented by PUE, now the de facto standard for energy efficiency in data center operations, and one that can be computed automatically. But PUE alone misses the IT and capacity perspectives. According to Sherman, the engineering performance of a data center system is defined by the ACE score.
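For reference, PUE is defined by The Green Grid as total facility energy divided by IT equipment energy; a PUE of 1.0 would mean every watt goes to IT. A minimal illustration (the numbers are made up):

```python
def pue(total_facility_energy_kwh, it_equipment_energy_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Example: 1,500 kWh consumed by the whole facility, of which 1,000 kWh reached IT.
print(pue(1500, 1000))  # 1.5 -> 0.5 kWh of overhead (cooling, losses) per IT kWh
```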

In the past, a few metrics have been proposed that consider IT utilization and efficiency as part of data center energy efficiency. A data center exists to provide services, and those services are delivered by IT equipment. For this, one metric I can think of is Corporate Average Data Center Efficiency (CADE).

Simple IT utilization is straightforward because a modern server can measure its own utilization. However, IT efficiency that incorporates the notion of “useful work,” as defined in the data center energy productivity metric (DCeP) by The Green Grid (TGG), is harder to deal with. It is important to assess whether your servers are running to produce “useful work,” because you can run a lot of unnecessary applications on many servers with high utilization but little “useful work.” There is not yet a consensus on a good way to estimate “useful work,” so staying with simple utilization may be a good idea for now.
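The distinction can be shown with a toy calculation. Plain utilization only says how busy a server is; DCeP, as defined by TGG, divides useful work produced by the energy consumed to produce it, so busy servers doing unnecessary work score poorly. The unit of “useful work” below (completed transactions) is purely hypothetical, which is exactly the consensus problem mentioned above.

```python
# Toy contrast between average utilization and a DCeP-style productivity number.
# "Useful work" is an arbitrary, hypothetical unit here (completed transactions).

servers = [
    # (avg CPU utilization, useful transactions completed, energy used in kWh)
    (0.90, 120_000, 12.0),  # busy and productive
    (0.85, 0, 11.5),        # busy, but running unnecessary jobs
    (0.10, 15_000, 6.0),    # mostly idle
]

avg_utilization = sum(u for u, _, _ in servers) / len(servers)
dcep = sum(work for _, work, _ in servers) / sum(kwh for _, _, kwh in servers)

print(f"Average utilization: {avg_utilization:.0%}")            # ~62%
print(f"DCeP-style productivity: {dcep:,.0f} transactions/kWh")  # ~4,576
```

The second server inflates the utilization average while contributing nothing to the productivity figure, which is the point Sherman’s ACE discussion hinges on.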

The capacity side of data center operation is rather complex. Because of cooling considerations, some space may not be usable for equipment even though it is physically available; this leads to stranded capacity. FF creates a 3D model of the data center and conducts engineering simulations. This process, according to Sherman, needs human intervention at this point.
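Stranded capacity can be quantified as the gap between what the infrastructure was built to support and what can actually be deployed once airflow limits are respected. A hypothetical sketch, with invented numbers; in practice the per-rack limits would come from the CFD model:

```python
# Hypothetical stranded-capacity calculation (illustrative numbers only).

design_power_kw = 1000            # what the facility was built to support
racks = 50
airflow_limited_kw_per_rack = 14  # usable load per rack once hot spots are avoided

deployable_kw = racks * airflow_limited_kw_per_rack  # 700 kW
stranded_kw = design_power_kw - deployable_kw        # 300 kW that can never be used

print(f"Deployable: {deployable_kw} kW, stranded: {stranded_kw} kW "
      f"({stranded_kw / design_power_kw:.0%} of design capacity)")
```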

What is a good metric?

Many metrics have been proposed before, including one from Japan called data center performance per energy (DPPE). TGG is adding new metrics for factors like carbon and water. But as far as I know, PUE is the only one that is used widely and internationally. A good metric should be easy to understand, be measured automatically, and consider relevant factors. There may be more requirements, but these seem reasonable. The first two requirements may contradict the last one. Simple and easy metrics usually do not consider many factors in their computation. But without enough factors considered, does the metric make sense?

There must be a good balance among those three requirements. And FF’s ACE seems to be a good compromise for advancing PUE to the next stage of assessing data center efficiency.

Market segments

I asked Sherman what market segments are keen on FF’s solutions. He mentioned that companies whose businesses depend on IT infrastructure tend to like their solutions: financial businesses like banks, securities companies, and real estate companies. I am skeptical by nature and asked Sherman whether their solutions had actually been applied successfully. He showed me a whitepaper: CBRE, an international real estate company, saved a lot of money with them. Sherman added two more market segments: colocation and cloud.

Summary

As everything is getting connected (Internet of Everything), the data center’s role is becoming even more important. Data centers are often blamed for consuming too many resources like power and water, but at the same time they are fueling the current information age and are indispensable. It is vital to have energy-efficient data centers, and without a good metric for efficiency, we cannot develop and maintain them well.

Read the whole article on Tek-Tips Forums.