Press "Enter" to skip to content

Industry Says Not So Fast to OMB’s Proposed Data Center Policy Changes

The first weekday of the government shutdown, Dec. 26, was also the deadline for interested parties to comment on the administration’s draft Data Center Optimization Initiative policy update.

The draft policy, released in November, extends restrictions on building new or expanding existing federally owned data centers and tweaks the metrics by which agencies’ closure and optimization efforts are measured.

“Much of the ‘low-hanging’ fruit of easily consolidated infrastructure has been picked, and to realize further efficiencies will require continued investment to address the more complex areas where savings is achievable,” Federal Chief Information Officer Suzette Kent wrote in the draft, signaling a focus on finding savings by optimizing the efficiency of existing environments rather than on closures.

The draft still calls for agencies to continue reducing the number of data centers they manage, but those efforts will be targeted to areas with the biggest potential impact.

The Professional Services Council—one of six organizations to submit comments on the draft—argued against this pivot and suggested moving data center closures back to the top of the priority list.

“The updated DCOI policy should not be a simple revision that anticipates only diminishing returns from this decade-long IT rationalization effort,” PSC Executive Vice President and Counsel Alan Chvotkin said. “Congressional committee and GAO oversight documents … indicate that many agencies can still realize significant cost savings and avoidance through data center consolidation. While there is a role for government-owned, on-premises data centers, agencies should first consider ways to leverage vendors’ commercial capabilities.”

Alla Seiffert, director of cloud policy and counsel for the Internet Association, agreed the policy “should continue to set a clear, well-defined north star for data center closures for agencies.”

She also homed in on the idea of “cost savings” as a primary concern, suggesting instead that OMB include non-cost factors such as cybersecurity.

Commenters from the Information Technology Industry Council, or ITI, agreed, suggesting that OMB change the metric to “Total Cost of Acquisition” to capture both direct and indirect costs, such as security, scalability, resiliency and overall efficiency.

“Focusing on the TCA recognizes that processes are tools to an end, not an end in themselves,” they wrote. “Thus, it allows the government to assess whether the benefit of any process is worth the cost. Any process not mandated by law that increases the TCA should be avoided.”
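
ITI’s comments don’t spell out a formula, but the idea is essentially a roll-up of every cost a process adds. The sketch below is an illustration of that reading only; the cost categories and dollar figures are assumptions, not values from the ITI comments or the draft policy.

```python
# Hypothetical illustration of a Total Cost of Acquisition (TCA) roll-up.
# Category names and dollar figures are assumptions for illustration,
# not values from the ITI comments or the draft DCOI policy.

direct_costs = {
    "hardware": 1_200_000,    # servers, storage, network gear
    "facilities": 450_000,    # space, power, cooling
}

indirect_costs = {
    "security": 300_000,      # monitoring, accreditation, patching labor
    "scalability": 150_000,   # capacity headroom held in reserve
    "resiliency": 200_000,    # failover sites, backup infrastructure
}

tca = sum(direct_costs.values()) + sum(indirect_costs.values())
print(f"Total Cost of Acquisition: ${tca:,}")  # -> $2,300,000
```

Under this reading, any optional process that pushes the total up without a corresponding benefit is one ITI would have agencies avoid.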

ITI representatives also warned against eliminating the “Power Usage Effectiveness” metric, which is on the chopping block for its inconsistency.

“While it is true that geographic considerations impact PUE values, it remains a valid tool for measuring system efficiency,” ITI wrote, noting power costs differ for distinct regions. “Further, it can be used for easy comparison of facilities within a geographic region, e.g. Austin Texas, the pacific northwest, etc. Reducing PUE will enable agencies to meet operational effectiveness mission goals. … Discarding PUE entirely would be contrary to the intent of improving energy efficiency.”
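
PUE itself is a simple industry-standard ratio: total facility energy divided by the energy that actually reaches IT equipment, with 1.0 as the theoretical ideal. A minimal sketch follows; the sample readings are illustrative, not drawn from any agency’s reporting.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A value of 1.0 means every watt entering the facility reaches IT gear;
# higher values reflect overhead from cooling, power distribution, lighting, etc.
# The sample readings below are illustrative only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # 1.5
```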

While ITI argued in favor of keeping a waning metric, PSC suggested adding another: storage utilization.

“Such a metric could complement virtualization and server utilization metrics, for example, by adding a measure of storage density, or how fast applications can read or write data given a fixed volume of data,” Chvotkin said.
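
PSC’s letter doesn’t define the metric precisely. One plausible reading, sketched here purely as an assumption, combines capacity in use as a share of what is provisioned with read/write throughput normalized by volume.

```python
# Hypothetical storage utilization metrics along the lines PSC describes:
# capacity in use as a share of provisioned capacity, plus read/write
# throughput per provisioned terabyte ("storage density"). Both
# formulations are assumptions; the PSC comments give no exact formula.

def capacity_utilization(used_tb: float, provisioned_tb: float) -> float:
    return used_tb / provisioned_tb

def io_density(read_mbps: float, write_mbps: float, provisioned_tb: float) -> float:
    """Combined read/write throughput (MB/s) per provisioned terabyte."""
    return (read_mbps + write_mbps) / provisioned_tb

print(f"Utilization: {capacity_utilization(320, 500):.0%}")            # 64%
print(f"I/O density: {io_density(4_000, 1_500, 500):.1f} MB/s per TB")  # 11.0
```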

Software vendor ScienceLogic also suggested a significant change to the metrics, favoring a more granular “business service” availability measure over pure “data center” availability.

“ScienceLogic currently uses the construct of a ‘business service’ availability metric to differentiate from individual [IT service management], device, network or security availability. These individual metrics combine to form a composite metric for the availability for a facility, under the umbrella of overall data center availability, which can be considered as the key ‘business service’ being measured,” they explained.

By looking at the availability of each component of a data center, such as power, servers, networks and devices, agencies can pinpoint individual points of failure and make adjustments where they are needed.

“Although facility availability can be measured as a singular metric, the inclination to measure availability based on any single point of failure, or any single metric, could prove considerably inaccurate for a facility,” ScienceLogic representatives wrote.
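
ScienceLogic’s comments don’t give a formula for the composite metric. A common way to combine component availabilities, assuming the components are independent and all required for the service, is to multiply them, which is what this sketch does; the figures are invented for illustration.

```python
# Sketch of a composite "business service" availability metric.
# Assumption: the components are independent and all required, so the
# composite is the product of the individual availabilities.
# ScienceLogic's actual model may weight or combine these differently.

component_availability = {
    "power":   0.9999,
    "servers": 0.9990,
    "network": 0.9995,
    "devices": 0.9985,
}

composite = 1.0
for availability in component_availability.values():
    composite *= availability

print(f"Composite availability: {composite:.4%}")
# Any single weak component drags the composite down, which is why
# measuring availability from a single point can misstate the facility.
```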

While the various commenters highlighted different concerns, most cited the incomplete Cloud Smart policy—the update to the previous administration’s Cloud First policy—as a bottleneck for the data center policy.

The comments are publicly available on OMB’s GitHub page. However, federal officials won’t be able to review that input or make changes to the draft policy until OMB gets funding for this fiscal year.

source: NextGov