Infrastructure Automation Benefits — Myth or Meaningful?
Automation tools can help with capacity planning and data center management.
November 27, 2019
By Mark Gaydos
There isn’t any such thing as “economy of scale” where data centers are concerned. Bigger is always more complex, and with the public cloud, the cost simply shifts from a capital expenditure (capex) to an operating expense (opex). Big data center “efficiency” is another tall tale: an IDC survey of 400 data center IT and facilities professionals found that 35% of them have a PUE of 2.8 or higher. The power usage effectiveness (PUE) ratio measures how efficiently a data center uses energy: total facility power divided by the power delivered to IT equipment. That’s quite interesting, given that big operators such as hyperscale cloud companies and large colocation providers claim a PUE between 1.1 and 1.4.
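To make that gap concrete, here is a quick back-of-the-envelope calculation. The kilowatt figures are illustrative, not from the IDC survey:

```python
# Illustrative PUE arithmetic -- the kW figures below are made up
# for the sake of example, not taken from the IDC survey.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / power delivered to IT equipment."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 1_000  # hypothetical IT load

# A facility at PUE 2.8 draws 2,800 kW to deliver 1,000 kW of compute;
# a hyperscale facility at 1.2 draws just 1,200 kW for the same load.
for facility_pue in (2.8, 1.2):
    total = it_load_kw * facility_pue
    overhead = total - it_load_kw
    print(f"PUE {facility_pue}: {total:,.0f} kW total, "
          f"{overhead:,.0f} kW going to cooling, power distribution, etc.")
```

At PUE 2.8, nearly two watts are lost to overhead for every watt of useful compute; at 1.2, overhead is a fraction of that.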
Another phenomenon uncovered by the same IDC survey is that many data center operators (about 35%) are hiring additional IT staff to deal with the onslaught of growing equipment and application challenges. Sound familiar? It’s the same 1990s philosophy of throwing more hardware at shrinking bandwidth, all over again. Back then, it was better application awareness and network management visibility that finally quelled the flow of unneeded switches and routers.
However, IDC also uncovered an interesting stat: approximately 28% of those 400 professionals surveyed are investing in automation tools to address the equipment and application challenges. This is more in line with the “awareness” cure that ended the decade of hardware purchasing glut. Tools such as data center infrastructure management (DCIM) and technology asset management (TAM) are automation tools that improve:
Visibility into data center operations and costs.
Efficiency by making better use of equipment already owned.
The ability to plan for capacity needs.
A stat of 28% investing in automation tools sounds encouraging. But another survey on technology asset management, from Nlyte Software, discovered that of 1,500 technology asset decision-makers, almost all of them (96%) view hardware and software asset control as a top-five priority for their business, which isn’t surprising. What is surprising is that almost one third (31%) of the enterprise companies responding are still attempting to track their assets manually. That isn’t just surprising; it’s absurd!
Real-time information is important because fresh data is required by all parts of the organization: service management wants to know the configuration of the hardware and the operating systems it runs; facilities needs to know how many servers there are to properly power the racks; and the mergers and acquisitions team needs to know exactly which IT assets the company is acquiring. Spreadsheets become outdated the moment the manually gathered, static information is placed into their cells.
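As a sketch of why freshness matters, consider a minimal asset record that carries its own last-verified timestamp. The field names here are hypothetical, not taken from any particular DCIM or TAM product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AssetRecord:
    """Minimal, illustrative asset record; field names are hypothetical."""
    asset_tag: str
    model: str
    rack_location: str
    power_draw_watts: int
    last_verified: datetime  # when this record was last confirmed against reality

    def is_stale(self, max_age: timedelta = timedelta(hours=24)) -> bool:
        """A record older than max_age can no longer be trusted by service
        management, facilities, or an M&A due-diligence team."""
        return datetime.now(timezone.utc) - self.last_verified > max_age

server = AssetRecord("SRV-0042", "PowerEdge R740", "DC1-R12-U07", 450,
                     last_verified=datetime(2019, 11, 1, tzinfo=timezone.utc))
print(server.is_stale())  # True -- exactly the spreadsheet problem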
All of these applications depend on a stable and secure physical infrastructure. Whether it is located on-premises, in colocation or in “edge” facilities, managers must be certain these resources aren’t compromised, either intentionally by outside threats or accidentally by employees. The fact is that resources are often compromised when IT staff make unplanned and/or unrecorded changes to assets, i.e., human error. Employees may make well-intentioned modifications, such as adding or removing servers or blades, without approval or without recording the information centrally.
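One way automated tooling catches these unrecorded changes is by diffing what a discovery scan actually finds against what the inventory of record says should be there. A minimal sketch, with made-up asset tags:

```python
def find_drift(recorded, discovered):
    """Compare asset tags in the inventory of record against asset
    tags found by an automated network discovery scan."""
    unrecorded = discovered - recorded  # added without approval or recording
    missing = recorded - discovered     # removed (or failed) without recording
    return unrecorded, missing

recorded = {"SRV-0042", "SRV-0043", "SW-0101"}        # inventory of record
discovered = {"SRV-0042", "SW-0101", "SRV-0099"}      # hypothetical scan result

unrecorded, missing = find_drift(recorded, discovered)
print("Unrecorded assets:", unrecorded)  # {'SRV-0099'}
print("Missing assets:", missing)        # {'SRV-0043'}
```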
Simply put, a centralized system is needed to manage a large number of distributed infrastructure resources. Managing the ever-expanding network edge, or assets spread across multiple colocation facilities, is a top benefit of DCIM. DCIM tools can collect and analyze data for power and cooling maintenance remotely, which will become essential because traveling to remote edge data centers cannot be a daily IT chore.
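Conceptually, such a collector is little more than a loop that polls each remote site’s sensors and flags readings that need attention, so nobody has to drive out to check. A rough sketch, assuming a hypothetical read_sensors() helper rather than any real DCIM API:

```python
SITES = ["edge-denver", "edge-austin", "colo-ashburn"]  # hypothetical site IDs

def read_sensors(site):
    """Placeholder for a real telemetry call (SNMP, Redfish, IPMI, etc.);
    returns inlet temperature (C) and rack power draw (kW)."""
    return {"inlet_temp_c": 24.5, "rack_power_kw": 7.2}  # canned demo values

def poll_once(threshold_temp_c=27.0):
    for site in SITES:
        reading = read_sensors(site)
        if reading["inlet_temp_c"] > threshold_temp_c:
            print(f"[ALERT] {site}: inlet {reading['inlet_temp_c']} C")
        else:
            print(f"[ok] {site}: {reading}")

if __name__ == "__main__":
    poll_once()  # in practice this would run on a schedule, e.g. every minute
```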
In addition to the aforementioned issues, if the network isn’t scrutinized with a high level of granularity, operating costs will rapidly increase, because it becomes ever harder to maintain a clear understanding of all the hardware and software pieces now sprawled out to the computing edge. Managers will always be held accountable for every device and piece of software running on the network, no matter where it’s located. Managers savvy enough to deploy a DCIM or TAM solution will avoid many hardware and software problems because they can collect more in-depth information. With that data, they gain a single source of truth, for the entire network, to better manage security, compliance and software licensing.
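With complete discovery data, for instance, a software license compliance check reduces to comparing installed counts against entitlements. Again a hypothetical sketch with invented titles and counts, not any vendor’s API:

```python
from collections import Counter

# Hypothetical discovery output: software titles found on scanned machines.
installed = Counter({"SQL Server 2017": 14, "RHEL 7": 40, "AcmeDB": 9})

# Hypothetical entitlements from purchasing records.
licenses_owned = {"SQL Server 2017": 10, "RHEL 7": 50, "AcmeDB": 9}

for title, count in installed.items():
    owned = licenses_owned.get(title, 0)
    if count > owned:
        print(f"[compliance risk] {title}: {count} installed, {owned} licensed")
    else:
        print(f"[ok] {title}: {count}/{owned} licenses in use")
```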
So when it comes to the data center’s economy of scale, there is no proportionate cost saving from increased scale without tools that help automate the hardware and software discovery process. Industry surveys show increasing percentages of IT and facilities managers turning to automation tools, but not as fast as networks are growing in size and complexity. These solutions ease the process of discovering and tracking network-connected assets, reduce downtime caused by human error, help avoid regulatory and compliance issues, and help managers accurately plan for capacity needs.
Mark Gaydos is chief marketing officer for Nlyte Software, a leading data center infrastructure management (DCIM) solution provider. He brings more than 20 years of software marketing experience to the role and has an MBA in management science from San Diego State University and a bachelor’s in economics from the University of California, Santa Barbara. Follow him on LinkedIn or @nlyte on Twitter.