Not long ago there was a pervasive feeling that everything was moving to the cloud, an unstoppable, inevitable flow of on-premise IT capacity toward huge, highly centralized service-provider data centers. For many reasons, this hasn’t happened. One big reason is the growing need for compute capacity at the edge of networks, closer to users, whether those users are humans or other machines. Why is this needed? Sending data over a network takes time and costs money, enough that it increasingly makes more sense to deploy compute and storage at the edge rather than moving all that data to and from the cloud or some other remote data center.
The “Internet of Things” (IoT) phenomenon is itself a big driver of the need for edge computing capacity, as more and more connected devices generate and collect data. With the cost of sensors and network connectivity now so low, the volume of data collected by local devices has exploded. Combine that with advances in data analytics, automation, and artificial intelligence, and this data has become very valuable. The increasing value of device data is driving an IoT world where everything is connected and continuously collecting data. It now makes more financial sense to process, clean, and store at least some of this data locally than to send it all to a distant data center. Beyond sensitivity to latency and cost, the fundamental driver for having compute at the edge of the network comes down to the desire to benefit from digitization. Adding compute and network connectivity to every “thing” and to virtually every aspect of our society is dramatically improving society’s productivity, efficiency, and wellbeing. Compute everywhere is our future.
Use cases for edge computing deployments have exploded as a result. The digitization of industrial processes and manufacturing is certainly a key use case. Brick-and-mortar retail’s deployment of local IT to provide in-store, immersive digital experiences is another. The deployment of 5G mobile networks, however, may have the biggest impact on the growth of the edge computing market. 5G offers the promise of sub-millisecond latency, a speed necessary to make many of the world’s tech dreams a reality, such as fully autonomous vehicles, robotic surgeries, virtual/augmented reality, and the real-time management of distributed energy sources. 5G will enable incredibly high data speeds for huge numbers of users while improving reliability and security in an energy-efficient manner. 5G’s communication architecture requires the deployment of hundreds of thousands of mini communication clouds and antennae to make all of this come to fruition. So not only will 5G networks help drive the larger edge computing market by enabling edge applications to do even more, the deployment of 5G itself will be a significant edge computing application driving the overall market.
This “compute everywhere” trend has led to a hybrid computing architecture in which more and more of an organization’s IT assets and data are spread across large centralized data centers, smaller regional data centers, and very small local edge sites. This highly distributed environment creates challenges for those deploying and managing the IT infrastructure, and the complexity is exacerbated by the fact that each local edge site requires high availability to ensure uninterrupted operations and service. As IoT technologies and edge computing applications become a more integral part of day-to-day business and the customer experience, the edge IT infrastructure that houses the associated distributed IT equipment must be robust. The role of IT is no longer viewed as that of a cost center; rather, it is tightly connected to business strategy and profit as a value creator, making resiliency even more imperative.
There are two unique attributes of local edge environments, in contrast with regional edge or centralized data centers, that make it challenging to achieve the necessary resiliency: (1) a lack of IT and/or facilities staff on-site, and (2) having many geographically dispersed sites. Together, these two constraints create a distinct set of operational issues.
Software management is a critical part of solving these edge challenges, because management tools provide the visibility and control from afar that unstaffed, dispersed sites demand. New cloud-based software suites have emerged that offer open APIs and take advantage of cloud, IoT, data analytics, and artificial intelligence technologies. These tools connect members of the ecosystem throughout the operations phase of edge IT deployments. These new capabilities, along with the managed service providers (MSPs) who employ them, essentially augment the end user’s staff by providing remote visibility into, and proactive control over, all edge IT assets.
To paint the picture in very broad strokes, the ecosystem works together to simplify the design and deployment phases while providing both a physical and virtual workforce to ease management and maintenance burdens. By working together, edge computing owners and operators will be capable not just of surviving in our new, complex world of “compute everywhere” but of thriving in whatever ways they serve their customers. To learn more about mitigating edge challenges through this integrated ecosystem, download our white paper “Integration isn’t just for the micro data center”.
It’s clear that lithium ion (Li-ion) batteries stand poised to deliver some dramatic changes to the field of data center uninterruptible power supplies (UPSs), mainly due to their longer lifetime along with reduced weight, footprint and cooling requirements compared to lead-acid batteries that are commonly used in UPSs today. In this post, I’ll try to paint a picture of just how dramatic that change might be in small to medium-size data centers (and, in a future post, I’ll discuss potential impacts for facility-scale UPSs).
For starters, valve-regulated lead-acid (VRLA) batteries take up significant space. This is one reason why large, and even medium-size, companies typically don’t place them in the IT “white room.” What’s more, many organizations in recent years have been raising the temperature of their data center server rooms to save on cooling costs, in line with guidance from organizations such as ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers). IT equipment and UPSs can tolerate the higher temperatures just fine. VRLA batteries, on the other hand, will age and die prematurely at those higher temperatures.
For all of these reasons, companies tend to create battery rooms specifically to house their VRLA batteries. Li-ion technology promises to enable a dramatic reduction in the size of these rooms, by a factor of 2 to 3, simply because Li-ion batteries pack so much more energy into a much smaller footprint. This will increase the footprint available for IT space while also reducing cooling requirements, which saves on both capital costs and ongoing operating costs.
What’s more, in some instances Li-ion batteries may obviate the need for separate battery rooms altogether, by enabling batteries to be installed in the IT room along with the UPS. This is especially likely for small- to medium-size data centers. The strategy frees up useful space, simplifies installation and positions the UPS and associated batteries closer to the IT load, which provides better protection from any potential upstream electrical issues.
Similarly, companies that use scalable, adaptable integrated data center architectures, such as Schneider Electric InfraStruxure, could benefit further from Li-ion technology. With such architectures, IT racks, power, and cooling components are built and tested as part of an integrated data center solution which can then be expanded as necessary over time. Li-ion batteries will make these integrated “pods” even more space- and energy-efficient than they already are, while delivering the same benefits of having the UPS/battery combination close to the IT loads they protect and remaining easily scalable to keep up with data center growth.
In addition to reducing space and energy requirements, Li-ion batteries last twice as long and require less maintenance than their VRLA counterparts. They also come with advanced battery monitoring systems (BMS), giving IT groups easy, remote access to a reliable measure of the “state of health” and “state of charge” of their batteries. Less maintenance also means fewer non-IT people need to be in the data center, which addresses a constant concern for IT groups.
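To make the BMS idea concrete, here is a minimal, hypothetical sketch of what acting on remote battery telemetry might look like. The `BatteryReading` structure, the field names, and the thresholds are illustrative assumptions, not the API of any real BMS product:

```python
from dataclasses import dataclass

@dataclass
class BatteryReading:
    """One snapshot reported remotely by a battery monitoring system (BMS).

    All fields are illustrative; real BMS data models vary by vendor.
    """
    state_of_charge: float   # percent of usable capacity currently stored
    state_of_health: float   # percent of original rated capacity remaining
    temperature_c: float     # cell temperature in degrees Celsius

def needs_attention(reading: BatteryReading,
                    min_soh: float = 80.0,
                    max_temp_c: float = 40.0) -> bool:
    """Flag a battery string for proactive service before it can fail.

    Thresholds are hypothetical defaults chosen for this sketch.
    """
    return (reading.state_of_health < min_soh
            or reading.temperature_c > max_temp_c)

# A healthy Li-ion string versus an aging one
healthy = BatteryReading(state_of_charge=98.0, state_of_health=95.0, temperature_c=27.0)
aging = BatteryReading(state_of_charge=97.0, state_of_health=72.0, temperature_c=29.0)
print(needs_attention(healthy))  # False
print(needs_attention(aging))    # True
```

The point of the sketch is the workflow, not the numbers: with state-of-health reported remotely, a degrading battery can be scheduled for replacement without sending anyone into the data center to test it.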
I expect we’ll also see great improvement in smaller, single-cabinet UPSs thanks to the combination of higher-density power electronics and Li-ion batteries. A cabinet that today supports about 60 kVA with 10 minutes of energy storage using VRLA batteries may one day protect 150-200 kVA with that same 10 minutes of storage using Li-ion batteries, effectively more than doubling its power density. Such a density improvement should substantially change the old knock against UPSs, that they’re “necessary but bulky.”
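The density claim above can be checked with simple arithmetic, using only the figures from this post (60 kVA today, a projected 150-200 kVA in the same cabinet footprint):

```python
def power_density_gain(old_kva: float, new_kva: float) -> float:
    """Ratio of new to old power capacity for the same cabinet footprint."""
    return new_kva / old_kva

# Figures from the post: ~60 kVA today with VRLA batteries, versus a
# projected 150-200 kVA with Li-ion, both with 10 minutes of storage.
low = power_density_gain(60, 150)
high = power_density_gain(60, 200)
print(f"{low:.1f}x to {high:.1f}x")  # prints "2.5x to 3.3x"
```

So “more than doubling” is, if anything, conservative: the projected range works out to roughly 2.5x to 3.3x the power in the same footprint.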
With this paradigm shift, it is also not difficult to imagine more power protection integrated right into IT racks, because it will take up far less space and require dramatically less frequent maintenance.
These are just a few of the ways I expect Li-ion batteries will change the 3-phase UPS landscape in data centers in coming years. I’d love to hear your take on the topic, so feel free to let me know using the comments below. And keep an eye out for my next post on what Li-ion technology will mean for large data centers and facility-scale UPSs.