Learn more about how we're making progress toward our sustainability commitments through the Sustainable by design blog series, starting with Sustainable by design: Advancing the sustainability of AI.
Earlier this summer, my colleague Noelle Walsh published a blog detailing how we're working to conserve water in our datacenter operations: Sustainable by design: Transforming datacenter water efficiency, as part of our commitment to our sustainability goals of becoming carbon negative, water positive, zero waste, and protecting biodiversity.
At Microsoft, we design, build, and operate cloud computing infrastructure spanning the whole stack, from datacenters to servers to custom silicon. This creates unique opportunities for orchestrating how the elements work together to enhance both performance and efficiency. We consider the work to optimize power and energy efficiency a critical path to meeting our pledge to be carbon negative by 2030, alongside our work to advance carbon-free electricity and carbon removal.
The rapid growth in demand for AI innovation to fuel the next frontiers of discovery has provided us with an opportunity to redesign our infrastructure strategies, from datacenters to servers to silicon, with efficiency and sustainability at the forefront. In addition to sourcing carbon-free electricity, we're innovating at every level of the stack to reduce the energy intensity and power requirements of cloud and AI workloads. Even before the electrons enter our datacenters, our teams are focused on how we can maximize the compute power we can generate from each kilowatt-hour (kWh) of electricity.
In this blog, I'd like to share some examples of how we're advancing the power and energy efficiency of AI. This includes a whole-systems approach to efficiency and applying AI, specifically machine learning, to the management of cloud and AI workloads.
Driving efficiency from datacenters to servers to silicon
Maximizing hardware utilization through smart workload management
True to our roots as a software company, one of the ways we drive power efficiency within our datacenters is through software that enables workload scheduling in real time, so we can maximize the utilization of existing hardware to meet cloud service demand. For example, we might see higher demand when people are starting their workday in one part of the world, and lower demand across the globe where others are winding down for the evening. In many cases, we can align availability for internal resource needs, such as running AI training workloads during off-peak hours, using existing hardware that would otherwise be idle during that timeframe. This also helps us improve power utilization.
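To make the idea concrete, here is a minimal sketch of off-peak placement in Python. The regions, time windows, and function names are invented for illustration and are not our production scheduler; the point is simply that internal training work can be deferred to hours when customer demand on a region is historically low.

```python
# Minimal sketch (illustrative only): defer internal training jobs to hours
# when regional customer demand is historically low, so otherwise-idle
# hardware absorbs the work.
from datetime import datetime, timedelta, timezone

# Hypothetical off-peak windows (local hours) and UTC offsets per region.
OFF_PEAK_HOURS = {"west-europe": range(0, 6), "east-us": range(1, 7)}
REGION_UTC_OFFSET = {"west-europe": 1, "east-us": -5}

def is_off_peak(region: str, now_utc: datetime) -> bool:
    """Return True if the region is currently inside its off-peak window."""
    local = now_utc + timedelta(hours=REGION_UTC_OFFSET[region])
    return local.hour in OFF_PEAK_HOURS[region]

def place_training_job(job: str, regions: list[str], now_utc: datetime) -> str:
    """Run the job in the first off-peak region, otherwise hold it in a queue."""
    for region in regions:
        if is_off_peak(region, now_utc):
            return f"run {job} in {region}"
    return f"queue {job} until an off-peak window opens"

print(place_training_job("training-batch-01", ["east-us", "west-europe"],
                         datetime.now(timezone.utc)))
```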
We use the power of software to drive energy efficiency at every level of the infrastructure stack, from datacenters to servers to silicon.
Historically across the industry, executing AI and cloud computing workloads has relied on assigning central processing units (CPUs), graphics processing units (GPUs), and processing power to each team or workload, delivering a CPU and GPU utilization rate of around 50% to 60%. This leaves some CPUs and GPUs with underutilized capacity, potential capacity that could ideally be harnessed for other workloads. To address the utilization challenge and improve workload management, we've transitioned Microsoft's AI training workloads into a single pool managed by a machine learning technology called Project Forge.
Currently in production across Microsoft services, this software uses AI to virtually schedule training and inferencing workloads, including transparent checkpointing that saves a snapshot of an application or model's current state so it can be paused and restarted at any time. Whether running on partner silicon or Microsoft's custom silicon such as Maia 100, Project Forge has consistently increased our efficiency across Azure to 80 to 90% utilization at scale.
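The checkpointing pattern can be pictured with a small, generic sketch: periodically snapshot a training job's state so a scheduler can pause it and resume it later on whatever hardware is free. The PyTorch-style code below only illustrates that pattern and is not Project Forge's implementation.

```python
# Illustrative checkpoint/resume pattern for a training job (generic PyTorch
# example; not Project Forge's implementation).
import os
import torch

CKPT_PATH = "checkpoint.pt"  # hypothetical snapshot location

def save_checkpoint(model, optimizer, step):
    """Snapshot model and optimizer state so the job can be paused safely."""
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "step": step}, CKPT_PATH)

def restore_checkpoint(model, optimizer):
    """Resume from the latest snapshot if one exists, else start at step 0."""
    if not os.path.exists(CKPT_PATH):
        return 0
    state = torch.load(CKPT_PATH)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
step = restore_checkpoint(model, optimizer)
while step < 1000:
    optimizer.zero_grad()
    loss = model(torch.randn(8, 16)).pow(2).mean()
    loss.backward()
    optimizer.step()
    step += 1
    if step % 100 == 0:  # a scheduler could preempt the job at any point here
        save_checkpoint(model, optimizer, step)
```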
Safely harvesting unused power across our datacenter fleet
Another way we improve power efficiency involves placing workloads intelligently across a datacenter to safely harvest any unused power. Power harvesting refers to practices that enable us to maximize the use of our available power. For example, if a workload is not consuming the full amount of power allocated to it, that excess power can be borrowed by or even reassigned to other workloads. Since 2019, this work has recovered approximately 800 megawatts (MW) of electricity from existing datacenters, enough to power approximately 2.8 million miles driven by an electric car.1
Over the past year, even as customer AI workloads have increased, our rate of improvement in power savings has doubled. We're continuing to implement these best practices across our datacenter fleet in order to recover and reallocate unused power without impacting performance or reliability.
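As a simplified illustration of the power-harvesting idea, the sketch below compares each workload's allocated power budget with its measured draw and totals the headroom that could be lent to other workloads, keeping a safety margin. The names and numbers are invented; real placement decisions involve far more telemetry and reliability constraints.

```python
# Simplified sketch of power harvesting: find headroom between allocated and
# measured power that could be safely reassigned to other workloads.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    allocated_kw: float  # power budgeted to the workload
    measured_kw: float   # power it is actually drawing

def harvestable_power(workloads, safety_margin_kw: float = 5.0) -> float:
    """Total headroom that can be lent out, keeping a per-workload margin."""
    total = 0.0
    for w in workloads:
        headroom = w.allocated_kw - w.measured_kw - safety_margin_kw
        if headroom > 0:
            total += headroom
    return total

rack = [
    Workload("web-frontend", allocated_kw=100, measured_kw=60),
    Workload("batch-inference", allocated_kw=80, measured_kw=78),
    Workload("ai-training", allocated_kw=120, measured_kw=119),
]
print(f"Power available to reassign: {harvestable_power(rack):.1f} kW")
```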
Driving IT hardware efficiency through liquid cooling
In addition to power management of workloads, we're focused on reducing the energy and water requirements of cooling the chips and the servers that house those chips. With the powerful processing of modern AI workloads comes increased heat generation, and using liquid-cooled servers significantly reduces the electricity required for thermal management compared to air-cooled servers. The transition to liquid cooling also enables us to get more performance out of our silicon, as the chips run more efficiently within an optimal temperature range.
A significant engineering challenge we faced in rolling out these solutions was how to retrofit existing datacenters designed for air-cooled servers to accommodate the latest advancements in liquid cooling. With custom solutions such as the "sidekick," a component that sits adjacent to a rack of servers and circulates fluid like a car radiator, we're bringing liquid cooling solutions into existing datacenters, reducing the energy required for cooling while increasing rack density. This in turn increases the compute power we can generate from each square foot within our datacenters.
Learn more and explore resources for cloud and AI efficiency
Stay tuned to learn more on this topic, including how we're working to bring promising efficiency research out of the lab and into commercial operations. You can also read more about how we're advancing sustainability through our Sustainable by design blog series, starting with Sustainable by design: Advancing the sustainability of AI and Sustainable by design: Transforming datacenter water efficiency.
For architects, lead developers, and IT decision makers who want to learn more about cloud and AI efficiency, we recommend exploring the sustainability guidance in the Azure Well-Architected Framework. This documentation set aligns to the design principles of the Green Software Foundation and is designed to help customers plan for and meet evolving sustainability requirements and regulations around the development, deployment, and operation of IT capabilities.
1Equivalency assumptions based on estimates that an electric car can travel on average about 3.5 miles per kilowatt hour (kWh) x 1 hour x 800.
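As a rough check on the 2.8 million mile figure, assuming the roughly 800 MW of recovered capacity is applied for one hour:

```latex
800\ \text{MW} \times 1\ \text{h} = 800{,}000\ \text{kWh}, \qquad
800{,}000\ \text{kWh} \times 3.5\ \tfrac{\text{miles}}{\text{kWh}} = 2{,}800{,}000\ \text{miles}
```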