Friday, November 8, 2024

Google to IBM: How the tech giants are embracing Nvidia's new hardware and software services



Nvidia has gone all in to push the boundaries of computing at the ongoing GTC conference in San Jose.

CEO Jensen Huang, donning a black leather jacket, addressed a packed crowd (the event looked more like a concert than a conference) in his keynote and announced the long-awaited GB200 Grace Blackwell Superchip, promising up to a 30x performance increase for large language model (LLM) inference workloads. He also shared notable developments across automotive, robotics, Omniverse and healthcare, flooding the internet with all things Nvidia.

However, GTC isn't complete without industry partnerships. Nvidia shared how it is evolving its work with several industry giants that are bringing its newly announced AI computing infrastructure, software and services into their tech stacks. Below is a rundown of the key partnerships.

AWS

Nvidia said AWS will offer its new Blackwell platform, featuring the GB200 NVL72 with 72 Blackwell GPUs and 36 Grace CPUs, on EC2 instances. This will enable customers to build and run real-time inference on multi-trillion-parameter LLMs faster, at massive scale and at a lower cost than with previous-generation Nvidia GPUs. The companies also announced they are bringing 20,736 GB200 superchips to Project Ceiba – an AI supercomputer built exclusively on AWS – and teaming up to integrate Amazon SageMaker with Nvidia NIM inference microservices.


Google Cloud

Like Amazon, Google also announced it is bringing Nvidia's Grace Blackwell platform and NIM microservices to its cloud infrastructure. The company further said it is adding support for JAX, a Python-native framework for high-performance LLM training, on Nvidia H100 GPUs, and making it easier to deploy the Nvidia NeMo framework across its platform via Google Kubernetes Engine (GKE) and the Google Cloud HPC Toolkit.

Additionally, Vertex AI will now support Google Cloud A3 VMs powered by Nvidia H100 GPUs and G2 VMs powered by Nvidia L4 Tensor Core GPUs.

Microsoft

Microsoft also confirmed plans to add NIM microservices and Grace Blackwell to Azure. However, the superchip partnership also includes Nvidia's new Quantum-X800 InfiniBand networking platform. The Satya Nadella-led company also announced the native integration of DGX Cloud with Microsoft Fabric to streamline custom AI model development, as well as the availability of the newly launched Omniverse Cloud APIs on the Azure Power platform.

In the healthcare space, Microsoft said Azure will use Nvidia's Clara suite of microservices and DGX Cloud to help healthcare providers, pharmaceutical and biotechnology companies and medical device developers innovate quickly across clinical research and care delivery.

Oracle

Oracle said it plans to leverage the Grace Blackwell computing platform across OCI Supercluster and OCI Compute instances, with the latter adopting both the Nvidia GB200 superchip and the B200 Tensor Core GPU. It will also come to Nvidia DGX Cloud on OCI.

Beyond this, Oracle said Nvidia NIM and CUDA-X microservices, including NeMo Retriever for RAG inference deployments, will also help OCI customers bring more insight and accuracy to their generative AI applications.

SAP

SAP is working with Nvidia to integrate generative AI into its cloud solutions, including the latest version of SAP Datasphere, SAP Business Technology Platform and RISE with SAP. The company also said it plans to build additional generative AI capabilities within SAP BTP using Nvidia's generative AI foundry service, featuring DGX Cloud AI supercomputing, Nvidia AI Enterprise software and Nvidia AI Foundation models.

IBM

To help clients solve complex business challenges, IBM Consulting plans to combine its technology and industry expertise with Nvidia's AI Enterprise software stack, including the new NIM microservices and Omniverse technologies. IBM says this will accelerate clients' AI workflows, enhance use case-to-model optimization and develop business- and industry-specific AI use cases. The company is already building and delivering digital twin applications for supply chain and manufacturing using Isaac Sim and Omniverse.

Snowflake

Data cloud company Snowflake expanded its previously announced partnership with Nvidia to integrate with NeMo Retriever. The generative AI microservice connects custom LLMs to enterprise data and will allow the company's customers to enhance the performance and scalability of chatbot applications built with Snowflake Cortex. The collaboration also includes Nvidia TensorRT software, which delivers low latency and high throughput for deep learning inference applications.

Apart from Snowflake, data platform providers Box, Dataloop, Cloudera, Cohesity, DataStax and NetApp also announced plans to use Nvidia microservices, including the all-new NIM technology, to help customers optimize RAG pipelines and integrate their proprietary data into generative AI applications.

Nvidia GTC 2024 runs from March 18 to March 21 in San Jose and on-line.
