Press "Enter" to skip to content

Lenovo and NVIDIA Expand Generative AI Services Partnership

“Almost none of these customers are going to think they’re buying AI” because generative AI capabilities will be tightly bundled with other functions in the future, predicts Lenovo’s Scott Tease.


NVIDIA generative AI enhancements and hardware are now available through Lenovo’s AI Professional Services Practice, Lenovo Chairman and CEO Yuanqing Yang and NVIDIA Founder and CEO Jensen Huang announced today during the Lenovo Tech World 2023 keynote in Austin, Texas. TechRepublic reported on this event remotely.

Many of the products resulting from the partnership between NVIDIA and Lenovo are available now. The Lenovo ThinkSystem SR675 V3 server will fall under NVIDIA’s MGX modular reference design framework going forward, and the Lenovo ThinkStation PX workstation, already on sale, is available bundled with NVIDIA AI Enterprise software.

More products resulting from the engineering partnership between NVIDIA and Lenovo will be announced at GTC 2024, held March 18 to 21.

“Our goal for the initiative is just to deliver AI so that it’s much easier to consume. People may not be buying AI; they’re going to be buying capability that AI unlocks for them,” said Scott Tease, general manager of Lenovo’s high performance computing and artificial intelligence business, in a phone interview with TechRepublic.

All of the products discussed in this article are available internationally wherever NVIDIA and Lenovo products and services are supported.


The partnership targets organizations looking for fine-tuned generative AI models

The Lenovo AI Professional Services Practice currently offers a variety of generative AI, high-performance computing and hybrid cloud services; the NVIDIA partnership adds the ability to create custom AI models using the NVIDIA AI Foundations cloud service. From there, those custom generative AI models can be run on-premises on Lenovo systems with NVIDIA software and hardware.

“What we’re announcing this week is a new era of partnership that’s going to help drive this new era of what we’re calling hybrid AI,” said Tease. “A lot of our customers are creating lots of data at their desktop or next to their desk or in an edge location … It’s not cost effective to move the data. There could be regulations on data moving across country lines, things like that. There could be latency issues. We want to be able to do a portion of that AI value chain right there locally where the data is being created or being stored.”

The professional services — which include everything from sharing ideas about generative AI deployment to data architecture, model building, proof-of-concept deployment and ongoing management of that deployment — will be enabled by capacity sourced from NVIDIA. Lenovo professional services team members are being trained jointly with NVIDIA to help customers adopt AI offerings from both companies.

“(NVIDIA is) the vision behind a lot of the AI goodness that we’re seeing in the world, but we’ve got more feet on the street all around the world calling on end users,” Tease said. “We want to be able to combine forces and be able to tell this story more often. No matter where our customer is on their AI journey, we want to meet them together and help accelerate their journey forward.”

SEE: AWS and IBM Consulting are mobilizing 10,000 consultants to train customers how to use their joint generative AI offerings. (TechRepublic)

Bob Pette, general manager for enterprise platforms at NVIDIA, pointed out that many of NVIDIA’s customers want finely tuned models that contain their own company’s or own department’s data.

“Right now, we’re seeing huge interest in building the foundational models themselves,” Pette said in a phone interview with TechRepublic. “Once we have licensable large language models and we can access this technology, what we’ve basically done is we’ve risen the tide for everybody. What used to be out of reach for a lot of customers is now pretty easily in reach by just taking a large language model, presenting it with your own data, (and) creating a private model to go answer questions that matter to you.”

Lenovo isn’t replacing its focus on cloud with a focus on AI, Pette said, but rather adding foundational model creation for organizations that want to run private generative AI models.

Lenovo workstations and NVIDIA AI Enterprise will be bundled

The Lenovo ThinkSystem SR675 V3 server and ThinkStation PX workstation are both optimized for production AI. Some ThinkStation PX workstations will be bundled with NVIDIA AI Enterprise.

NVIDIA AI Enterprise includes the NVIDIA NeMo framework, which lets customers access NVIDIA AI Foundations to customize enterprise-grade large language models. Retrieval-augmented generation and fine-tuning will help enterprises build generative AI models around their own data.
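To give a rough sense of the retrieval-augmented generation pattern mentioned above, here is a minimal, generic sketch in Python. It is not the NVIDIA NeMo or AI Foundations API; the toy bag-of-words embedding, the sample documents and the call_llm stub are hypothetical stand-ins for a real embedding model, document store and fine-tuned LLM endpoint.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over private documents.
# Generic illustration only; the embedding, store and call_llm stub are placeholders.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a production system would use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Private, on-premises documents the model should be grounded in (sample data).
documents = [
    "Q3 server shipments rose 12% year over year in the EMEA region.",
    "The ThinkSystem SR675 V3 supports multiple GPUs per node for AI workloads.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def call_llm(prompt: str) -> str:
    """Stub standing in for a fine-tuned LLM endpoint (hypothetical)."""
    return f"[model response to: {prompt[:60]}...]"

question = "What does the SR675 V3 support for AI workloads?"
context = "\n".join(retrieve(question))
print(call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```

The point of the pattern is that the organization’s own data never has to be baked into the base model: documents stay in a local store, the most relevant ones are retrieved per query, and the model is prompted with that context, which matches the “build generative AI models around their own data” goal described above.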

The ThinkSystem SR675 V3 will come with NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs and NVIDIA Spectrum-X networking.

In the future, Tease predicted, ” … almost none of these customers are going to think they’re buying AI. We’re going to ingrain (generative AI) capability so deep into the function that they’re seeing that people don’t think about buying AI, they’re just thinking about buying this capability.”

Lenovo expands its use of NVIDIA’s MGX modular design framework

Going forward, the Lenovo ThinkSystem SR675 V3 will move to the MGX modular design, said Pette. The reference design is a document intended to guide the design of modular servers, particularly for generative AI workloads.

The MGX reference architecture is ” … really our AI reference architecture for the future, taking into account latency, node interconnects, and all the things that, if we were a manufacturer, we would suggest,” said Pette. “We get with companies like Lenovo. They have their own suggestions. And what we want to get out of this is a long lead time to put everything down on the motherboard … You’re not re-engineering sheet metal and motherboards.”

Source: TechRepublic