Nvidia Deploys Human AI Experts for Customer Service on AI

As more companies turn to AI for customer support, Nvidia is taking a traditional route to help its customers deploy AI networks.
The company is making “AI Experts” available by phone to provide support and troubleshooting for customers having problems with its AI Enterprise software suite.
Skilled Engineers
Bringing in individuals skilled in AI is a step up in customer service from the regular tech support roles that handle generic hardware or software issues. Nvidia is hiring engineers with specific AI expertise for the job, with the goal of helping companies build AI tools, such as chatbots, for their own customer service platforms.
The customer support representatives are engineers hired by Nvidia and part of an organization called NVEX (Nvidia Enterprise Experience), Bob Crovella, an Nvidia solutions architect, said at a breakout session at the HPE Discover trade show held last week in Las Vegas.
“The skill types tend to be organized more towards the infrastructure than they are towards ‘How do I use TensorFlow?’ We’re not really expecting to answer the question of how to use TensorFlow. Because we expect if your developers… already know how to use TensorFlow,” Crovella said.
The customer service is geared toward advising on configurations and scripts to smooth out deployment, Crovella said. The company also offers customers specialized training resources around tools like TensorFlow and PyTorch.
The support also covers a command-line utility called the System Management Interface (nvidia-smi), which is built to manage and monitor the GPU devices in a deployment. Support is handed off to other units or companies for other problems, such as virtualization or hardware issues.
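For a sense of the kind of check nvidia-smi enables, below is a minimal sketch that calls the utility from Python to report each GPU’s utilization and memory use. The query fields are standard nvidia-smi options; the wrapper script itself is only an illustration, not part of Nvidia’s support tooling.

# Minimal sketch (illustrative only): querying GPU status via nvidia-smi from Python.
# Assumes the Nvidia driver and the nvidia-smi command-line utility are installed.
import subprocess

def gpu_status():
    """Return one dict per GPU with its name, utilization, and memory usage."""
    result = subprocess.run(
        [
            "nvidia-smi",
            "--query-gpu=name,utilization.gpu,memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ],
        capture_output=True,
        text=True,
        check=True,
    )
    gpus = []
    for line in result.stdout.strip().splitlines():
        name, util, mem_used, mem_total = [field.strip() for field in line.split(",")]
        gpus.append({
            "name": name,
            "utilization_pct": int(util),
            "memory_used_mib": int(mem_used),
            "memory_total_mib": int(mem_total),
        })
    return gpus

if __name__ == "__main__":
    for gpu in gpu_status():
        print(gpu)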
Expanding Beyond the GPU
Nvidia aims to expand beyond its heritage as a GPU maker to also offer software and services that help companies adopt AI. AI is still emerging as a computing model, with companies struggling to find the right hardware and software combination.
Nvidia’s customer service organization has historically focused mostly on driver support, and enterprise software is an emerging revenue generator. The company sees a $1 trillion market opportunity in a software market that includes underlying code in autonomous cars and robots. Nvidia is putting its software stack, alongside its chips, into cars from Mercedes-Benz and Volvo.
Nvidia also recognizes the struggles customers face in deploying AI, which is where the customer support role fits in.
AI Enterprise 2.0
The customer service is offered as part of the AI Enterprise 2.0 suite, which the company announced earlier this year. The software includes data science tools and frameworks such as RAPIDS, a collection of libraries based on Nvidia’s CUDA programming framework for running AI workloads on its GPUs. It also bundles frameworks such as PyTorch and TensorFlow, operators for cloud deployments, and virtualized GPU offerings.
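As a rough illustration of what RAPIDS offers, the short sketch below uses cuDF, the RAPIDS GPU data-frame library, whose API mirrors pandas. The data and column names are invented for the example, and it assumes a CUDA-capable GPU with the cudf package installed.

# Illustrative sketch only: a GPU-accelerated aggregation with RAPIDS cuDF.
# Assumes a CUDA-capable GPU and the cudf package from the RAPIDS suite.
import cudf

# Hypothetical latency readings; real workloads would typically load far larger
# datasets with cudf.read_csv() or cudf.read_parquet().
df = cudf.DataFrame({
    "device": ["a", "a", "b", "b", "c"],
    "latency_ms": [12.0, 15.5, 9.2, 11.1, 20.3],
})

# The group-by and mean run on the GPU rather than the CPU.
mean_latency = df.groupby("device")["latency_ms"].mean()
print(mean_latency)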
The company last month introduced the TAO toolkit, a starter kit for companies looking to deploy AI for common workloads like natural language processing. It takes only a few lines of code in a Jupyter notebook to deploy an AI model, and users don’t need to know PyTorch or TensorFlow.
HPE Connection
At last week’s HPE Discover conference, Nvidia announced its AI Enterprise suite would work on HPE’s GreenLake, a customized cloud service for customers. HPE provides the hardware, software, and services, and partners with software providers to tune cloud offerings to customer needs.
An Nvidia-based AI training deployment in HPE GreenLake costs about $6,000 per node, per month, and includes AI Enterprise and VMware’s vSphere running on two Nvidia A100 GPUs with 768GB of RAM; training and setup are included. An inference package costs $5,200 per month, with similar software, one A30 GPU, and 384GB of RAM.
Nvidia’s hardware and software offerings are already available on major public cloud providers.