Nvidia launches a set of microservices for optimized inferencing

At its GTC conference, Nvidia today announced Nvidia NIM, a new software platform designed to streamline the deployment of custom and pre-trained AI models into production environments. NIM packages the software work Nvidia has done around inferencing and model optimization into an easily accessible form: it combines a given model with an optimized inferencing engine, packs the two into a container, and exposes that container as a microservice.
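To make the pattern concrete, here is a minimal illustrative sketch of what "a model behind a microservice" means, not Nvidia's actual code or API: a stand-in inference function served over HTTP, with the endpoint path, payload shape, and the `run_model` stub all hypothetical.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_model(prompt: str) -> str:
    # Stand-in for an optimized inference engine; a real NIM container
    # would run an actual model here.
    return prompt.upper()


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run it through the "model".
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"output": run_model(payload["prompt"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


# Start the microservice on a random free port in a background thread.
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client calls the service over plain HTTP, as any app in the
# deployment could.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/v1/infer",
    data=json.dumps({"prompt": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())
server.shutdown()
print(answer["output"])  # prints "HELLO"
```

The value of the container-plus-microservice packaging is that the caller only needs the HTTP contract; the model and its inference engine can be swapped or optimized behind it without touching client code.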
Nvidia argues that it would typically take developers weeks, if not months, to ship similar containers, and that is assuming the company in question even has in-house AI talent. With NIM, Nvidia clearly aims …