Palo Alto-based Hippocratic AI on Monday announced a collaboration with NVIDIA to develop "empathetic" AI healthcare agents aimed at replacing nurses and other healthcare workers. Hippocratic will use NVIDIA's AI platform to power its agents; according to the company, low-latency voice interactions are essential for patients to build an emotional connection with the agents naturally.

The company's healthcare agents are built on its large language models designed specifically for healthcare. Hippocratic claims that health systems, digital health companies and pharmaceutical firms use its agents to "augment their human staff" and complete low-risk, non-diagnostic, patient-facing tasks over the phone. The company says there is a global shortfall of 15 million healthcare workers and argues that LLMs are the only "scalable way" to close this gap. Its website contrasts the roughly $90 an hour that healthcare organisations reportedly pay a nurse with the $9 an hour it charges for its AI agents.

"With generative AI, patient interactions can be seamless, personalized, and conversational—but in order to have the desired impact, the speed of inference has to be incredibly fast. With the latest advances in LLM inference, speech synthesis and voice recognition software, NVIDIA's technology stack is critical to achieving this speed and fluidity. We're working with NVIDIA to continue refining our technology and amplify the impact of our work of mitigating staffing shortages while enhancing access, equity, and patient outcomes," said Munjal Shah, co-founder and CEO of Hippocratic AI, in a press statement.

Hippocratic will work with NVIDIA to develop an ultra-low-latency inference platform for real-time use cases, building on NVIDIA's low-latency inference stack. It will use NVIDIA Riva models for automatic speech recognition and text-to-speech, customising them for the medical domain. Alongside that, Hippocratic will use NVIDIA NIM microservices to deploy its AI models.
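For readers curious what the Riva piece of that stack looks like in practice, below is a minimal sketch using the publicly available nvidia-riva-client Python package to call a Riva server for speech recognition and speech synthesis. It is not Hippocratic's code: the server address, audio files, language code and voice name are illustrative placeholders, and the company's medical-domain customisations are not represented.

```python
# Minimal sketch (not Hippocratic's implementation): ASR and TTS calls against a
# Riva server via the open-source nvidia-riva-client package. All endpoints,
# file names and the voice name are placeholders for illustration only.
import riva.client

# Assumes a Riva server is reachable at this address.
auth = riva.client.Auth(uri="localhost:50051")

# Automatic speech recognition: transcribe a short recorded call offline.
asr = riva.client.ASRService(auth)
asr_config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    language_code="en-US",
    max_alternatives=1,
    enable_automatic_punctuation=True,
)
with open("patient_call.wav", "rb") as f:          # placeholder audio file
    response = asr.offline_recognize(f.read(), asr_config)
transcript = (
    response.results[0].alternatives[0].transcript if response.results else ""
)
print("Heard:", transcript)

# Text-to-speech: synthesise the agent's spoken reply as raw PCM audio.
tts = riva.client.SpeechSynthesisService(auth)
reply = tts.synthesize(
    "Hi, this is your care assistant calling to check in after your discharge.",
    voice_name="English-US.Female-1",               # placeholder voice
    language_code="en-US",
    sample_rate_hz=16000,
)
with open("agent_reply.raw", "wb") as out:
    out.write(reply.audio)
```

In a real-time phone agent, the streaming variants of these calls would be used instead of the offline ones, since end-to-end latency across recognition, LLM inference and synthesis is exactly what the companies say the partnership is meant to drive down.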