Nvidia, which has ridden the generative artificial intelligence boom to new heights, is eyeing what's next in the healthcare space.
The company’s graphics processing unit chips are powering popular generative AI models such as OpenAI’s ChatGPT, as well as those from Microsoft, Google, Amazon and other big tech firms. The partnerships helped propel the company to a $3 trillion market cap in June.
In the last year, Nvidia has signed agreements to power the AI models of digital health companies GE HealthCare, Nuance, Abridge and Hippocratic AI, along with working on drug discovery and development for a host of pharmaceutical companies, including Amgen. It also launched Nvidia Inference Microservices, which it says will enable more sophisticated models.
The company has worked with hundreds of healthcare organizations since the mid-2000s, but these days the sector holds more potential than ever because of generative AI, said Kimberly Powell, Nvidia’s vice president of healthcare. Powell has been with Nvidia since 2008, when it made its earliest forays into industries beyond gaming.
In an interview, Powell spoke about Nvidia's recent digital health partnerships and AI's potential future in healthcare. The interview has been edited for length and clarity.
Why did Nvidia want to work with healthcare companies that make generative AI tools, like Abridge and Hippocratic AI?
We’re a computing platform company. Working with ecosystem partners is the way we’ve operated from Day Zero. We create a core computing platform and in order to do the translation [into industry-specific AI models], we connect with the experts in that industry.
The CEO of Abridge is a clinician himself. He understands the problems [of the sector]. The last mile is the choking point in this industry: If you can't get [AI-enabled] insights back into the systems that the clinicians use on a day-to-day basis, you will not survive, and you will have a very hard time getting to market.
Abridge has an incredible relationship with the electronic health records companies. Not only are [Abridge executives] thought leaders, but they understand the healthcare domain way more than we would because they were doctors living and breathing it. They understand the modern software architecture and that connecting it to the last mile is the way that they're going to create a successful company.
With Hippocratic AI, it’s a similar concept.
What can you tell us about your partnership with GE HealthCare?
The first call I made when I joined Nvidia was to GE HealthCare. We’ve helped it embrace and innovate on accelerated computing and AI. It builds very complex medical devices that take a couple of years to get to market, and those devices have to live in the market for long periods of time.
[The GE HealthCare team] has taught us a ton about Food and Drug Administration approvals and what it means to be an FDA-regulated device. We've worked on some of its most advanced projects, including AI-guided ultrasound devices.
We’re helping them [develop] their great innovation and making sure the ability to deploy in the last mile is there.
How do you see AI evolving in hospitals in the next couple of years?
You can digitize the movement of humans and the operations of hospitals by using computer vision and cameras. There are smart cities. Why aren't there smart hospitals? Every room and every hallway can and should have video, because you can start thinking about building a digital twin of the hospital and creating incredible efficiencies.
I was just in Taiwan recently. They have cameras everywhere, and they're even deploying robots that can essentially make deliveries for the nurses. They can use them to bring clean sheets into a room autonomously. They’re like the autonomous mobile robots that Amazon uses in its warehouses. And they can use those inside the hospital more and more.
We are going to be introducing autonomy into the hospital with smart sensors like cameras and microphones of all kinds. This is what's going to give way to a truly smart hospital. We’re going to need AI to make the hospital smarter because unfortunately there are fewer and fewer healthcare professionals to take care of what needs to happen inside the hospital. Autonomous activity is going to be present.
Some clinicians have been skeptical of AI. How do you manage that kind of fear?
As the old saying goes, you fear what you don't understand. Back in 2017 at the Radiological Society of North America annual meeting, we stood up a Deep Learning Institute [training]. Our famous godfather of AI Geoffrey Hinton had [previously] said, "Within five years deep learning is going to do better than radiologists." Those radiologists obviously had a pretty strong reaction to that.
So what do we do? We set up a whole Deep Learning Institute [presentation] for radiologists to come in and learn how this works. They can even get into the mix of building these AI models; it requires you to be very technically savvy, and obviously they are well-trained doctors who could contribute. That really helped bring them into the fold and reduce some of that fear.
I think education is a huge part of it. There's a way that you can build these systems to show people the reasoning behind these AI models.