Explainable AI with Deep Learning in healthcare is complicated.

Building trust with doctors and patients requires proving accuracy that is far higher than human performance. We trust humans much more than we trust AI systems. Autonomous cars may be statistically safer, but a single accident makes headlines!

So how do you build an AI-powered skin cancer identification system that is more accurate and more trusted than humans? And with deep learning and neural networks playing a significant role, how do we maintain explainability throughout the process?

We chat with Yaniv Gal from Kahu and Molemap to learn about the challenges they face in building such a system for use by doctors around the world.

PS. This conversation gets quite technical, but it's super interesting…

Justin Flitter

Founder of NewZealand.AI.

http://unrivaled.co.nz
