
PLOS Blogs: Speaking of Medicine and Health

Human Intelligence & Artificial Intelligence in Medicine: A day with the Stanford Presence Center

Last week, PLOS Medicine and PLOS ONE editors Linda Nevin and Meghan Byrne attended Human Intelligence & Artificial Intelligence (HIAI) in Medicine, a Stanford Presence Center symposium. HIAI brought together thought leaders in medicine, computer science and policy to envisage an inclusive, equitable and humane experience of medicine supported by AI solutions. A few highlights from the symposium are described here.

“Supervised learning is the ultimate example of ‘garbage in, garbage out’,” computer scientist and former Stanford President John L. Hennessy told the audience in his opening remarks at last Tuesday’s Human Intelligence & Artificial Intelligence (HIAI) in Medicine Symposium, hosted by the Stanford Presence Center. Dr. Hennessy was honored at the symposium for his recent Turing Award, but his talk stayed true to the Presence mission—championing human intelligence in medicine as artificial intelligence (AI)’s role in the clinic grows.

The Stanford Presence Center was founded by Abraham Verghese, Stanford's Vice Chair for the Theory and Practice of Medicine and a leading advocate for, and teacher of, the fully engaged physical exam. HIAI began building a shared agenda to steer AI toward its most beneficial implementation. Can AI be developed to augment the physician's intelligence, rather than attempt to replace it? Luminaries from medicine, computer science and policy shared their concerns and aspirations.

In the 1960s, Dr. Hennessy told attendees, scientists imagined that artificially intelligent machines would quickly replace humans in the most demanding and influential work. But while AI has become adept at classification, general intelligence in AI remains elusive. Put more plainly, an AI model can sort images of cats and dogs, but it cannot answer a simple question any 5-year-old child could: What makes that animal a cat?

Dr. Hennessy explained that AI is coming of age in medicine because we now have fast computers and big data. Eric Topol, Executive VP and Professor of Molecular Medicine at The Scripps Research Institute (TSRI), gave us a tour of the profusion of genomic, microbiomic, et ceteromic data now available. Despite this, Dr. Topol noted, published trials and prospective observational studies demonstrating the benefit of AI for diagnosis and prognosis are still lacking. In the near to mid term, Dr. Topol expects the maturation of helper applications: AI-based virtual medical coaches (he uses DayTwo, which provides personalized nutritional guidance to balance blood sugar), AI-guided home monitoring of stable patients, and computer vision in health care settings (tracking care safety and quality; more on this below).

Venture capitalist and former Special Assistant to the President for Health Care and Economic Policy Robert Kocher shares this expectation. He explained that health care investors are looking to AI-guided administrative tools like digital assistants for doctors (Robin, for example) and type 2 diabetes coaches (e.g., Virta) for a near-term return on investment. For more ambitious AI applications, including clinical decision support and personalized medicine, the business cases tend to be weak because such applications are less likely to increase revenues or margins. Dr. Kocher cited these business-model challenges, limited investment capital, and a dearth of large, clean, accurate datasets for training the machines as the key barriers to AI entrepreneurship.

The limitations of electronic medical records (EMR) were a recurring motif in talks at HIAI, and a quorum of nodding audience members appeared to regret the ways doctors “just lay down and played dead” (in the words of Dr. Topol), allowing confining and cumbersome EMR to invade medicine. In his talk, Lloyd Minor, Dean of the Stanford School of Medicine, posited that EMR frustration may underpin physician burnout. If AI’s entry into the clinic could mitigate the demands of EMR, perhaps physicians could offer their fuller presence.

Fei-Fei Li, Chief Scientist of AI/ML at Google, Director of the Stanford Artificial Intelligence Lab, and the first spotlight speaker, began with poll data showing that health care remains a top policy concern among U.S. voters. However, she emphasized, we don't focus on a key, remediable source of cost, morbidity, and mortality: the physical spaces where health care is delivered. Dr. Li believes that ICUs, EDs, ORs, hospital rooms, and assisted living facilities could be endowed with ambient intelligence: a different "AI," referring to an environment's sensitivity and responsiveness to the needs of people. Her hope is that ambient intelligence can save lives by preventing avoidable errors such as patient falls, hygiene-associated infections, and retained surgical sponges.

Dr. Li posited three factors needed to create ambient intelligence: transforming health care spaces with sensing ability (thermal and visual), AI-based visual understanding of human activities, and integration of available clinical data. Her talk demonstrated her team's success in tracking the movements of patients and clinicians in health care spaces, and drew chuckles from the crowd as she demonstrated a visual classifier correctly spotting a hammy actor taking a spill. A published description of her project can be found here.
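To make those three ingredients a little more concrete, here is a minimal, purely illustrative sketch in Python of how a fall-detection loop might tie them together. Everything in it (the frame stream, the activity classifier, the clinical-context lookup, and the alerting hook) is a hypothetical placeholder, not a description of Dr. Li's system.

```python
# Illustrative sketch only: how sensing, visual activity recognition, and clinical
# data might combine in an ambient-intelligence monitoring loop. The callables
# passed in (classify_activity, fetch_patient_context, raise_alert) are hypothetical.
from dataclasses import dataclass


@dataclass
class Alert:
    room: str
    activity: str
    note: str


def monitor_room(room, frames, classify_activity, fetch_patient_context, raise_alert):
    """Watch one care space: sense, interpret activity, and fold in clinical context."""
    context = fetch_patient_context(room)      # (3) integration of available clinical data
    for frame in frames:                       # (1) sensing stream (thermal/visual frames)
        activity = classify_activity(frame)    # (2) AI-based visual understanding
        if activity == "fall":
            raise_alert(Alert(room, activity, "possible patient fall"))
        elif activity == "bed_exit" and context.get("fall_risk"):
            raise_alert(Alert(room, activity, "high fall-risk patient out of bed"))
```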

In Q&A, Dr. Li was asked about her nonprofit organization, AI4ALL, an educational initiative that introduces under-represented high school students to AI. She explained her motivation: "There's a lack of diversity in AI. We know that AI will change the world, the question is who will change AI? If everyone isn't involved in developing AI, it will be biased."

The second spotlight speaker, cardiologist Rob Califf, who until recently served as the U.S. Commissioner of Food and Drugs, spoke about a question on many minds: how do we regulate AI in medicine? Dr. Califf described our time as the fourth industrial revolution (after steam, electricity, and information), one in which humans fuse the physical, physiologic, and digital spheres. He argued that AI cannot be regulated by the FDA in the "old-fashioned way," wherein FDA scientists study each technical development and provide approvals. First, he said, many of the medical data scientists who understand AI, and its accompanying risks, may continue to work in industry rather than government. Second, it could be impractical for each iteration of code to undergo an approval process; a new approval paradigm may be needed for AI.

The final talk served as a preview for the next symposium, AI in Medicine: Inclusion and Equity (AiMIE), on August 22. Margaret Levi, Director of the Center for Advanced Study in the Behavioral Sciences at Stanford University, spoke about the perniciousness of bias in our training datasets and, by extension, in AI. As a poignant example, she showed a video from MIT graduate student Joy Buolamwini, who demonstrated a facial analysis algorithm that could not detect her face because of her skin color. For the shaping of AI in health care, Dr. Levi underscored the importance of healing divides by expanding our "communities of fate": those with whom we perceive our interests are bound and with whom we are willing to act in solidarity.

The symposium, taken as a whole, seeded an optimistic vision in which AI allows the physician to focus on human connection in the clinic. In closing, Assistant Professor of Biomedical Informatics and HIAI moderator Jonathan Chen waxed enthusiastic: “[i]s the computer smarter than the physician? It’s irrelevant. Together they can provide something better than either could alone.”

The recorded talks are available here.

Linda Nevin, PhD is an Associate Editor for PLOS Medicine.

Meghan Byrne, PhD is a Senior Editor for PLOS ONE.

The authors declare no competing interests.

Feature Image Credit: rawpixel, Pixabay
