
Augmented Intelligence in Healthcare: Interview with Arthur Papier

The potential of using machine-learning techniques in medicine is immense. As electronic health records have become widely available, there is hope that machine learning will improve diagnosis and care. However, integrating these new methodologies into medical practice is challenging: new methods must meet healthcare standards, for example on physician accountability and patient privacy, and must integrate smoothly into clinical decision-making.

We had the pleasure of speaking to Arthur Papier, who has been working on problems like these for decades. A dermatologist by training, he started working with electronic health records in the 1980s and launched a clinical decision support tool called VisualDX at the turn of the millennium. VisualDX helps physicians explore the full range of diagnostic possibilities through visual clues. The tool combines a searchable database linking symptoms and findings to diagnoses with images of how each disease presents on the skin, eyes, and mouth, and in radiography.

Arthur Papier will be speaking about his experiences with clinical decision support systems, and about the opportunities and challenges of machine learning in healthcare, at the Human Intelligence & Artificial Intelligence in Medicine Symposium at Stanford on 17 April. Registration for the event is still open.

PLOS ONE, in collaboration with PLOS Medicine and PLOS Computational Biology, is currently calling for papers in Machine Learning for Health and Biomedicine.

Interview with Arthur Papier

How did you come to work on clinical decision support systems?

In the 1980s, I had the great fortune to work with Lawrence Weed, the physician who invented the problem-oriented system for recording patient information known as SOAP notes. Dr Weed had realized in the 1960s that patient records were not only illegible but also so poorly organised that they impeded clear thinking. With Dr Weed, I worked on software as a strategy to standardize the way histories and information are gathered from patients. I then went to Rochester, New York, the home of Kodak, which had invented the first digital camera. This presented an opportunity to combine Dr Weed's ideas with concepts for visualizing medical information. We started developing prototypes of our clinical decision support systems before the Internet era, in the 1990s. In 1999, once the Internet had taken hold, we founded VisualDX as a company and launched the first product in March 2001, right at the dawn of digital information use.

Our core mission is to improve point of care decisions. Our company focusses on the idea that you can’t memorize it all. Rather, you need to develop information systems that standardize knowledge to support physicians.

In view of emerging machine-learning technologies for clinical applications, how do you think clinical decision support systems will improve over the next decade?

Machine learning is incredibly exciting; there is a tsunami of interest worldwide. I think that machine learning is going to keep advancing with more and better data. But the crucial question is what you use those machine-learning methods for. In healthcare there are fundamentals to the way patients are assessed. This process doesn't follow strict laws like Newtonian physics; rather, information systems must account for greyness and ambiguity. We think that, while machine learning is going to augment what we do and will improve specific tasks, it will not take over medical thinking and medical problem solving completely.

For VisualDX, we have focussed on using machine-learning methods to solve very specific problems, for example the fact that non-dermatologists don't have the same visual knowledge as dermatologists and therefore cannot describe a rash as well. We have trained our machine-learning method to augment the ability of general practitioners to describe the features of a rash and to identify diagnostic possibilities. The method doesn't go straight to a diagnosis; rather, it helps doctors organise their thinking around possibilities, as the sketch below illustrates.
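As a loose illustration of that design choice (not VisualDX's actual implementation; the labels and scores below are entirely hypothetical), a decision-support tool can surface a ranked differential from a classifier's output instead of committing to the single most likely label:

```python
# Toy sketch: return a ranked list of diagnostic possibilities (a differential)
# rather than a single argmax diagnosis, keeping the clinician in the loop.
# All diagnoses and probabilities here are made up for illustration.

def ranked_differential(probabilities: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the top-k candidate diagnoses with their scores, highest first."""
    return sorted(probabilities.items(), key=lambda item: item[1], reverse=True)[:k]

# Hypothetical classifier output for a rash image:
scores = {"psoriasis": 0.41, "eczema": 0.35, "tinea corporis": 0.18, "lichen planus": 0.06}
print(ranked_differential(scores))
# [('psoriasis', 0.41), ('eczema', 0.35), ('tinea corporis', 0.18)]
```

The point of the design is visible in the output: the second-ranked possibility is nearly as likely as the first, which is exactly the kind of ambiguity a physician, not the machine, should weigh.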

Machine-learning methods will get better and better, but they need to plug into a way of thinking, an existing structure. In our view, machine learning is about augmenting intelligence rather than artificial intelligence. It's going to make us sharper and become a window onto things that we couldn't see before, but it's going to be part of a system of care and a system of thinking.

What are the biggest challenges for using machine-learning assisted tools in a clinical setting?

Apart from the technical challenges, there are cultural, legal, and societal challenges. How do clinicians and patients interact with the tool, in particular when it comes to legal questions of responsibility? If you have a system with a powerful machine-learning technique that goes straight to a diagnosis and the doctor cannot check it, is the manufacturer then legally liable when a mistake happens? We are far from understanding how we're going to handle those sorts of societal questions. There is also the issue of de-skilling people: when people over-rely on machines, they lose skills. In our work, we're trying to use machine-learning methods to achieve a teaching and training effect rather than de-skilling people.

How could collaboration between machine-learning researchers, clinicians, and clinical researchers be improved?

What I’ve learned from working with engineers over many years is that there is a huge difference in thinking between doctors and engineers. Engineers always look for very discrete answers. But a typical thing for a doctor to say is: “I don’t know yet, the answer is going to fall out over time. It’s ambiguous, it’s grey.” If engineers are going to be successful in healthcare, I recommend that they work with physicians to understand how we solve problems and how difficult it can be to make decisions with the limited data we have.

For many machine learning applications in healthcare, including decision support systems, availability of data is very important. What do you think are the most promising approaches for reconciling patient privacy with data sharing?

We must be completely transparent with patients and obtain informed consent for the use of their data in research. If you explain to patients how you're going to use their data, most will want to participate to improve care. It's very important for healthcare systems to engage patients in that conversation. A positive development in this regard is apps, such as one Apple is offering, that allow patients to load their medical data directly from hospital servers onto their phones. The tech company is not in the middle, and the health information doesn't go through its servers. Patients then have complete control over who gets permission to use their data. These examples are promising, but they take a commitment to privacy: leading tech companies will have to stay committed to privacy to make research possible.
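For readers curious what that direct, patient-controlled flow can look like in practice, here is a minimal sketch assuming the hospital exposes a standard FHIR API (the approach Apple's Health Records feature builds on); the endpoint and token below are placeholders, not details from the interview:

```python
# Hypothetical sketch of the direct data flow described above: the patient's
# device talks to the hospital's own FHIR server, authorized by a token the
# patient granted; no third-party server sits in between.
import requests

FHIR_BASE = "https://hospital.example.org/fhir"  # hospital's server (placeholder)
token = "patient-authorized-oauth-token"         # obtained via OAuth on the device

response = requests.get(
    f"{FHIR_BASE}/Patient/example/$everything",  # standard FHIR operation for a full patient record
    headers={"Authorization": f"Bearer {token}", "Accept": "application/fhir+json"},
    timeout=10,
)
records = response.json()  # stored locally on the phone, under the patient's control
```

Because the token is issued to the patient and the request goes straight to the hospital, the app vendor never sees the health data in transit, which is the privacy property the answer above emphasizes.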

How can we give patients a role in the discussion about the ethics and implications of automated clinical decisions, use of data and artificial intelligence in medicine?

I think that this discussion will need to involve physicians and their patients alike. Academic centres are going to lead the way in involving physicians and in setting standards on how to be transparent with patients. The discussion is going to take time but with the right leadership in the academic centres and the tech companies we’ll get there.

Image Credit: National Cancer Institute
