A healthy image



In 5 seconds

Canada's radiologists are looking to artificial intelligence software to better analyze the data they collect. Do patients know what they're getting into?

An Tang

Credit: Caroline Campeau

Artificial intelligence (AI) software can analyze medical images, and the practice will become increasingly common, helping radiologists detect anomalies, diagnose disease, monitor cancer, and offer better prognoses.

But to work efficiently, AI algorithms need vast amounts of data, and seeking patients' consent to collect and use the information to improve healthcare can put a crimp in data collection. What are the challenges involved, and what are the risks to people's privacy?

This week, the Canadian Association of Radiologists (CAR) issued a white paper on the ethical and legal issues raised by the use of AI in their profession. Université de Montréal's associate professor of radiology An Tang chairs the CAR's AI working group.

What are the privacy risks of using AI in radiology?

The main risk is the unapproved, unauthorized secondary use of data initially collected for clinical care. When the data are later analyzed by researchers for a retrospective study, there is a risk, if the data have not been completely anonymized, that patients can be identified by subtle visual cues or by metadata contained in their files.
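The metadata risk Dr. Tang describes can be illustrated with a minimal sketch. The field names below are hypothetical stand-ins, not actual DICOM tags, and real de-identification requires dedicated tooling that also handles indirect identifiers and pixel-level cues:

```python
# Illustrative sketch only: field names are hypothetical, and production
# de-identification must also address indirect identifiers (dates, rare
# diagnoses) and visual cues burned into the image itself.
IDENTIFYING_FIELDS = {"patient_name", "patient_id", "birth_date", "address"}

def deidentify(record: dict) -> dict:
    """Return a copy of an image-metadata record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

scan = {
    "patient_name": "Jane Doe",
    "patient_id": "12345",
    "modality": "MR",
    "study_date": "2019-04-07",
}
print(deidentify(scan))
```

Stripping direct identifiers like this is only the first step; as the white paper notes, residual metadata and subtle visual features can still allow re-identification.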

So what is the CAR proposing to do about it?

We are saying there needs to be a framework for proper data security and anonymization and that the ethical issues around patient confidentiality need to be discussed. Right now, either patients provide explicit consent for a specific research project or institutional review boards waive consent for secondary use of data that have already been collected. We need to find a compromise between giving consent after the fact, which is impractical on a large scale, and waiving consent, which can lead to excessive disclosure. From our perspective, we anticipate that a transition from the current situation of "opt-in consent" to "opt-out", also known as "broad" or "presumed" consent, will facilitate the development of AI applications in healthcare.

Why is opting in a problem? Is it because patients don't really know what they are signing up for?

They do know what they are signing up for, but the needs of research may evolve over time. A patient may give his consent when he's in the hospital, but years down the road his data could be used for other, unforeseen research projects. Though the aim of these projects would be to improve healthcare for other patients, and no harm would come to the patient as long as his medical data has been properly de-identified or anonymized, the point is that the patient needs to be properly informed. Not only that, he needs to be offered the opportunity to withdraw his data should he wish.

Are there commercial issues involved, too?

Definitely. In our healthcare system, data are collected for the public good and paid for through people's taxes. But what if the use of that data for research winds up having a commercial benefit by leading to better AI software? Would patients and taxpayers and institutions get a slice of the pie? Would they get a licence to use the software? If a tech company made them pay for it, what would be a fair price given the decades of data collection, storage and curation paid for from the public purse?

What's the next step for this technology?

For AI software to be successful in radiology, the data or the working models derived from them must cross institutional and provincial boundaries. Prediction of disease works best when you can train with data from very large cohorts of patients, and we'll have to set up the proper infrastructure to validate it all. Since the early 2000s, the image file formats used by hospitals and clinics have been, in theory at least, compatible across Canada. Since medical data is unlikely to be shared between provinces, at least in the short term, strategies for learning across institutions must be developed. It’s a question of putting the infrastructure in place to make things possible.

About the CAR, the white paper and Dr. An Tang

"Canadian Association of Radiologists white paper on ethical issues related to artificial intelligence in radiology," by An Tang and 15 others in the CAR AI Working Group, was published April 7, 2019, in the Canadian Association of Radiologists Journal. Dr. Tang is a clinical associate professor of radiology in Université de Montréal's department of radiology, radio-oncology and nuclear medicine; a researcher at the Centre de recherche du Centre hospitalier de l'Université de Montréal (CRCHUM); and chair of CAR's AI Working Group.
