The introduction of machine learning programmes in the health sector has drastically improved the process of diagnosis, with algorithms trained on patients' reports finding what doctors may otherwise overlook, especially in Black patients.

Research on the use of AI to optimise health care develops algorithms through a form of automated med school. By collecting data from millions of X-rays and other medical records labelled by health care experts, the software reaches a point where it can accurately flag suspicious moles or lungs showing signs of Covid-19 without human input.

Earlier this month, a new study introduced a novel approach: training algorithms to read knee X-rays for arthritis using patients' reports of their own pain instead of doctors' scores. The results were striking. The findings suggest that radiologists may have a blind spot when it comes to reading Black patients' X-rays.

The trained algorithms proved more accurate at accounting for the pain of Black patients in particular, apparently by discovering patterns of disease in the X-rays that human readers tend to overlook.

Said Ibrahim, a professor at Weill Cornell Medicine in New York, invited radiologists and other doctors to re-evaluate their current strategies in light of the study's findings.

His background in health inequality research sharpened his reading of the findings, which point to the inequitable health care society still faces today. He is confident that algorithms created to reveal what doctors don't see, rather than merely mimicking their knowledge, could make health care more equitable, for instance by reducing disparities in who undergoes arthritis surgery. Black patients are around 40 percent less likely than others to receive a knee replacement, even though they are at least as likely to suffer from osteoarthritis. Ibrahim suggests that differences in income and insurance likely play a part, but so could differences in diagnosis.

Ziad Obermeyer, an author of the study and a professor at the UC Berkeley School of Public Health, agrees with Ibrahim about AI's potential to optimise health care. "The algorithm was seeing things over and above what the radiologists were seeing," he says. A medical puzzle inspired him to use AI to probe what radiologists weren't seeing.

Data extracted from a long-term National Institutes of Health study on knee osteoarthritis showed that Black patients and people with lower incomes reported more pain than other patients, even though radiologists scored their X-rays as similar. These differences might stem from physical factors unknown to knee experts, or from psychological and social differences; either way, the causes have so far eluded human doctors.

To help tackle this disparity, researchers from Stanford, Harvard, and the University of Chicago used the NIH data to create computer vision software that might reveal what human doctors are missing. Their algorithms were trained to predict a patient's pain level from an X-ray. Fed tens of thousands of images, the machine learning programme discovered patterns of pixels that correlate with pain.
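The study's exact pipeline isn't reproduced here, but the general recipe, training a convolutional network to regress a patient-reported pain score from an image, might look roughly like the sketch below in PyTorch. The ResNet-18 backbone, the 0-100 pain scale, and the random tensors standing in for real X-rays are all illustrative assumptions, not the researchers' actual code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical stand-ins: a real pipeline would load preprocessed knee
# X-rays and the pain scores patients reported on questionnaires.
xrays = torch.randn(32, 3, 224, 224)  # batch of 32 images
pain = torch.rand(32, 1) * 100        # patient-reported pain, assumed 0-100

# A standard convolutional backbone with its classifier head swapped
# for a single-output regression layer that predicts a pain score.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(10):  # a real run would loop over many epochs of data
    optimizer.zero_grad()
    predicted = model(xrays)
    loss = loss_fn(predicted, pain)  # error against what patients reported
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.2f}")
```

The key design choice the study describes is in the label: the network's error is measured against what patients said they felt, not against a radiologist's severity grade.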

How exactly did this discovery come about?

The software was then confronted with X-rays it had not seen before, using those learned patterns to predict the pain a patient would report experiencing.
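In code, that prediction step is just a forward pass with gradients disabled. This continues the sketch above, so the architecture and the tensor standing in for a new X-ray remain illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Same assumed architecture as in the training sketch.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()  # inference mode: freezes training-time behaviour

new_xray = torch.randn(1, 3, 224, 224)  # a hypothetical unseen X-ray
with torch.no_grad():                   # prediction needs no gradients
    predicted_pain = model(new_xray).item()
print(f"predicted pain score: {predicted_pain:.1f}")
```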

Those predictions proved more accurate at capturing patients' pain than the scores radiologists assigned to the same knee X-rays, particularly for Black patients.
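One plausible way to run such a comparison, not necessarily the study's own analysis, is to ask how much of the variation in reported pain each predictor explains within each patient group. The data below is synthetic, fabricated purely so the sketch runs; only the shape of the comparison is the point.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 1_000

# Synthetic stand-ins: reported pain, a radiologist severity grade
# (0-4, Kellgren-Lawrence-style), and an algorithmic pain prediction.
reported_pain = rng.uniform(0, 100, n)
radiologist_grade = np.clip(
    np.round(reported_pain / 25 + rng.normal(0, 1.5, n)), 0, 4)
algorithm_pred = reported_pain + rng.normal(0, 10, n)
group = rng.choice(["Black", "white"], n)

def variance_explained(predictor, outcome):
    """R^2 of a one-variable linear fit: how well the predictor tracks pain."""
    X = predictor.reshape(-1, 1)
    fit = LinearRegression().fit(X, outcome)
    return r2_score(outcome, fit.predict(X))

for g in np.unique(group):
    mask = group == g
    print(g,
          "radiologist R^2:",
          round(variance_explained(radiologist_grade[mask].astype(float),
                                   reported_pain[mask]), 2),
          "algorithm R^2:",
          round(variance_explained(algorithm_pred[mask],
                                   reported_pain[mask]), 2))
```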

The study highlighted that AI can learn to detect signs of pain that radiologists have previously overlooked. "The algorithm saw things beyond what radiologists saw, things that are more often causes of pain in Black patients," said lead researcher Obermeyer.

One theory for why radiologists are less proficient at assessing knee pain in Black patients is that the standard classification used today was developed on a much less diverse population than that of the modern United States. The system has its origins in a small study carried out in 1957 in an industrial town in northern England, and it could not take sufficient account of today's multicultural society. At that time, diagnosis was a one-size-fits-all process, with doctors relying primarily on what they saw, such as cartilage shrinkage, to assess the severity of osteoarthritis.

Since then, X-ray equipment, lifestyles, and many other factors have evolved, and Obermeyer is not surprised that this method cannot contend with the diversity doctors see in the clinic today.

The study does not only show what AI can do when shaped by patient feedback rather than expert opinion; it also suggests that medical algorithms should perhaps begin to be seen as a cure for bias rather than a cause of it.

In fact, in 2019, researchers including Obermeyer demonstrated that an algorithm guiding care for millions of U.S. patients prioritized white patients over Black patients for assistance with complex conditions such as diabetes.

While Obermeyer's new study shows how algorithms can correct disparities in diagnosis, how the algorithm does it, and what it sees, remains imperceptible to the human eye. Neither he nor the algorithm can explain what it detects in X-rays that doctors do not.

These algorithms were built with artificial neural networks, a technology that has greatly improved a range of AI applications but that experts do not yet fully understand. Neural networks are so difficult to reverse-engineer that researchers call them "black boxes".
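Researchers do have partial tools for peering into these black boxes. One common, if imperfect, technique, not specific to this study, is gradient saliency: measure how sensitive the model's pain prediction is to each input pixel, and map where it is "looking". The model and input below are the same illustrative assumptions as in the earlier sketches.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Same assumed regression model as in the earlier sketches.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

xray = torch.randn(1, 3, 224, 224, requires_grad=True)  # hypothetical X-ray
prediction = model(xray)  # a single predicted pain score
prediction.backward()     # gradient of that score w.r.t. every pixel

# Collapse the colour channels: large values mark pixels whose change
# would most move the predicted pain score.
saliency = xray.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print("most influential pixel value:", saliency.max().item())
```

Maps like this hint at where a network attends, but they do not say why a region matters, which is exactly the gap the researchers below are trying to close.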

Judy Gichoya, a radiologist and assistant professor at Emory University, aims to understand what the knee algorithm knows mainly through human work and ingenuity, by gathering a larger and more diverse collection of X-rays and other medical data on which to test its performance.

By comparing detailed X-ray notes made by radiologists against the results of the pain-prediction algorithm, Gichoya hopes to uncover clues about what it really detects. She is convinced the answer is not beyond human grasp, saying "It may be something we see, but in the wrong way".
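In practice, that comparison could start as simply as joining the two sources of annotations and asking which radiologist-noted findings co-occur with high predicted pain. The table columns and values here are invented for illustration; they are not Gichoya's actual data or method.

```python
import pandas as pd

# Hypothetical radiologist notes, one finding per X-ray.
notes = pd.DataFrame({
    "xray_id": [1, 2, 3, 4, 5, 6],
    "finding": ["osteophyte", "joint_space_narrowing", "none",
                "osteophyte", "none", "joint_space_narrowing"],
})

# Hypothetical pain scores from the prediction algorithm, same X-rays.
predictions = pd.DataFrame({
    "xray_id": [1, 2, 3, 4, 5, 6],
    "predicted_pain": [72.0, 55.0, 18.0, 64.0, 23.0, 61.0],
})

# Join the tables and see which noted findings track high predicted pain.
merged = notes.merge(predictions, on="xray_id")
print(merged.groupby("finding")["predicted_pain"].agg(["mean", "count"]))
```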
