It might seem like A.I. technology is poised to replace every aspect of healthcare, given its ability to illuminate medical conditions in unconventional ways and forecast public health crises. However, this assumption is far from the truth. In reality, the human factor remains very much relevant, and it is exactly what A.I. technology lacks. We can expect many things from these algorithms, but there are some we simply cannot. Below are 7 things we most certainly will not see in the next decade.
1. Replacement of medical professionals with A.I.
As intelligent and capable as it is, A.I. will not be replacing medical professionals. The fear of A.I. taking over is felt across almost all industries, and healthcare is no exception. While it is true that such software will become an integral part of the healthcare system, it will function as a tool that helps human medical experts perfect their craft, not as their replacement.
One of the reasons is that competent professionals are paramount for operating such advanced technologies and interpreting their analyses. In addition, solving multi-layered challenges that require creativity will always be a job for human healthcare workers. However, those medical professionals who do not work with A.I. will most likely be replaced by the ones who do.
2. A.I. providing the same empathy as a real-life doctor-patient relationship
Eventually, A.I. will likely be supplied with “artificial empathy” skills, but that does not mean it will replace real-life doctor-patient relationships. On the contrary, compassionate care will be reinforced more than ever before in the age of A.I.
A.I. tools can take over repetitive and monotonous tasks, freeing up a doctor’s time and allowing them to spend more of it with their patients. Treating patients requires nuanced empathy and compassion that only the human touch can provide, and doctors will be able to devote more time to exactly that. In fact, with administrative tasks and diagnostic insights handled by A.I., physicians will need to sharpen their empathy and communication skills to attend to their patients even better than before.
3. A.I. improving privacy-related issues in healthcare
This expectation completely disregards the way artificial intelligence algorithms actually function. They feed on data and fail without it. That data often comes from patients, and with Big Tech companies investing heavily in healthcare A.I., they will continue to retrieve ever more of it.
Google, Amazon and Facebook are all involved in the healthcare A.I. race and show no sign of slowing down. In 2019, Amazon and the NHS collaborated to have the A.I.-based Alexa assistant offer health advice. Reports, however, revealed that Amazon secretly stored recordings of Alexa users. It was also uncovered that the Royal Free NHS Foundation Trust shared large amounts of patient data with Google’s DeepMind A.I. branch in order to develop a new platform. Patients were not even informed that their data was being used for this purpose. A.I. will only make these privacy issues worse.
4. A.I. creating completely autonomous surgical robots
This misconception could be the result of the science-fiction movies we all love so much. In Star Wars: Episode III – Revenge of the Sith, we see robots autonomously operating on a heavily burnt and amputated Anakin Skywalker. The robots perform life-saving reconstructive surgery on Anakin, enclosing him in the iconic Darth Vader suit without any human supervision. Various other sci-fi movies have similar scenes of fully autonomous A.I. surgical robots performing complex operations with no assistance from humans. Most likely, this will all remain within the realm of science fiction, as robots will keep serving as assistants, aids or even equipment in the operating theatre.
Although robots can assist in basic tasks, such as handing over surgical tools, or serve as precision instruments themselves, as the da Vinci surgical system does, surgeries in general require fine motor skills and even creative thinking due to the differences between individual anatomies. They need human surgeons to run smoothly, and it is those human surgeons who can handle unforeseen complications while the surgery is ongoing.
5. A.I. making medical decisions on their own
It sounds tempting to have A.I. not only help in clinical decision-making but also make medical decisions entirely on its own. Morally challenging decisions take a large psychological toll on medical staff, so delegating them to A.I. could help alleviate that burden. During the COVID-19 pandemic, with facilities overwhelmed, medical professionals had to decide whom to prioritise for life-saving medical resources. Would it take the responsibility off healthcare workers if we left such decisions up to some software?
Letting software make medical decisions is not the best course of action, even if it can mine medical records and genomic data for the best insights. Such decisions require the collective input of ethicists, programmers and medical professionals, so leaving them up to software alone raises many complications. Furthermore, as the next point shows, A.I. is not immune to bias. A.I. will continue to provide insights that help human professionals take better actions, but it will not be making any decisions on its own.
6. A.I. without bias
Artificial intelligence is certainly not immune to bias, so expecting unbiased decisions from it is unrealistic. The dataset an A.I. is trained on is crucial: as Quartz puts it, healthcare data is “extremely male and extremely white”. This raises serious concerns when A.I. tools are used to analyze data from outside this demographic.
Programmers working on A.I. software can, consciously or unconsciously, influence an A.I., and the datasets fed to algorithms are full of ingrained social injustices. Programmers can unwittingly code their own values and beliefs about the world into the software, or exclude parameters that would better represent other demographics and populations.
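A toy simulation can make this concrete. The sketch below is purely illustrative: all numbers are made up, and the “model” is just a single biomarker threshold tuned on a well-represented Group A. When the same threshold is applied to an underrepresented Group B, whose healthy baseline happens to differ, accuracy drops sharply, even though nothing about the code is malicious.

```python
import numpy as np

# Hypothetical illustration of dataset bias: a diagnostic threshold
# tuned on one demographic transfers poorly to another.
rng = np.random.default_rng(42)

# Group A (well represented):  healthy biomarker ~ 1.0, sick ~ 2.0
# Group B (underrepresented):  healthy biomarker ~ 1.6, sick ~ 2.6
a_healthy = rng.normal(1.0, 0.2, 1000)
a_sick    = rng.normal(2.0, 0.2, 1000)
b_healthy = rng.normal(1.6, 0.2, 1000)
b_sick    = rng.normal(2.6, 0.2, 1000)

def accuracy(healthy, sick, threshold):
    # Correct = healthy values below the threshold plus sick values at or above it.
    correct = np.sum(healthy < threshold) + np.sum(sick >= threshold)
    return correct / (len(healthy) + len(sick))

threshold = 1.5  # "trained" (tuned) on Group A only

print(accuracy(a_healthy, a_sick, threshold))  # near-perfect on Group A
print(accuracy(b_healthy, b_sick, threshold))  # far worse on Group B:
                                               # many healthy patients flagged as sick
```

The model never sees a demographic label, yet it still discriminates, because the data it was tuned on did not represent everyone it is later applied to.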
7. A.I. reasoning like humans
A.I. does not have consciousness and cannot comprehend human reasoning, especially in regard to healthcare. This is why, although smart algorithms will indeed execute tasks and fulfil the roles assigned to them, they will not be able to reason the way humans do.
Evidence from several studies on adversarial attacks against A.I.-based algorithms proves this. When images were tweaked in ways barely perceptible to the human eye, the software misclassified diagnoses that humans would not. Such simple tricks can immensely influence A.I. but not humans, underlining the importance of a kind of human reasoning that is non-existent in A.I.
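The mechanism behind such attacks can be sketched in a few lines. The example below is a deliberately simplified stand-in, not a real medical classifier: a logistic model over a flattened 8×8 “image” with made-up weights. Nudging every pixel by at most 0.1 in the direction the model is most sensitive to swings its output from undecided to highly confident, the same principle adversarial attacks exploit.

```python
import numpy as np

# Stand-in for a trained classifier: logistic regression on 64 "pixels".
# The weights are illustrative, not from any real model.
w = np.linspace(-1.0, 1.0, 64)

def predict(x):
    # Probability assigned to the "disease" class.
    return 1.0 / (1.0 + np.exp(-(w @ x)))

x = np.full(64, 0.5)            # the weights sum to zero, so the score is 0.5
eps = 0.1                       # tiny per-pixel budget, invisible to a human

# Adversarial step: push each pixel slightly in the direction that most
# increases the score (the gradient of w @ x with respect to x is just w).
x_adv = x + eps * np.sign(w)

print(predict(x))               # 0.5 — the model is undecided
print(predict(x_adv))           # ≈ 0.96 — confidently "disease"
```

No pixel moved by more than 0.1, a change a human reader would never notice, yet the model’s verdict flipped from a coin toss to near-certainty.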
The age of A.I. and how to prepare for it
As the A.I. field continues to evolve, so will the application of its technology in healthcare, and so will the list of expectations we simply cannot have of this software.
In order to prepare for the age of A.I., it will become more and more important to understand the technology and how it “thinks”. Whether it is learning a programming language or playing games like chess or Go, you will be better prepared to deal with upcoming A.I.-related issues when you understand A.I.’s language: the language of anticipation.