Artificial intelligence has already shown that it can transform the way tasks are carried out within the NHS.
However, to fully exploit the value of AI, action is needed from both organisations and their staff. Writing for digitalhealth.net, Rachel Dunscombe, CEO of the NHS Digital Academy and Director of Tektology, and Jane Rendall, Chief Executive of Sectra in the UK, examine what it will take to ensure that artificial intelligence is used safely within healthcare.
Several years ago, an NHS trust in the North of England began implementing AI in its practice, requiring hospital staff to take postgraduate courses in data science in order to understand how the algorithms worked. As is often the case in healthcare organisations, the trust had not developed a uniform approach to integrating algorithms or to applying the necessary supervision to their operation.
As a result, clinicians had to perform intensive clinical safety checks on the algorithms manually, which was time-consuming and labour-intensive and severely limited the organisation’s ability to scale up its use of AI.
It is essential to oversee AI
Although AI is distinguished by its ability to provide a degree of automation, it still needs to be supervised. Organisations and healthcare providers must be able to monitor its activity, and they need sufficient transparency about how an algorithm operates to ensure adequate oversight and to assess whether and when human intervention is required to improve its performance and guarantee safe use.
To do this, organisations need a scalable solution. Expecting doctors to complete a Master’s degree in data science is simply not sustainable; a standardised system for managing the life cycle of algorithms could be. While UK bodies such as NHSX are actively working to standardise AI management, this may not be sufficient: an internationally accepted approach will be needed to ensure scalable implementation.
The current lack of international standards on AI implementation needs to be addressed. If AI is to become part of routine practice, its use must be adapted to meet current needs for improved care while addressing growing gaps in workforce and skills.
An international standard of this kind could better inform developers before they begin to build algorithms, and it would also help health professionals apply those algorithms safely to specific populations.
Like the due diligence applied when adopting new medicines, AI should be governed by an internationally applied safety framework, but ideally without the years of waiting it can take to get important medicines to patients.
But how could an international approach to AI become a reality?
International consensus development must certainly involve rapid progress and communication, but it will also need to be guided by the pooling of insights from a range of sectors beyond health care.
Below are six suggestions that could help create a standardised model that would also support healthcare in accelerating safe implementation:
1. Clinical safety
Firstly, a necessary step to ensure the safe use of AI is to give hospitals the ability to review the clinical safety of an algorithm. This can be done by integrating AI into tools that organisations already use for this purpose, such as the systems that collect data on the performance of doctors and nurses. The interfaces of AI algorithms should feed into these same systems.
The idea is to monitor AI in the same way that we do for physicians or nurses.
The Royal College of Radiologists has conducted considerable research on how to help early-career health professionals progress. Creating similar feedback on the competency of AI could help identify more quickly where it has failed or been misinterpreted, and therefore where improvement is needed.
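The idea of monitoring an algorithm through the same kind of competency feedback used for clinicians could be sketched as follows. This is a minimal illustration, not any real NHS system: the `AlgorithmFeedback` record and `agreement_rate` function are hypothetical names chosen for the example.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: per-case feedback on an algorithm's output, analogous
# to the competency feedback collected for doctors and nurses.
@dataclass
class AlgorithmFeedback:
    algorithm_id: str
    case_id: str
    reviewed_on: date
    ai_finding: str         # what the algorithm reported
    clinician_finding: str  # what the reviewing clinician concluded
    agreed: bool            # did the clinician agree with the AI?

def agreement_rate(records: list[AlgorithmFeedback]) -> float:
    """Fraction of reviewed cases where clinician and algorithm agreed."""
    if not records:
        return 0.0
    return sum(r.agreed for r in records) / len(records)
```

Feeding such records into the systems that already track clinician performance would make a falling agreement rate visible in the same dashboards used for human staff.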
2. Locating bias
Another step would be to examine demographic data in depth in terms of age, gender, ethnicity and other factors, and determine where bias might exist. It is important for hospitals to know if there are people for whom an algorithm might work differently, or not work as reliably. This could be the case if the patient is a child and not an adult, for example. Ethnicity and a number of other factors may also be important.
If bias is detected, two options are available: either work to remove the bias from the algorithm, or define a set of acceptable pathways for the people for whom it will not work while continuing to use it for groups where no bias exists.
Either option raises practical and ethical questions, such as whether it is appropriate to manually exclude a person from an algorithm that will not work safely for them while continuing to use AI for the rest of the population.
In order to make algorithms work safely and answer these important questions, transparency is needed. This means that developers need to inform the organisations implementing their AI about the cohorts used to train the algorithms. This would allow health care providers to choose whether the algorithms match their cohort or whether there is a mismatch that they should be aware of before using AI with patients in certain demographic categories. They would then be able to make more informed decisions, choose to segment a cohort, population or capacity accordingly, or even choose a different algorithm.
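The subgroup check described above can be sketched in a few lines. This is an illustrative example only; the function names, the 0.9 accuracy floor and the 0.05 gap to the best-performing group are assumptions, not published thresholds.

```python
from collections import defaultdict

# Hypothetical sketch: per-subgroup performance check. `results` pairs each
# case's demographic group with whether the algorithm's output was correct.
def subgroup_accuracy(results):
    """Return {group: accuracy} from (group, correct) pairs."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += int(ok)
    return {g: correct[g] / totals[g] for g in totals}

def flag_bias(results, min_accuracy=0.9, reference_gap=0.05):
    """Flag groups whose accuracy falls below a floor or lags the best group."""
    acc = subgroup_accuracy(results)
    best = max(acc.values())
    return sorted(g for g, a in acc.items()
                  if a < min_accuracy or best - a > reference_gap)
```

Running this against, say, adult and paediatric cases would surface exactly the kind of mismatch the article warns about: an algorithm that works for adults but not reliably for children.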
3. Systematic demographic validation
Healthcare systems, including the NHS, usually buy technology locally before extending it to other locations. Where these technologies are sensitive to demographics, scaling up can present difficulties unless demographic validation is carried out correctly.
One local area may have only a couple of demographic minorities, while another may have a significant mix of ethnic minorities representing almost half the population. In such cases, and wherever the population changes (for example due to immigration, an extension of the service, or any other reason), a fresh demographic validation against new data sets is needed for AI to function optimally and safely for all.
There are too many demographic groups on the planet for this to be done all at once, so it needs to happen in stages: for an algorithm to be reliable for all, it must be validated each time it encounters new demographics. A tool that works safely in the UK may not work safely in Brazil or China. Bias detection validates the population of origin, which can differ significantly from a population where the technology is later deployed. So, for example, if a service in Mersey were extended to Manchester, it would have to be re-tested.
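One simple way to decide when such a re-test is triggered is to compare the demographic mix of the new site with the cohort the algorithm was trained on. The sketch below uses total variation distance with an assumed 0.1 threshold; both the threshold and the function names are illustrative, not part of any standard named in the article.

```python
# Hypothetical sketch: deciding whether an algorithm needs revalidation when
# it is extended to a new site. Each distribution maps a demographic group to
# its share of the population (values sum to 1.0).
def total_variation_distance(p: dict, q: dict) -> float:
    """Half the L1 distance between two demographic distributions."""
    groups = set(p) | set(q)
    return 0.5 * sum(abs(p.get(g, 0.0) - q.get(g, 0.0)) for g in groups)

def needs_revalidation(training_mix: dict, new_site_mix: dict,
                       threshold: float = 0.1) -> bool:
    """True if the new site's population differs enough from the
    training cohort that the algorithm should be re-tested."""
    return total_variation_distance(training_mix, new_site_mix) > threshold
```

A Mersey-to-Manchester extension, in this framing, would simply be a case where the distance between the two population mixes exceeds the agreed threshold.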
4. A “user manual” for AI
As already mentioned, sending physicians on data science courses is not a sustainable option. Yet there is currently no user manual, set of step-by-step instructions or plain list explaining what an algorithm does and how it does it. A standardised approach is needed to help health professionals better understand how AI actually works.
This explanation should include not only clinical terminology but also common performance measures found in sectors beyond healthcare. Just as a technology must obtain a CE mark certifying its use within the EU and in other countries where the mark applies, such an explainer could become common practice across the health, nuclear, aviation and a range of other sectors. The EU is already considering how this could be achieved, and it is positive that the discussion has begun.
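A standardised "user manual" could take the shape of a structured record, loosely in the spirit of a model card. The field names below are purely illustrative assumptions about what such a record might contain; no standard with these fields exists in the article.

```python
from dataclasses import dataclass

# Hypothetical sketch of a standardised "user manual" record for an algorithm.
# All field names are illustrative, not from any published standard.
@dataclass
class AlgorithmManual:
    name: str
    intended_use: str             # plain-language task description
    training_cohort: str          # who the algorithm was trained on
    known_limitations: list[str]  # groups or settings where it may underperform
    performance_measures: dict    # common metrics, e.g. {"sensitivity": 0.94}
    regulatory_status: str        # e.g. "CE marked", as discussed in the text

    def summary(self) -> str:
        return f"{self.name}: {self.intended_use} ({self.regulatory_status})"
```

A record like this would let a clinician see at a glance what a tool does, who it was trained on, and where it is known to fall short, without a data science degree.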
5. Clinical audit
A capacity for clinical audit must become the norm. Just as is already the case for doctors and nurses, it must be possible to document a range of information about an algorithm. If a case is brought before a coroner, or if an adverse incident occurs, it will be necessary to show how an algorithm contributed to the care given.
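In practice, that means writing an auditable record every time an algorithm influences a care decision. The sketch below, with assumed field names, shows one minimal shape such an entry could take: a single JSON line suitable for an append-only log.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: an audit entry recording how an algorithm contributed
# to a care decision, so it can be presented later (e.g. to a coroner).
def audit_entry(algorithm_id, algorithm_version, case_id,
                ai_output, action_taken, clinician_id):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "algorithm_id": algorithm_id,
        "algorithm_version": algorithm_version,  # version matters for audit
        "case_id": case_id,
        "ai_output": ai_output,        # what the algorithm reported
        "action_taken": action_taken,  # what happened as a result
        "clinician_id": clinician_id,  # who acted on (or overrode) it
    }
    return json.dumps(entry)  # one JSON line for an append-only log
```

Recording the algorithm version alongside the output matters: an incident review needs to know exactly which iteration of the model was in use at the time.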
6. Comparing performance over time
Radiology is an area where collaboration between AI and physicians is already common practice: the performance of an algorithm is often compared to that of a human in the field. This does not mean that AI replaces humans. But it can help healthcare organisations determine where and how best to deploy humans within the system. For fields facing significant workforce challenges in some countries, it could optimise the process in terms of time as well as the quality and effectiveness of diagnosis. From the patient’s point of view, if algorithms can report faster than a human doctor, the use of AI would also contribute to saving lives. This could be achieved, for example, by replacing human double reading with AI.
The issue here is to track and measure performance and outcomes where AI can make a difference. Measuring AI performance could have a great impact on operations and treatments and could free up human resources, giving clinicians more time to focus effectively on complex cases. Presenting such benefits to patients will help them gain confidence in algorithms at a time when most remain somewhat reluctant to embrace technological advances, and will demonstrate how AI is being used to improve healthcare.
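Tracking comparative performance over time might look like the sketch below: each reporting period, turnaround and accuracy are aggregated per reader, whether that reader is an algorithm or a human. The record layout and function name are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical sketch: tracking AI and human reporting performance over time.
# Each record is (period, reader, turnaround_minutes, correct).
def performance_by_period(records):
    """Return {(period, reader): (mean_turnaround_minutes, accuracy)}."""
    buckets = defaultdict(list)
    for period, reader, minutes, correct in records:
        buckets[(period, reader)].append((minutes, correct))
    out = {}
    for key, rows in buckets.items():
        n = len(rows)
        out[key] = (sum(m for m, _ in rows) / n,   # mean turnaround
                    sum(c for _, c in rows) / n)   # fraction correct
    return out
```

Comparing the "ai" and "human" rows month by month would show, for example, whether the algorithm is reporting faster without sacrificing diagnostic quality, which is exactly the trade-off a double-reading replacement decision depends on.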
AI as a solution to healthcare challenges
The value of AI has been recognised in large part because of the pandemic and the additional challenges it has brought to the health sector. But AI also has the potential to address long-established issues.
A crisis point is imminent unless AI is properly implemented to address these problems, particularly in areas such as radiology, where in some countries demand continues to grow by about 10% per year, while the number of trainees continues to decline.
But the situation is more complex than simply acquiring these algorithms. A standardised approach to algorithm lifecycle management is the key to successful implementation at the required pace.