AI is potentially one of the most powerful agents for positive change we have. Fairly or not, technology is often seen as a dehumanising force in healthcare, but AI has the potential to automate many administrative tasks, freeing up caregivers to focus on providing human-to-human care. But is AI invoked too often to solve problems that could be solved by less complicated means?
This year’s CES product launches all seemed to be AI-powered. ‘It’s got AI in it, so you know it’s cutting edge.’ The technology is now becoming ubiquitous, deployed in a wide range of products from smart vacuum cleaners to doorbells.
This raises a question: should we really be using this technology so much? Not because it could become sentient and turn on us, but because it brings with it an unprecedented level of complexity that many organisations are not equipped to deal with: data management and security, the ‘black box effect’, uncertainty, and a growing mistrust among consumers.
As a result, ‘Brand AI’ could find itself in a delicate spot in the near future.
This could hinder the application of machine learning to solving more important problem areas. So how should we think about the application of AI to healthcare?
In the ‘Trust and impact of AI on Health Care’ panel discussion, Jesse Ehrenfeld, Chair of the Board of Trustees of the American Medical Association, put it succinctly:
“At the centre of the interface between human and machine is trust.”
AI and trust do not go hand in hand, and yet their destinies are inextricably linked. Without trust, AI and the positive change it can bring will advance more slowly than we need, and we need it fast. As Vivian Lee of Google’s Verily highlighted in the ‘Rules of Contagion’ session, “we need new ways to drive efficiencies or America can’t afford its own healthcare.” The same goes for the rest of the world.
So how do we tackle the issue of Trust vs. AI in the interest of positive change?
Also in the ‘Trust and impact of AI on Health Care’ panel, Pat Bird — Senior Regulatory Specialist at Philips — offered up three useful principles for thinking about trust and AI:
1. Tech trust — Does the application justify the use of AI?
2. Regulatory trust — Are the right rules in place to prevent misuse and damage?
3. Human interaction trust — If the UI/UX is poor, people won’t engage with it or trust it.
Bird’s first principle is self-explanatory but powerful. Does the problem to be solved justify the use of the technology? (A mantra that should be applied everywhere.) Can you (the organisation) manage the technology and data in a sensible and safe way? Does it deliver true value that justifies the use? Is the technology reliable and provably so?
When it comes to the critical issue of regulatory trust, Bird explained that although this may seem like a new frontier, it is not a blank page. We already have some models and processes from other industries that would enable good practice and instil greater trust.
He used the example of good practice when taking tissue samples and applied it to the world of data capture:
Find good tissue (data) samples
Store them safely
Dispose of them before they are out of date
Test before use
Once used, dispose of them safely
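Bird’s tissue-sample analogy maps naturally onto a data-handling lifecycle. As a loose illustration only (every class and method name below is hypothetical, not drawn from the panel or any real product), the steps above might be sketched as:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataSample:
    """One captured record, with an explicit shelf-life."""
    payload: dict
    collected_at: datetime
    shelf_life: timedelta = timedelta(days=90)  # illustrative default

    def is_expired(self, now: datetime) -> bool:
        # "Dispose of them before they are out of date"
        return now - self.collected_at > self.shelf_life

    def passes_quality_check(self) -> bool:
        # "Test before use" - here, a trivial completeness check
        return all(v is not None for v in self.payload.values())

class SampleStore:
    """Toy 'safe storage' that purges stale samples before use."""

    def __init__(self):
        self._samples: list[DataSample] = []

    def add(self, sample: DataSample) -> None:
        # "Find good (data) samples" - only store what passes the check
        if sample.passes_quality_check():
            self._samples.append(sample)

    def usable(self, now: datetime) -> list[DataSample]:
        # Dispose of expired samples, return only what is still fresh
        self._samples = [s for s in self._samples if not s.is_expired(now)]
        return list(self._samples)
```

The point is not the code itself but the discipline it encodes: quality and expiry checks are enforced at the boundary, so stale or incomplete data never reaches the model.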
Bird’s third tier of trust echoes user-centred design principles and calls for more empathetic design of digital solutions. So, how exactly do you design product UX and UI to engender user trust?
Here are some principles I use routinely:
Transparency is key
Regular, clear communication in terminology your intended audience will understand is crucial. Explain how the user’s data will be processed, where it will be stored and how it will be used.
If the intent is opaque and the solution is uncanny or in some cases creepy, trust quickly vanishes.
Data flow is a transaction
If you are capturing patient data there is a growing expectation for you to deliver explicit value as a result. Patients / consumers are increasingly aware of the value of their data and expect a return. Users give up some of their privacy in return for services, much like a financial transaction.
Beware of the black box syndrome — A problem unique to AI arises when we don’t know how the system came to its solution, because we (humans) can’t actually think in as many dimensions as a machine. How to trust the black box, and even the ethics of using such a system, are still up for debate. I would revert to Bird’s first principle here and ask whether the ends justify the means.
Context is vital — This is true both in terms of justifying the use of AI and when gathering and using the data that powers it. Data is abstract, but must always be gathered and interpreted with the real-world context in mind. Is the data representative? Could it be biased or compromised? Does it have a shelf-life? Do we really need to collect it? Has it been explicitly given?
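The questions above can be made operational as a pre-use audit of a dataset. This is a minimal sketch with hypothetical record fields and illustrative thresholds (none of it is clinical guidance or from the panel), just to show that "representative, fresh, explicitly given" can be checked mechanically:

```python
from collections import Counter
from datetime import datetime, timedelta

def audit_records(records, now, max_age=timedelta(days=365),
                  min_group_share=0.1):
    """Return a list of human-readable issues found in the dataset."""
    issues = []

    # Shelf-life: flag the dataset if too many records are stale
    stale = sum(1 for r in records if now - r["collected_at"] > max_age)
    if stale / len(records) > 0.2:  # illustrative threshold
        issues.append(f"{stale} of {len(records)} records are stale")

    # Representativeness: flag groups that are barely present
    groups = Counter(r["group"] for r in records)
    for group, count in groups.items():
        if count / len(records) < min_group_share:
            issues.append(f"group '{group}' under-represented ({count})")

    # Explicit consent: every record should have been knowingly given
    if any(not r.get("consented") for r in records):
        issues.append("records without explicit consent present")

    return issues
```

An empty result is not proof the data is sound, but a non-empty one is a cheap, automatable reason to pause before training.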
When it comes to healthcare, “context” should include all stakeholders, both on the clinical and the patient side. As Jesse Ehrenfeld said: “Having clinical input into the creation of these new tools is vital.”
Perhaps more pragmatically, as ‘Trust and Impact of AI on Health Care’ panellist Christina Silcox from the Duke-Margolis Center for Health Policy put it,
“build AI that deserves trust.”
Thank you to Matt Millington for his thoughts.
Read the series:
Interoperability and the quantified environment.
Patient centricity is everything — or is it?
To find out more about TTP visit ttp.com