Do data and artificial intelligence (AI) have the power to change the world? In the healthcare space, they are revolutionising how disease is detected, diagnosed and treated, as well as how we care for ourselves and others.
AI is helping to detect and diagnose strokes, cancer and even difficult-to-diagnose diseases like fibromyalgia, a condition in which there is no apparent medical cause for the pain and fatigue felt by sufferers. A diagnosis of fibromyalgia used to take, on average, seven years and appointments with ten different specialists. But last year researchers used AI to differentiate the brain scans of those with fibromyalgia from those without the disease – within minutes and with 93% accuracy. Beyond diagnosis, using AI to decode the brain signature for fibromyalgia could help doctors to understand the disease better and work out which treatments will work for which patients.
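At its core, this kind of diagnostic AI is a binary classification task: a model learns to separate "patient" scans from "control" scans. The sketch below is purely illustrative and is not the researchers' actual method; it trains a linear classifier in Python with scikit-learn on synthetic feature vectors standing in for real brain-scan data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for features extracted from brain scans:
# 200 "scans", 50 numeric features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)   # 1 = fibromyalgia, 0 = control (labels)
X[y == 1] += 0.8                   # inject an artificial class-dependent signal

# Hold out a quarter of the data to estimate accuracy on unseen scans.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train a linear support-vector classifier and report held-out accuracy.
clf = SVC(kernel="linear").fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

In practice the hard work lies in turning raw scans into informative features and in validating the model on independent patient cohorts; the classifier itself is the simple part.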
Data and AI are also empowering individuals to take charge of their own health. Connected wearable devices like Fitbit and Apple Watch, in conjunction with a range of health-focused apps, can monitor our heart rate, blood pressure, blood sugar levels and more, can track our eating and sleeping patterns, and can help us manage conditions ranging from depression to diabetes. Genetic testing services use algorithms to decode our DNA to uncover any predispositions to certain diseases, helping us to better understand and shape our future health.
But the use of these technologies also raises fundamental questions about data ownership and privacy, and whether we are happy turning decision-making over to the machines.
Knowledge is power
Health insurance providers are using the data collected by wearables – as well as publicly available and purchasable information like shopping records and social media profiles – to gain a better understanding of our lifestyles and encourage us to be healthier. Some are nudging us with the promise of discounts and other incentives if we give them access to this information. Similarly, information from genetic testing about our predispositions gives us the chance to modify our behaviour and work with our doctors with the aim of preventing disease.
This is presented as a win-win situation: people are healthier, and insurers pay out less in medical expenses. But what if the information collected is used not to reward but to penalise, or to discriminate and deny treatment or coverage?
What if you tell your doctor and insurer that you have given up smoking and joined a gym, but then they see from your shopping records that you’re still buying cigarettes (or you’re spotted in a picture on Facebook with a cigarette in hand) and they can tell from your Fitbit that you are not using that gym membership? Will your premiums go up? Will your insurer refuse to cover your treatment for heart disease because you knew you had a genetic predisposition but continued to smoke and avoid exercising?
“There is a definite increase in data-driven product activity in the healthcare sector,” says Alastair Moore, Head of Analytics and Machine Learning at Mishcon de Reya. “This ranges from Babylon Health’s ‘GP at Hand’ app, used for video appointments, to John Hancock, one of the largest North American life insurers and partner of Vitality in the UK, announcing that it will only sell ‘interactive’ policies that track fitness through the use of personal devices. There are certainly huge benefits to be realised in increasing access to and quality of healthcare, but there are also many potential risks. Data inaccuracies, intentional gaming of health systems, bias and data breaches would, for example, have significant negative consequences for patients.”
Who’s in charge?
Deciding how such information is used was once the domain of humans. But as we increasingly turn over the collecting, filtering and analysing of data to AI, are we ceding control to the machines?
Whether it is deciding which patient receives a heart transplant or whether insurance should cover the cost of an expensive experimental treatment, the algorithms making the decisions have been coded by humans with human values. But because we lack a clear, agreed collective value system, what we are teaching the machines can also be unclear – sometimes leading to unexpected consequences.
Big data and AI have the potential to revolutionise healthcare, but we must think carefully about the consequences for society. Ethical considerations need to keep up with technology.