The NHS, DeepMind and a case of mystified priorities...


Next year the National Health Service (the public healthcare service in the UK) will celebrate its 70th birthday, but will it be the version of the NHS that its creators imagined? The NHS is being pulled in all directions: the government pulling at one side (with its lack of spending) and Big Pharma pulling at the other (attempting to take the NHS to court over the use of cheaper drugs). So how much longer can the NHS take all of this? With austerity pressuring the NHS into reducing its spending on medications, the NHS is turning to new, innovative and cheaper therapeutics, along with new methods for monitoring patients, albeit by a controversial route: its deal with Google's AI (artificial intelligence) company, DeepMind. Does the NHS think AI is the answer to all of its problems? Most likely, yes, because the NHS has placed a mountain of trust in DeepMind, giving the company access to a shocking 1.6 million patients' health data, without their consent.

Cheaper drugs vs. Big Pharma

By 2020 the NHS is expected to save £22 billion, with the government agreeing to add £8 billion to NHS spending. To do this it will stop buying expensive drugs from pharmaceutical companies in favour of cheaper options, to the dismay of the pharmaceutical industry, which has tried and failed to take the NHS to court over the decision. This tactic could be seen as a way of pressuring the government into increasing NHS funding so that, in turn, the NHS will spend money on drugs from pharmaceutical companies. For the NHS, however, it is a means of protecting itself from unnecessary overspending. In theory it's a good idea, but in reality it puts patients' lives at risk by preventing them from receiving the treatment they need. For example, the NHS has rejected a third of new cancer therapeutics due to "cost-efficiency concerns", and it has also had to "ration a new groundbreaking cure for hepatitis C" to only the patients with the harshest cases (~200,000 people), leaving the rest of the hepatitis-carrying community suffering.

In an attempt to save money, the NHS "will try to cut £300m from its annual pharmaceuticals bill by using novel and cheaper alternatives to the most expensive drugs." Why? The NHS is aiming to become more 'innovative' by switching to cheaper alternatives to reduce expenditure. Six of the ten most expensive drugs are biological medicines; the NHS therefore intends to switch to biosimilars, which are effectively similar and much cheaper adaptations of current biological treatments. These medications have been licensed for conditions such as cancer and "a wide range of inflammatory and autoimmune diseases".

Where was “innovation” at the time of the WannaCry attack?

Earlier this year the WannaCry ransomware attack took hold of NHS computers. This could easily have been prevented had the NHS patched its computers, kept their security software up to date and taken the precautionary measures that were recommended when the risks were first identified. Why were these measures not taken? Aren't these the computers that store all of our medical information? Given that we're looking to artificial intelligence to make the NHS better, shouldn't the NHS focus on bringing its current technology up to date before jumping to artificial intelligence?

Artificial intelligence is the solution...

It's possible that most of us think it's still too early to worry about the growth of artificial intelligence, whereas some of us are already worried about what it means for our future. But it's not too early; in fact, artificial intelligence already plays a part in our lives whether we have adapted to it or not (e.g. Tesla's self-driving cars). It seems that AI will also be playing a part in our healthcare: a deal between the NHS and DeepMind emerged earlier this year, in which the two publicly stated that they were working on an app called "Streams" to help hospital staff monitor patients with kidney disease. You would assume that this means they only shared the data of patients with kidney disease. Nope. They shared 1.6 million NHS/Royal Free patients' data with DeepMind, with the intention of comparing one patient's data with other patients' in order to detect the early onset of disease, or even conditions such as bipolar disorder. I really like the idea of what DeepMind is working on, but not the fact that Google has had access to 1.6 million patients' data.

How can we trust Google with all of this data?

This is a lot of data to handle, and highly confidential too. Ross Anderson (University of Cambridge) reassured New Scientist readers earlier this year that we don't need to worry about "Google breaching patient privacy or misusing the data" because Google has a "good track record of keeping data secure and private". "The information sharing agreement ... is not simply a data sharing agreement, but is a legally binding contract that includes clear commitments required for compliance with the Data Protection Act and was prepared specifically in line with relevant ICO guidance," the Royal Free said in a statement reported in New Scientist.

Still, clear consent should have been sought from individual patients before their data was shared with DeepMind, even if it delayed the research. The HSCIC (Health and Social Care Information Centre) states in its FAQs that consent is essential for the sharing of data and must be obtained before further action is taken. Under the Data Protection Act, therefore, all three partners (the NHS, the Royal Free and DeepMind) have a responsibility to comply with the act, protect patient data and keep patients' best interests in mind.

DeepMind’s ethical team...

As well as providing innovative science and technology, scientists have a responsibility towards society to understand the social and ethical consequences of their work. Heather Douglas affirms this in The Moral Responsibilities of Scientists (2003): "Someone must be responsible for thinking about the potential consequences at these decision points". Considering this, DeepMind finally launched the DeepMind Ethics and Society research unit in early October 2017. The unit should help the company self-govern its research in AI and healthcare, and the vast amounts of data it holds. It also aims to connect society with new AI technologies and to reassure the public that public benefit is its priority. We hope...

Where do we go from here?

This is a difficult question, because there are still so many uncertainties and unanswered questions. The NHS has a long way to go before it reaches a peak of innovation, and it most likely will not reach one if spending cuts continue. Furthermore, the NHS will need to ensure a smooth transition from its old technologies to new ones developed with AI (accounting for the replacement of equipment and the training of staff), all while remaining cost-efficient. With current austerity measures, I presume this will be an incredibly arduous process.
