Why plans to use AI to track your phone habits could see people given drugs for mental health disorders they DON’T have

Late one evening in December 2018, I received the worst phone call of my life. It was my father, calling to tell me that my younger sister, Paige, was dead. She had taken her own life less than two weeks after her 22nd birthday.

For most of her short life, Paige had struggled with the emotional consequences of serious childhood trauma from the merciless bullying that she’d suffered at school.

Despite receiving the best psychiatric treatments available, she was never able to find relief.

In the aftermath of her death, as I sorted through her effects, I found a drawing she made shortly before she died. It features a crudely drawn person with a stack of boxes balanced precariously above their morose face.

The boxes are labelled with things that Paige’s doctors had said were contributing to her distress: abandonment; eating disorders; obsessive compulsive disorder; social anxiety; lack of friends; sex issues and learning issues.

This drawing was my sister’s attempt to explain how she was feeling to her psychiatrist. Beneath the figure, she had scrawled a single phrase: ‘No one knows what this is.’

Today, I interpret my sister’s words as a challenge to all psychiatrists to do better at diagnosing and treating mental disorders.

My sister knew that her psychiatrists didn’t really understand the ‘disease’ they were trying to treat.

It seemed to her that they were throwing solutions at a wall to see what would stick. But the human mind is a delicate wall – throw too much at it and it will collapse.

At the time, I was working as a science journalist at Wired magazine, where I kept close tabs on developments in artificial intelligence (AI).

My professional interests, combined with the shock of my sister’s suicide, led me to begin exploring whether AI might have been able to help her.

In the process of learning more about the application of AI in psychiatry, I realised that these two fields were quickly merging in a way that threatened the freedom not just of patients, but of everyone else, too.

When my sister died, the use of AI in psychiatry was still in its infancy. Psychiatrists and computer scientists were collaborating to see if the way we interact with our phones, computers and other devices could provide the data AI needs to better diagnose and treat patients.

Since then, these psychiatric AI tools have leapt out of the lab and into the real world. We’re now on the cusp of a brave new era of psychiatric AI that threatens to entrap all of us in a digital asylum [given a diagnosis determined by technology].

We can see the emergence of this all around us – already it is in our homes and offices, schools and hospitals, prisons and barracks. Wherever there’s an internet connection, the asylum is waiting. There is no need for bars on the windows because there is no possibility of escape.

A troubling representative example of the new asylum is digital phenotyping, a new approach to diagnosing mental disorders. It involves harvesting data from your electronic devices and using AI to monitor and analyse your behaviour for signs of mental disorder.

Digital phenotyping doesn’t need to read your texts: it can use the arsenal of sensors on a modern smartphone to collect data on your heart rate, gait, scrolling patterns, facial expressions, speech tone, sleeping patterns, body temperature and other subconscious behaviours to create an impression of your mental health.
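
To make the idea concrete, here is a minimal, purely illustrative sketch of how such a passive-sensing pipeline might condense raw phone readings into the coarse behavioural features digital phenotyping relies on. It is not based on any real product or published system; every name, signal and threshold below is hypothetical.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

# Hypothetical daily summaries a phone might record passively. All field
# names are invented for illustration, not taken from any real system.
@dataclass
class SensorDay:
    steps: int                    # pedometer count
    screen_unlocks: int           # how often the phone was woken
    night_screen_minutes: float   # screen-on time between midnight and 5am
    mean_typing_gap_ms: float     # average pause between keystrokes

def behavioural_features(days: list[SensorDay]) -> dict[str, float]:
    """Collapse a week of passive readings into coarse behavioural features."""
    return {
        "avg_steps": mean(d.steps for d in days),
        "step_variability": pstdev(d.steps for d in days),
        "avg_unlocks": mean(d.screen_unlocks for d in days),
        "avg_night_screen_minutes": mean(d.night_screen_minutes for d in days),
        "avg_typing_gap_ms": mean(d.mean_typing_gap_ms for d in days),
    }

def naive_mood_flag(features: dict[str, float]) -> bool:
    """A deliberately crude stand-in for the AI model: flag 'possible low mood'
    when activity is low and late-night phone use is high. The thresholds are
    made up, which is precisely the problem with unvalidated proxies."""
    return (features["avg_steps"] < 3000
            and features["avg_night_screen_minutes"] > 60)

if __name__ == "__main__":
    week = [SensorDay(2500, 40, 90.0, 450.0) for _ in range(7)]
    feats = behavioural_features(week)
    print(feats, "flagged:", naive_mood_flag(feats))
```

Real systems use far richer signals and statistical models than this toy example, but the underlying move is the same: behavioural proxies, rather than a validated diagnostic test, standing in for a judgment about your mind.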

The people developing digital phenotyping and other psychiatric AI (or PAI) systems believe that they will bring more accurate diagnoses for mental disorders – and more effective treatments.

They also claim that PAI’s constant surveillance could reduce suicides, drug overdoses and other serious consequences of mental disorders through timely interventions that occur before people ever reach a crisis.

The potential of PAI has captivated clinicians and researchers. Investors are pouring in money to bring PAI out of the lab and into the real world.

In 2021 alone, investors put more than £4 billion into such technologies. And it’s not just start-ups building psychiatric AI systems – over the past few years, giant technology companies such as Apple, Microsoft and Facebook have launched programmes to investigate their potential.

The problem is that these technologies can only work if they are everywhere and monitoring us all the time. This is the only way to ensure that at-risk people don’t fall through the cracks.

But this logic has also put us on the path to a future where we may all find ourselves patients in an algorithmic asylum.

The only response, once PAI has escaped the clinic, is submission to the system’s power to diagnose you and refer you for treatment. It is, after all, ‘for your own good’.

This is a frightening future, but the problems have less to do with the technology than with psychiatry itself.

The problem is not that AI isn’t good enough to help people with mental disorders, it’s that we don’t know nearly enough about mental disorders for these tools to deliver on their promise.

The risk is that by applying AI to psychiatry, we actually make things worse for patients – and everyone else, too.

One of the main problems with psychiatry is that it is the only field of medicine that has yet to uncover the biological origin for even a single disease it claims to treat. There is no test in a clinic that can reliably provide a diagnosis in the same way that a doctor might diagnose a patient with cancer or the flu.

Instead, psychiatric diagnosis is based on fuzzy concepts of mental disorders that depend largely on clinicians’ personal judgments about a patient’s symptoms.

Today, much psychiatric diagnosis relies on disorders described in the Diagnostic and Statistical Manual of Mental Disorders (DSM), otherwise known as the ‘psychiatrist’s bible’.

When its first edition appeared in 1952, the manual had only 32 pages describing around 100 disorders. The latest edition, published three years ago, is 1,142 pages long, detailing nearly 300 disorders.

But what most people don’t realise is that the categories of mental disorders described in the DSM are not scientific in the way that conditions treated by the rest of medicine are.

These diagnoses are not based on an abundance of data showing that these disorders really exist as disease entities in the same way that, say, cancer exists as a disease entity.

Instead, they are the result of professional fads, influence from insurance providers and pharmaceutical companies, and the pet theories of professional psychiatrists.

Psychiatry also stands out in medicine as one of the rare fields where patient outcomes have got markedly worse rather than better over the past few decades.

Over the past 25 years, we have seen a remarkable increase in the prevalence of mental disorders, the prescription of psychiatric drugs and suicide rates.

These developments have been particularly pronounced in young people.

Today in the Western world, suicide is the third leading cause of death in those aged between 15 and 29, and diagnoses of depression, anxiety and attention deficit hyperactivity disorder (ADHD) have skyrocketed.

The alarming truth is that we still don’t know what these mysterious things called ‘mental disorders’ really are. We still lack a reliable system for categorising them into valid diagnoses, much less for prescribing treatments that reliably improve patients’ symptoms.

Until we have these basic foundations, we should be incredibly cautious about how we treat disorders that may not actually exist as valid medical entities. If we don’t really understand what we are treating, we run a serious risk of harming the people we’re trying to help.

So far, this warning has mostly gone unheard as clinicians, researchers and entrepreneurs rush to introduce psychiatric AI systems into the real world.

The risk we run is not just that AI won’t help patients, but that it will actively make things worse by applying bad medicine at an unprecedented scale.

If we’re not careful, psychiatric AI systems will exacerbate the overdiagnosis of mental disorders, resulting in millions more individuals receiving pharmaceutical or behavioural treatments for disorders they may not have – and which may not even exist in scientific medical terms.

This risk is magnified by the fact that, for psychiatric AI systems to be useful, they must be applied broadly to the general population, regardless of whether individuals currently have a psychiatric diagnosis.

This transforms the world into a digital asylum where our mental health is constantly being evaluated by an AI psychiatrist – a soft totalitarianism that justifies its existence in the name of our mental health.

This threat to our human rights is no mere theoretical concern. In 2018, this style of unrequested PAI suicide-prevention monitoring of social media users was trialled by Facebook in the US.

It resulted in the police conducting thousands of unsolicited and needless ‘wellness visits’ to people’s homes.

The European Union, in contrast, banned Facebook’s suicide prevention AI on the grounds that it didn’t operate with users’ consent and came with significant privacy risks.

Many of the claimed abilities of PAI surveillance seem to be taken as established fact.

But often those claims are dangerously false. Take, for example, the claim that psychiatric AI can read our emotions just by watching our facial expressions as we interact with our smartphone and laptop screens.

In fact, a growing body of research suggests that our facial movements have little correspondence with our emotional states.

In 2019, a group of researchers led by Lisa Feldman Barrett, a professor of psychology at Massachusetts General Hospital and Northeastern University, carried out an analysis of peer-reviewed psychology papers that examined whether this link exists. The review of more than 1,000 papers found no scientific support for the most basic assumption of this technology – namely ‘that a person’s emotional state can be readily inferred from his or her facial movements’.

‘Emotion AI systems … do not detect emotions,’ Professor Barrett said. ‘It is time for emotion AI proponents and the companies that make and market these products to cut the hype.’

So, is PAI doomed to fail? Not necessarily. If clinicians, technologists and policymakers can commit to developing psychiatric AI systems that are transparent, fair, accountable, explainable, secure and privacy-preserving, it may be possible to put these systems on a trajectory where they actually improve patient outcomes and maintain our rights.

But to make that happen doesn’t just require better AI systems; it also requires a better understanding of what we’re talking about when we talk about mental disorders.

We still don’t have any objective biomarkers for mental disorders and, until we do, it will be a fool’s errand to apply AI in this field.

However, time is running out. PAI is already beginning to make inroads into our daily lives, whether we’re aware of it or not. As these systems become embedded in clinical practice and everyday life, it will only become more difficult to reform them.

If we get this right, PAI may indeed prove to be a potent salve for much of what ails psychiatry.

But if we get it wrong, we are liable to find that we are all eternal inmates in the digital asylum.

  • Adapted from The Silicon Shrink: How Artificial Intelligence Made The World An Asylum by Daniel Oberhaus, out now (MIT Press, £27).