Artificial Intelligence Is Here And It Wants To Revolutionize Psychiatry
Source – forbes.com
The rise of the machines. The subject of fiery debates and endless banter. Everyone is talking about how robots are going to take all our jobs and then the entire world as soon as they get brains of their own. But that's not how the real world works. In the real world, real people must make conscious decisions about how and when robots are deployed in particular industries, in ways that better the lives of humankind. To do that, we must all ask ourselves: what are robots actually good for?
A robot is not functional in the way a natural animal is. It can't climb a flight of stairs, nor can it make intelligent judgments about subjective matters as well as a human can. Don't get me wrong: we do have robots that can walk on two feet and algorithms that scour the internet for fake news, but they are nowhere near as effective as the humans whose roles they are meant to fill. The resources and hard work that go into making a robot walk like a real human being are simply wasteful, and the artificial intelligence that helps fight fake news is not nearly as good as advertised. The reason is simple: machines have a specific skill set, and they work best when deployed according to that skill set. Artificial intelligence, as it stands today, is not good at handling many different tasks at once. It can, however, be exceptionally good at performing a singular task, like playing chess or recognizing objects within images, often with greater accuracy than humans can offer.
There is one more task that artificial intelligence seems particularly well equipped to handle: psychiatry. For several decades, psychology has been considered neither a pure science nor a pure humanity, but a discipline standing at the junction of the two. It is rather telling that machines, the products of years of scientific and technological advancement, should be so good at understanding the details of the human mind, much of which lies in the realm of the humanities, outside the bounds of mainstream science.
“What is really interesting is the way that apps will be able to prompt behaviour and therefore change physiology, emotion and thought. The combination of homeostasis and entropy means that human behaviour sinks toward ease. Apps can nudge us long before problems evolve and even coach us toward excellence. When we are distressed, they can recognise this and help us out. Apple Watch nudges about 15 million people every day – calories, movement, standing, sleeping and breathing. Put this all together and we already have a massive pressure toward better health – physical, mental and emotional.”- Dr. Sven Hansen, Founder of The Resilience Institute
In a paper published in April, Colin Walsh, a data scientist at the Vanderbilt University Medical Center, detailed the early stages of his work on a new artificially intelligent algorithm. Drawing on data available through hospital records and local registers, his algorithm can predict, with up to 90% accuracy, the likelihood of someone attempting to take their own life within the next few months. His research, while still in its infancy, matters a great deal to doctors and psychiatric professionals working with suicidal patients, because it could help them identify and reach a patient at risk before an attempt is made. Walsh is hardly alone in his efforts. Facebook, the multibillion-dollar social media platform at the center stage of our digital lives, recently tested an algorithm that scours posts and status updates to flag people at risk of self-harm, so that family members can be notified before any harm occurs.
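Walsh's paper is not reproduced here, and the actual model internals and features are not public. Purely as an illustration of the general family of technique — a statistical risk score learned from record-derived features — here is a toy logistic-regression sketch; the feature names and data are invented, not taken from the study:

```python
import math

# Invented toy features a records-based risk model *might* use
# (NOT the actual features from Walsh's work):
# [prior_attempts, er_visits_last_year, medication_changes]
X = [
    [0, 1, 0],
    [2, 4, 3],
    [0, 0, 1],
    [3, 5, 2],
    [1, 2, 1],
    [0, 1, 0],
]
y = [0, 1, 0, 1, 1, 0]  # 1 = attempt recorded within the follow-up window

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Plain logistic regression fit by gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def risk(w, b, features):
    """Probability-like risk score in [0, 1] for one patient record."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)

w, b = train(X, y)
print(risk(w, b, [3, 5, 2]))  # a high-risk toy profile
print(risk(w, b, [0, 0, 0]))  # a low-risk toy profile
```

A real system of this kind would differ in almost every particular — feature engineering from clinical records, calibration, and validation over time windows — but the core idea of scoring risk from structured record data is the same.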
Facebook's platform has also become home to tools that apply machine learning to mental health more directly. Woebot, a chatbot that runs inside Facebook Messenger, was recently released by a team of researchers from Stanford University. By holding regular conversations with its users and tracking their mood through videos and word games, Woebot can function as a digital therapist of sorts, making assessments and recommending treatment based on your psychological condition. Similarly, Tess, an intelligent piece of software that communicates with you via text messages, has been used to administer psychotherapy to patients with depression, emotional instability, and related conditions. It comes from X2AI and is priced at $50 a month.
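Neither Woebot's scripts nor Tess's models are public. Purely as an illustration of the general pattern such chatbots follow — classify the user's reply, then branch a scripted conversation — here is a toy sketch with invented word lists and responses:

```python
# A minimal, invented sketch of a scripted mood check-in bot.
# This is NOT Woebot's or Tess's actual logic, only the general shape:
# classify the reply, then pick the next scripted prompt.

NEGATIVE_WORDS = {"sad", "anxious", "tired", "hopeless", "stressed"}
POSITIVE_WORDS = {"good", "great", "calm", "happy", "okay"}

def classify_mood(reply: str) -> str:
    """Crude keyword-based mood label for a single user message."""
    words = set(reply.lower().split())
    if words & NEGATIVE_WORDS:
        return "negative"
    if words & POSITIVE_WORDS:
        return "positive"
    return "unknown"

def respond(reply: str) -> str:
    """Branch the script based on the detected mood."""
    mood = classify_mood(reply)
    if mood == "negative":
        return "That sounds hard. What thought is weighing on you most?"
    if mood == "positive":
        return "Glad to hear it! What went well today?"
    return "Tell me a bit more about how you're feeling."

print(respond("I feel anxious and tired"))
```

Production systems replace the keyword lookup with trained language models and track mood over time, but the conversational loop — assess, then respond from a clinically designed script — is the same.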
These are just a few of the many ways in which researchers and mental health professionals have recently used artificial intelligence as a key tool in diagnosing and treating mental health disorders. Researchers at Harvard University and the University of Vermont recently showed that depression can be diagnosed from the photos people upload online. Scientists at the University of Texas outlined how computer vision and artificial intelligence can help detect ADHD in children. The list goes on.
“As a neuroscientist, I want to understand the brain. Beyond just the physical structures of neurons and the synapses, but how it works. How is it that we think? How is it that 2lbs of protein and water can produce this amazing, complex organ that literally drives humanity? Ultimately, behavior is what the brain is for. We, as the scientific and medical community, are studying behavior with the same types of computational approaches that we use to study the physical attributes and workings of the brain.” – Guillermo Cecchi, Biometaphorical Computing at IBM Research
Despite our best efforts to change this, there is no denying that psychological disorders remain heavily stigmatized. People who need the attention of a mental health professional often fear being ridiculed and judged, a consequence of having to share their deepest secrets with the human sitting across from the couch. With machines, however, people feel far more comfortable sharing their true feelings and innermost secrets, knowing that the thing on the other side is not there to judge, only to help. Can artificial intelligence become the next big thing for psychology? Only time will tell.