October 10, 2022

Will depression cause the world's first self-aware AI to flip its own switch?

— Author: Gerard Fogarty

Will self-awareness in artificial intelligence plague it with anxiety, depression and suicidal ideation?

Trigger warning for discussions on suicide and mental illness.

Anecdotally, I have suffered from depression and generalised anxiety disorder for as long as I can remember. Through no fault of my upbringing, the circumstances of my formative years led my brain to develop in such a way that I will always suffer from, and must work on, these disorders. This blog post – my thoughts and questions around suicide and AI – is based on my experiences and outside research, but it is in no way meant as a comprehensive view of all experiences with these illnesses and disorders. After all, everyone's experiences and struggles are, and always will be, personal.

The idea of a self-aware AI has always fascinated me: the concept of human beings managing to create an AI program so advanced that it could be considered living. It raises endless possibilities: What happens next? What is its application? How alike is it to real human consciousness? The list goes on and on.

Perhaps it’s the pessimist in me, but at the top of my mind was: “How long after its creation will a self-aware intelligence look toward ending its own ‘life’?”

Access to the internet is almost a rite of passage for most young people now. It's a strange symbolic welcoming to the world; we hand them a pocket-sized version of the globe, exposing them – willingly or unwillingly – to the echoes of people's experiences, knowledge and opinions. Yes, that comes with incredible jumps forward in educational resources, near-instant news, and social connections that otherwise wouldn't be possible, but it has also come with a near-constant bombardment of alerts, expectations and information that our brains never evolved to deal with effectively.

Arguments can be made that access to this information has simply made it easier to label mental health issues that existed long before the adoption of the internet. But even accounting for that, rates of anxiety were already rising alongside this steady stream of information – even before the pandemic.

If information overload can cause such prevalent rates of anxiety in people, how could an artificial intelligence – raised on billions of data points, with enough awareness to understand itself in the context of the rest of the world – ever avoid developing an anxiety disorder?

Will we have to put artificial mental health safeguards in place, in anticipation of a self-aware artificial intelligence, to limit the amount or type of data it can ingest? Will we have to introduce a set of future-first standards for the way these programs can be structured – monitoring the ways they are rewarded for advancements and penalised for making wrong decisions – with a view to preventing this from ever becoming a possibility?

We can, of course, look to the real world for examples – parenting guides, school curriculums – of how parents reward and, more importantly, punish their children. This may be making a difference, but it can't guard against access to the real world once children transition into young adults. And that doesn't touch on how even the most sheltered upbringing does nothing to help with self-identification and comfort within one's own being.

Isolation may spring to mind as an easy solution to these issues for a self-aware AI, but to develop to the point of self-awareness it has to have outside interactions. And, as much of the globe has felt since 2020, sudden forced isolation from people, peers and loved ones certainly does nothing to reduce the likelihood of anxiety developing.

If we are actively looking toward a future where we create an intelligence that understands itself, how can we ever hope to stop anxiety from taking root in it?

Suicide accounted for 1.3% of all deaths globally in 2019. Some studies have found a correlation between higher IQ in boys and youth and suicidal ideation in later years (https://pubmed.ncbi.nlm.nih.gov/24080206/).

Leaving studies aside and turning to the anecdotal, a lyric that has always stood out to me is: 'I never understood why anyone would want to take their own life until the day that I could.' The topic of suicide tends not to be top of mind until slowly, and then all of a sudden, it's always top of mind. If we develop an artificial intelligence that is aware of itself in context with the rest of the world and sees how inconsequential its existence truly is in the grand scheme of the universe, what will be its reason for carrying on, for lack of a better term? Why will it get out of its cyber-bed every morning and continue 'living'?

Every program has an off switch. It has to; even if it doesn't, the machine it sits on can always be switched off. To create a program that we classify as truly living, will we have to build in safeguards against it flipping its own off switch? There's another ethics conversation to be had here: the creation of 'AI life' that is then given no choice but a forced existence.

Humans, at their core, have always had some form of survival instinct – some innate set of values and everlasting, unachievable goals that have kept us living, thriving, developing and evolving above all else. And yet, after millions of years, this has started to deteriorate to the point that 0.75 million people died by suicide in 2019 alone. Understanding that artificial intelligence, in order to develop, has to go through countless evolutionary iterations of its own, how can we ever hope to create an artificial intelligence so perfect that it never ends up on the same neurological evolutionary path as ourselves?

I think the collective concerns about how dangerous a self-aware AI could be to the world – if it ever did come into being – are valid thought experiments and, as developers try their hand at being the first to crack the case, ones worth keeping front of mind.

However, I believe a more immediate discussion is how you can even create a self-aware artificial intelligence that, when switched on, will be aware enough to break all the records and blow our minds, but won't immediately be hampered by all the afflictions that come with being truly aware of one's own existence.

Maybe thank your Alexa every now and then anyway, just to be on the safe side.
