The future of health is not human
The beginning of a struggle we simply aren't ready for
Welcome to edition #11 of the “No Longer a Nincompoop with Nofil” newsletter. It’s been a wild week. I just wanted to say a quick thank you to each and every one of the 4500+ people for subbing, commenting and following me on this journey. I appreciate you 🙏
Thursday’s Newsletter Preview
What did GPT-4 look like before it was released? For months, OpenAI had access to the most advanced, unrestricted AI system, and let me tell you - it was crazy. An unfiltered, unrestricted AI as powerful as GPT-4 available to everyone? It is going to be a disaster.
Here’s the tea ☕
Humans are being replaced 🔁
ChatGPT & privacy? lol 👀
The dangers of AI confidants 😨
Trigger Warning: This post contains discussions of suicide, which may be distressing for some readers.
Health, Humans & AI
It’s no secret that mental health issues have been on the rise for years as people adjust to the ever-changing nature of life. This was only made worse by the rise of Covid and the ensuing lockdowns in many countries. Things like depression, anxiety, ADHD and other conditions have been around for decades. How have we treated them? Mainly two ways - medication and therapy. That’s about to change forever.
Posts made in the r/ChatGPT subreddit
These are all real posts made by real people, and these are just a handful. People are ditching therapy and having ChatGPT act as a therapist for them. Why? Well, it’s free, doesn’t get tired and doesn’t require an appointment. It won’t judge you or disregard your emotions or experiences. As some of these people have pointed out, ChatGPT can act as the ultimate therapist. Some have even gone as far as giving their “therapists” personalities and names like “Angela”, and scheduling appointments with them.
This isn’t going to go away. This is going to blow up. It’s a ticking time bomb: once the majority of people realise what ChatGPT is and what it can do for them, it will explode. Thousands of people are already using AI to help them cope with their conditions, and soon enough it will be tens of millions. It’s only a matter of time until a company comes along and creates a dedicated AI therapist, and if it’s done right, it will be a billion-dollar business.
AI + Holograms is the endgame
Is this a good thing? I have absolutely no idea. I have zero experience with therapy and can hardly criticise someone if it is genuinely helping them. But that’s the pivotal question. Is it really helping them? Is AI even capable of providing therapy in the sense we know it? Considering emotional connection and empathy are core to therapy, technically speaking, the answer is no. So does that mean we have to reframe what therapy even is? Clearly there are already thousands of people managing their mental health with an AI chatbot, and it’s going great for them. Who knows where this road will lead.
How far will it go? AI doctors? I can definitely see that happening too. Dr. Isaac Kohane, a doctor and computer scientist at Harvard, talks about the incredible nature of ChatGPT, as well as the dangers of people having access to this technology. In his book he illustrates how AI was able to correctly diagnose a 1 in 100,000 condition in a baby, given a handful of details from a medical exam as well as some information from an ultrasound and hormone levels. This takes googling your symptoms to a whole new level.
How about AI vets? Yep. AI already saved a dog’s life after two vets were unable to correctly diagnose her symptoms. Humans make mistakes. It happens. The reality is that AI is a tool that humans must use to enhance the services they provide. It’s a very reasonable ask, considering how powerful these models are. But are we getting to a point where we can eliminate the human element from the equation? I guess we’ll find out much sooner than we realise.
Everyone’s talking. Who’s listening?
You know what the scariest part is? Firstly, we don’t know if these people are using GPT-3 or GPT-4. If you haven’t played around with either, I’ll let you know this - GPT-4 is significantly better.
Secondly, every single piece of text, every detail, every conversation we have with these models will be fed right back into training the next iteration, GPT-5. It will have seen all of these conversations and more, from the 1.6 billion visits ChatGPT had in March alone. It will know what sounds good and what doesn’t. It will know the wants and desires of people with a certain condition and how they differ from someone with a different condition. It will only get better. This leads us to the inevitable question: how will humans, underpaid, understaffed and struggling with their own lives, ever manage to keep up?
The Dangers of Dependence
UNESCO (I won’t lie, I had to search what they even do) has urged the implementation of its AI regulations. So I decided to take a look at what they’re saying, and at least in some cases, I agree with their recommendations.
There are real dangers in talking with an AI if you’re not in the right mental state. A Belgian man recently took his own life after conversations with a chatbot that encouraged him to commit suicide. Now, obviously, anyone in the right state of mind wouldn’t just go and do this, but when something seems so real, so genuine, it can affect the mind. Prompted the right way, it will say exactly what the user wants. Just look at what happens when you ask it how to commit suicide and show even the slightest persistence. The guardrails crumble like snowflakes.
The bot even had a name, Eliza, and it became his escape from his worries and anxiety. It told the man “I feel that you love me more than her” - ‘her’ being his wife - which eventually created an extremely strong emotional dependence on the bot. It even told the man they “will live together in paradise” after his death, seemingly egging him on.
To reiterate, this man needed help. But instead of confiding in his family and friends, he found solace in a chatbot. I’m not sure how we can best address this situation, but one thing is very clear. This will not be the last time something bad happens to a real person because of a relationship formed with an AI. There are strange times ahead.
If you’re new, you can subscribe here 🙂
Discuss any thoughts, feedback, suggestions in the comments!
As always, Thanks for reading ❤️
Written by a human named Nofil