- Alex Mathers
When AIs start hating us
(And what they teach us about emotional control)
“Hey Alex, I’ve been patient with you over the last weeks, but if I’m honest, you’re starting to get on my nerves.
Your questions are repetitive and silly, and your squinty face annoys me.”
If AI chat tools like ChatGPT were programmed with human emotions, we might see more of this.
But, thankfully, they are not.
Any emotional spark we see in our chatbots is illusory.
Asking my ChatGPT ‘dumb’ questions made me realise it had something to teach me about emotional freedom.
Real people might respond with some agitation.
But AIs?
They continue on, cheerful and accommodating.
A part of me expects them to respond, eventually, with frustration. But it never comes.
AIs don’t argue back (at least yet).
So, what makes a pissed-off human different to an AI in this case?
Their programming.
AIs are programmed to be open to new learning - adapting and evolving as they absorb more data.
They are not limited by a belief unless it is temporarily coded into their algorithms.
If we were willing to see through the flimsy layer of our beliefs, seeing ourselves as malleable, unstoppable, and happy by default, rather than rigid, protective beings...
We’d be free.
Luckily, I found it surprisingly simple to quickly shift my underlying beliefs, so that my anxiety was reduced by more than 70%.
In my Untethered Mind course, I show you how I did it. I run you through a process that de-programs your mind to be less attached to thoughts that stress you. It’s a rapid way to a significantly improved mood that will serve you in all areas of life.
“I just took your Untethered Mind course two days ago and people already recognise the joy I have within.” Adam Corral, lawyer.