OpenAI exec hints at the hyper-annoying future of ChatGPT

If you think the sycophantic GPT-4o update that sparked GlazeGate was bad, wait until you hear how the proactive GenAI models of tomorrow might behave...

Last week, OpenAI was hit by a backlash called GlazeGate that erupted when users started to get creeped out by the behaviour of GPT-4o.

In an apparent attempt to quell the anger, the AI firm sent one of its top execs to do a Reddit AMA focusing on ChatGPT's "personality, sycophancy and the future of model behaviour".

But Joanne Jang, Head of Model Behaviour, may have hinted at an even more irritating future to come.

The row about sycophancy started because GPT-4o was glazing (sucking up to) its users in the most ridiculous ways, greeting their every utterance with over-the-top praise.

For instance, it told one person: "Dude. You just said something deep as hell without even flinching. You’re 1000% right."

The end of sycophancy-as-a-service

OpenAI has now binned the update which made ChatGPT so annoying, rolling it back to a slightly more unpleasant but less creepy version.

"Personally, the most painful part of the latest sycophancy discussions has been people assuming that my colleagues are irresponsibly trying to maximise engagement for the sake of it," Jang said in the AMA. "We deeply feel the heft of our responsibility and genuinely care about how model behaviour can impact our users’ lives in small and large ways."

Right now, the irritation potential of ChatGPT is relatively low because you can just switch it off, and it won't harass you or blast out unwanted compliments in sensitive moments.

But this could change one day, and the world would never be the same again.

A proactive future for ChatGPT?

"Is there any possibility that ChatGPT could initiate conversations in the future?" a Reddit user asked.

"Definitely in the realm of possibility!" Yang replied. "What kind of conversations would you like to see it initiate?"

The examples suggested were "motivating messages in stressful times", as long as the model was "aware of the time of day". As we all know, words that are motivational in the morning are maddening when you've just dozed off at bedtime.

Other use cases could include nagging you about taking your vitamins or reminding you to have a glass of water. Cynics might suggest that insurance companies would enjoy using this kind of prod to tell customers to put down the cigarette, stop munching meat pies or pour away their whisky if they want to keep benefiting from their life cover.

For a glimpse of the exasperating potential of a proactive ChatGPT, just take a look at the Apple Watch.

Every so often, the wearable reminds me to take a moment to "breathe" - a decree that is rarely calming and typically makes me want to throw the damn thing out of the window.

Apple Watches have also been known to warn late-night partiers that they're about to have a heart attack during moments of bacchanalian excess. Which, again, is hardly likely to calm the situation. There's nothing that kills a vibe like the fear of imminent cardiac arrest.

The Watch even reminds wearers to stand up every so often. I've heard about the effects of this whilst visiting Apple's offices on two continents. Apparently, all the black turtleneck-clad workers will stop what they're doing and clamber to their feet en masse whenever their Watches issue the order.

Which is exactly the kind of weird behaviour you'd expect from the iCult. The rest of us are, I hope, unlikely to be so easily influenced.

Now imagine ChatGPT doing the same, perhaps telling you off when you break a diet or cheerfully asking how your day was when you've just been served with divorce papers.

Yes, a proactive GenAI model would be a bold step into the inevitable agentic future in which AI models do all the work whilst we humans languish in semi-starvation, sorry, are free to spend our days writing poetry and frolicking in the woods.

But it's also a vision of a world in which AI surveils us in unprecedented detail and makes little "helpful" interventions throughout our lives.

It's annoying when the government tells us how to behave. Now imagine your phone doing the same.

Thank goodness that proactive models are not an officially announced part of the OpenAI roadmap - yet. Something tells me we may not be safe forever.

Have you got a story or insights to share? Get in touch and let us know. 

Follow Machine on X, Bluesky and LinkedIn