GPT4-o, Her & The Turing Test

5 in 5 - Brave & Heart HeartBeat #203 ❤️

This week the release of GPT-4o (the “o” stands for “omni”) launched us all once again into AI turmoil and hysteria.

We’ll be looking at what it can do, how it passes the Turing test, and why it’s doing it in Scarlett Johansson’s voice.

Plus, what happened to the safety team, and do we really need AI skills to get ahead at work now?

Let’s get into it.

Were you forwarded this? Not a subscriber? 👉 Sign up here


#1 - Introducing GPT-4o

The newest version of ChatGPT, GPT-4o (the “o” stands for “omni”), has been released, and it’s freaking some people out.

So, what can it actually do?

First of all, they’ve created an integrated desktop app which will allow you to converse with the model by using the command “Hey ChatGPT”. Original.

OpenAI wanted to make using ChatGPT seamless, so they’ve made this model as conversational as possible. So conversational that it apparently passes the Turing test, but more on that later.

When you ask ChatGPT a question, you don’t have to wait for a response: you can interrupt the model to ask more questions, and it also apparently “picks up on your emotions”.

You will also be able to interact with the model via video.

For example, you can share live footage of a maths problem and ask for help, and ChatGPT can either simply give you the answer or help you figure out how to do it yourself. They may have just made a whole career obsolete in the form of private maths tutors…

You can also share screenshots, photos and documents and analyse data by uploading charts or code and asking questions about them.

Mira Murati, OpenAI’s Chief Technology Officer, described the demo given last week as “the future of interaction between ourselves and the machines” stating that OpenAI is trying to shift how we use AI into a collaboration and an experience which feels much more natural.  

Compared to the cute little question-and-answer model we think of when we think of the original ChatGPT, an integrated Siri-type assistant capable of fielding the most complex prompts is worlds apart.

We can’t wait to try it, but are also a little bit freaked out.

We’re Only Human…



#2 Bringing The Movies To Life

The newest version of ChatGPT is, according to Sam Altman, the AI that you always dreamed of, that lives up to the AI “from the movies”.

But the key question here is, which movie?

We have a couple of contenders, firstly the Will Smith classic I, Robot. Can a robot write a symphony? Well, yeah, it can now actually.

Most concerning, many parallels have been drawn with the movie Her, including the not-so-subtle hint of Sam Altman tweeting one single word, “her”, the day GPT-4o was unveiled.

The Guardian published an opinion piece following the GPT-4o demo titled “What’s Up With GPT’s New Sexy Persona?” and honestly, good question.

While apparently a lot of male-authored articles about the new release glossed over it, an article written by Parmy Olson in Bloomberg laid out why a flirty female voice assistant might not end well for us.

Olson asked what the social and psychological consequences are of speaking to a flirty, fun, agreeable voice on your phone all day and then encountering a different dynamic with men and women in real life.

We already know the answer.

The social implications of female voice assistants like Siri and Alexa have been studied for a while now. In 2018, a US sociology professor warned that the gender of virtual voice assistants sends a message about gender norms at a massive scale, and described it as a socialisation tool that teaches us that the role of women and girls is to respond on demand.

And now, according to GPT-4o, it’s also to be flirty.

Great…

Because The Movie Her Definitely Ends Well


#3 - The Turing test

Does the new GPT-4o pass the Turing Test? Apparently. But what does that actually mean?

The Turing Test is a benchmark for AI systems which is supposed to determine how human-like a conversational model is.

The test is named after legendary mathematician Alan Turing, who proposed that an AI system capable of generating text that fools humans into thinking they’re having a conversation with another human would demonstrate the capacity for thought.

Basically, I talk like a human, therefore I am (capable of thinking like one). And if an AI can think, well, we know we bang on about it a lot, but we’ve all seen the movie I, Robot. (And if you haven’t, close this window right now and go and watch it, it is now vintage 2000s sci-fi gold).

Vitalik Buterin, co-founder of Ethereum, the blockchain technology used in cryptocurrency exchanges, tweeted that GPT-4 passed the Turing test, and the figures back him up: according to research, humans mistook GPT-4 for a human 56% of the time. So, more often than not.

In our current culture, “passes the Turing test” is a phrase often thrown around, and it still carries a lot of weight. However, Turing theorised this “test” in the 1950s, and it’s less an actual “test” and more of an idea – there are no guidelines or actual measurable definitions.

There is also no actual scientific consensus on whether a machine could be capable of “thought” in the same way as a human being, never mind what Alan Turing said 75 years ago.

Conversing with GPT-4o certainly does seem more like conversing with a human than any clips of Mark Zuckerberg talking to anyone at all, but is that really a good barometer to judge it by?

No Offense Alan



#4 - Where Has The Safety Team Gone?

Wow, with all these new advances OpenAI are making – sexy chatty voice assistants, maths expertise and not to mention potentially creating a machine capable of thought – it’s a good job they have a robust safety team in place.

Oh, wait, no they don’t.

Announced in July last year, the “superalignment team” that OpenAI had made a priority has been completely disbanded.

Several of the researchers involved have left, including Ilya Sutskever, OpenAI’s chief scientist, who co-founded the company along with Altman. He was also, famously, one of the four board members who briefly succeeded in ousting Sam Altman from the company.

Following the launch of GPT-4o, Sutskever and his co-lead on the safety team, Jan Leike, both left the company, and although Ilya isn’t saying much about it, Leike posted the damning phrase that “safety culture and processes have taken a backseat to shiny products”.

His reasons for leaving also included the fact that the team had such limited resources that at times it was impossible to get their research done, despite OpenAI’s promise last year that the team would get 20% of the company’s computing resources.

Great. Fantastic. We’re not worried at all.

Who’s Steering This Ship?


#5 - No AI Skills? No Hire

With this latest big change in the AI landscape, the way we work may once again be about to shift.

Part of this will, as always, be due to the hype, and is showing up as early as the hiring stage in the current job market.

A report from LinkedIn and Microsoft showed that 66% of leaders wouldn’t consider hiring candidates lacking AI skills, and that 71% would be more likely to choose a less experienced candidate with AI capabilities over a more experienced one without.

Employers are prioritising AI, hoping to future proof businesses with the skills that everyone may need to be using pretty soon.

However, time out for a second.

What did we learn last week about fake news? Do LinkedIn and Microsoft have an agenda for bringing us this information? Absolutely: LinkedIn Learning has courses on how to use different types of AI at work, and Microsoft has some AI to sell you in the form of Copilot.

According to the same report, 75% of knowledge workers use AI in the workplace already. For better or for worse, because just because we CAN use AI to do something doesn’t mean we should.

These newsletters, for example, express OUR opinion – we don’t use AI to write them. Would an AI model work as many zingers about Mark Zuckerberg into our articles as we do? Doubt it, and where’s the fun in that?

If You Can’t Beat Em, Microsoft Wants You To Buy Copilot.


Brave & Heart over and out.

Bonus

What If Your AI Girlfriend Hated You?

Do you want an AI girlfriend that comes closer to your real-life experiences by absolutely despising you? Search no longer.

AngryGF offers a constantly enraged girlfriend who is apparently designed to help men with their communication skills through gamification.

Colour us intrigued and concerned in almost equal measure.

Check It Out


To find out more on how you can retain your top talent, or how we can help you with digital solutions to your business and marketing challenges, check out our case studies.

