The Last Month at OpenAI

Welcome to edition #36 of the “No Longer a Nincompoop with Nofil” newsletter.

Here’s the tea ☕

  • GPT-4o 🤖 

  • “Her” 👩 

  • Copying Scarlett Johansson 🧬 

  • The mother of all NDAs ©️

Strange things are happening with OpenAI.

Last week they had their big presentation event, after repeatedly delaying it so it would land a day before Google’s flagship I/O event. At the event, they showcased their brand new GPT-4o, or “Omni”, model. It’s a fully multimodal model that can take in text, audio, images and video, and can output text, audio and even images.

I think outputting images with correct words and spelling is one of the most impressive things about this model. It also makes its use cases quite… vast, let’s just say.

Now, they claim that it’s better than GPT-4 and, well, it’s definitely much faster, but… I’m not sure I’d call it better [Link]. At least in my experience, GPT-4o is more of a faster-but-dumber version. It turns out the chatbot version of the model scores better on the benchmarks than the API version. Goes to show just how important a system message can be.
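
If you want to poke at this yourself, here’s a minimal sketch of calling GPT-4o through OpenAI’s Python SDK with your own system message. The prompts are just made-up examples, and you’ll need your own API key set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # A lot of the perceived "smarts" can come from this instruction alone
        {"role": "system", "content": "You are a careful assistant. Reason step by step before answering."},
        {"role": "user", "content": "Is GPT-4o better than GPT-4? Answer in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

Try it once with no system message and once with a detailed one; the difference in answer quality is roughly the gap people are noticing between the raw API and the chatbot.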

I think it’s the kind of model where you’ll only know if it’s for you after you test it. In my opinion, it sits in between GPT-3.5 and GPT-4: faster than 4 and smarter than 3.5. That’s a good position to be in, considering it’s their new free model, for now.

What they really wanted to showcase - “Her”

But the main thing OpenAI wanted to showcase was that they had created an AI… I don’t want to use the word girlfriend, so I’ll say companion, but you get my point.

Just watch any of these videos [Link] [Link] [Link].

The model is obviously voiced by a woman, and she speaks in this flirtatious way, complimenting the presenters’ hoodies and just acting all cute. Depending on where you stand, it could come across as anything from “oh, this is cute” to “oh god, this is disturbing”.

One thing’s for sure, it’s made for lonely people. They know what they’re doing. I mean, Altman basically said so himself by tweeting the name of the movie “Her” before the event.

Don’t get it twisted either: this is going to affect both men and women at a disturbing scale. More on this in another newsletter.

This girl on TikTok got over 6 million views showcasing an older version of ChatGPT’s voice mode, and people went crazy in the comments. Since most people aren’t really aware of the progress happening in the space, when they see things like this, it’s almost surreal to them.

When this is hooked up to a faster model like GPT-4o, and it can see the reactions of the user, things are going to get real weird.

Where are we headed?

So we’ve basically already built Her, but what comes next?

Hume AI have created their own Empathic Voice Interface (EVI), an AI with “emotional intelligence”. EVI can understand the tone of your voice, knows when to speak after you’ve spoken, can be interrupted, and it can even hook into your camera and track your movements to better understand your emotions.

You know what comes after Her?

Joi from Blade Runner 2049.

I’m not kidding. It’s going to become normal to date an AI-powered voice or robot.

See how Boston Dynamics put a costume on their robot dog?

This is how we’ll be dressing up robots as humans.

What kind of robots?

This kind.

This robot from Unitree costs only $16k. It’s 4’2” tall and weighs about 35kg. You’ll be seeing a lot more robots in the near future.

These robots + a human costume + GPT-4o, and you may as well wave goodbye to birth rates, which, by the way, are already declining in most parts of the world.

How bad could birth rates be?

Just look at South Korea, with a fertility rate of 0.7 births per woman. That won’t stop them from spending billions on what really matters: semiconductors. $7.3 billion, to be precise.

We’re going to leapfrog anything in Star Wars or Star Trek. We’ll have “solved” on-demand “social” interaction for lonely people. The meaning of “dating a model” has suddenly changed quite drastically…

I’ve seen so many funny memes recently that I thought I’d share some with you all. Hope you laugh at them like I did. Here’s the link to the clip from the movie Her where Theodore finds out his AI girlfriend Samantha has been cheating on him. A very relevant clip for the future.

So that was OpenAI’s event.

The Aftermath

You might have noticed that the voice for OpenAI’s voice assistant sounded familiar. If you didn’t, someone rather famous did.

Scarlett Johansson.

She was “shocked” to hear the voice called Sky and was pretty quick to issue a statement.

She talks about how OpenAI approached her long before the demo, asking her to voice their assistant. After some thought, she declined. They then asked her again only a few days before the demo.

I find this strange, considering it’s only a few days before the demo. Why bother asking again that late?

It’s not like she can record her lines in a single weekend.

Johansson declined again. Fast forward to after the demo, and quite a lot of people are claiming that OpenAI blatantly ripped off her voice. Johansson herself releases a statement telling her entire side of the story, which makes OpenAI look bad.

They asked for permission twice, were denied, and went ahead and used her voice anyway?

Public opinion will slaughter them.

They released a statement claiming that they didn’t copy Johansson’s voice. OpenAI said they hired a voice actress to voice the model, and that they couldn’t say who to protect her identity. At this point, people simply won’t believe them, even if they’re telling the truth.

Here is where I think OpenAI made a big mistake. They took down Sky. If you just said that you didn’t do it, why take it down?

Guilty. They must be guilty. This is what the entire internet was thinking at the time. Most still are.

Now, if I may, I must say, I might be one of the few people who doesn’t think the Sky voice sounds like Johansson…

Like, it didn’t even occur to me that OpenAI may have stolen Johansson’s voice. Yeah, they might sound kind of similar, but enough to warrant the outcry?

What do you think?

Does Sky sound like Scarjo?


It’s a good thing I’m writing this after some time has passed because OpenAI has proof that they didn’t steal Johansson’s voice.

The voice actress behind Sky and her agent, both of whom remained anonymous, spoke about the process they went through to work for OpenAI. Not once was Scarlett Johansson mentioned, and the actress wasn’t asked to imitate her. Her natural voice is exactly how Sky sounds.

Two things to consider.

First - if Sky sounds so similar to Scarlett Johansson, why wasn’t this brought up sooner? Sky was released in voice mode back in September last year. People have been using her voice for months, and have clearly become quite attached.

Second - the evidence is clear that OpenAI didn’t copy Scarlett Johansson’s voice. Would I go as far as to say they didn’t purposely choose a very specific voice actress who may or may not sound similar to her?

No. I’m sure they chose very carefully.

But this then brings us to another question.

Who owns this particular style of voice?

If someone’s voice sounds similar to a celebrity’s, does that mean the celebrity has copyright over that person’s voice?

Does Scarlett Johansson have rights to every single voice that sounds similar to hers?

If everyone believes it’s Scarlett Johansson’s voice, does it matter if it technically isn’t?

Does this mean the voice actress for Sky can’t make money from this job and others like this one?

These are questions that will have to be answered very soon, considering OpenAI will eventually want to release Sky again.

It’s a tricky situation, and I won’t be surprised to see lawsuits some time in the future.

What do you think? Should OpenAI change the voice since people are mistaking it for Johansson’s?

Should the voice be changed?


Side Thoughts

Just a few months ago, in March, OpenAI demoed a voice cloning engine, and we’ve since found out that large voice cloning apps like Eleven Labs have been using it. Going by their own blog, OpenAI want synthetic voices of prominent people to be off-limits, yet they’ll hire a voice actress with a voice similar enough to a famous celebrity’s? Curious.


Public Scrutiny Intensifies

If you’re chronically online like I am, you’ll see people talking about their experiences at old jobs. It’s common to see ex-Googlers talking about their time there, and often, they don’t mince their words.

You never see an ex-OpenAI employee talking about their experience, and now we know why.

Upon leaving the company, employees are “asked” to sign an NDA barring them from criticising the company forever. Yes, you read that right. Is that even legal? Probably not. So why would departing employees sign it?

This might be one of the most insane things I’ve seen, and I don’t say this lightly. OpenAI threatens to take away an employee’s vested equity if they don’t sign the NDA.

Vested. Equity.

Vested equity means equity that the employee owns. It is theirs.

The way tech startups attract the best talent is by offering a stake in the company in the form of equity. The reasoning is that if the company makes it big, your equity could be worth millions. Equity generally comes with a vesting period, meaning there’s a certain amount of time you have to spend at the company before you actually own it.

This can be a year, two years, or four years; it’s whatever the company decides. So, if you leave before the vesting period ends, you don’t get the equity.

But once the vesting period is over, the equity is considered vested, and it’s yours. You own it. Even if it’s the tiniest percentage, you own that percentage of the company.
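
If the vesting mechanics sound abstract, here’s a tiny sketch of how a typical schedule plays out. The numbers (a four-year schedule with a one-year cliff) are a common industry convention, not anything specific to OpenAI:

```python
def vested_fraction(months_at_company: int,
                    cliff_months: int = 12,
                    total_months: int = 48) -> float:
    """Fraction of an equity grant that has vested under a simple
    monthly schedule with a cliff. Hypothetical numbers, not OpenAI's."""
    if months_at_company < cliff_months:
        return 0.0  # leave before the cliff and nothing has vested
    return min(months_at_company / total_months, 1.0)

# Someone who stays 30 months owns 62.5% of their grant outright.
print(vested_fraction(30))  # 0.625
```

The point is that once a chunk has vested, it’s yours, which is exactly the part OpenAI was threatening to take back.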

OpenAI is telling these people: if you don’t sign this NDA, we’re going to take away your equity, the equity that you have earned and own. Given the response from people online, it seems this is definitely not normal.

Somehow, it gets even worse.

Besides threatening to claw back vested equity, the NDA also states:

  • You can never criticise the company, even using public information. If OpenAI releases a new research paper and a former employee says it is bad, that is technically a violation of the NDA.

  • Not only can you not discuss your experience working there, you can’t even acknowledge the existence of the NDA.

  • You have to sign the NDA within 60 days. Failure to do so will void vested equity…

This is why we never hear former OpenAI employees talking about their experience working there. As far as I know, only two people didn’t sign this NDA when they left, one of whom is Daniel Kokotajlo.

He mentions that he didn’t trust OpenAI to build AGI safely and responsibly, so he left, and he put his money where his morals are. He’s still under the NDA he signed when he first joined, so he hasn’t said much yet.

He also mentioned that, by not signing the NDA when leaving, he left most of his net worth on the table. We’re talking millions of dollars here, considering OpenAI is now valued at ~$80 billion. Regardless of your stance on his decision, it is a respectable one.

OpenAI CEO Sam Altman released a statement claiming that he didn’t know this was in the NDA. Documents released by Vox suggest otherwise. It’s rather funny that he says “they’ve never actually gone through with it” in regard to clawing back equity.

Like, yeah, that’s the point of a threat. Clearly it’s worked so well that they haven’t had to do anything, but that doesn’t make it any less of a threat. Also, former employee Daniel has already confirmed that, as far as he knows, his equity was clawed back, so that’s also a lie.

It is very hard to trust Altman when he says he didn’t know this was happening. This wouldn’t be the first time OpenAI leadership has pretended not to know very important details. OpenAI CTO Mira Murati said in an interview that she didn’t know how their video generation model Sora was trained, and was unaware whether it was trained on YouTube data.

It’s not that easy to trust OpenAI when they say “I didn’t know”. Another reason why having a single company be in charge of the most powerful tech in the world would be a disaster.

Greed trumps all.

All of this info was gathered through the fantastic efforts of Kelsey Piper at Vox. If you want to learn more about the situation, here are a few links: [Link] [Link] [Link] [Link] [Link]

Side Thoughts

If people working at these labs are so convinced they’re building this insane superintelligence that will wipe out humanity, why would they let vested equity stop them from spilling the beans?

Their efforts could potentially save the entire human race. The consequences of a mere NDA seem trivial in comparison. Perhaps the situation isn’t so dire after all…

  • Apparently, clawing back equity has been happening for years [Link]

  • OpenAI has been working on GPT-4o since 2022… [Link]

  • All of this drama has completely destroyed any public goodwill OpenAI had. You would hardly see any criticism last year; now, you’d be hard-pressed to find someone defending them [Link]

My favourite meme.

How was this edition?


As always, Thanks for Reading ❤️

Written by a human named Nofil
