OpenAI is Messing with Google
DALLE 3 + Multimodal GPT-4 + The Most Powerful Company in the World
Welcome to edition #25 of the “No Longer a Nincompoop with Nofil” newsletter.
There is a lot happening. Like, an unbelievable amount. This is reminding me of when GPT-4 was initially released in March. I’ll definitely release extra newsletters this week or next for paying readers discussing everything. A preview:
Multiple new open source models from some big players (they’re good!!)
OpenAI’s new 3.5 instruct model, why it’s so important (and good!) and what it tells us about LLMs and the “chatification” of AI
Crazy Innovation in AI art
AI’s water footprint
Tesla is an AI company, not a car company
The legality of AI generated code and images and how companies are taking on the law
The new landscape of the music industry
Meta’s new AI announcements
and a whole lot more. You can subscribe to my premium newsletter for $5 a month to not miss anything.
Here’s the tea ☕
OpenAI… 😮💨
So it has happened again. Just as it seems like things are catching up to OpenAI, they pull a rabbit out of the hat. This time, not just with one groundbreaking product, but two. So let’s talk about them.
P.S. I’ve also launched Time x Money, a consulting firm specialising in transforming businesses with AI. If you want to learn how AI can transform your business and even develop tools to do so, feel free to reply to this email or email me at [email protected].
AI text + images = Magic
Firstly, OpenAI not only announced DALLE 3, they integrated it with GPT-4. This is a big deal. You don’t have to play around with prompts anymore. You literally just speak plain English and GPT-4 prompts DALLE 3 for you. This is the future of AI usage. You’re not supposed to play around with prompts or test to see what works. You say what you want, and AI figures out the rest.
You can create an entire comic book with a simple conversation. DALLE 3 might not be as good as Midjourney yet, but it’s more than good enough for this to be a very big deal. Storyboarding just got a whole lot easier. Midjourney has to get out of Discord soon or OpenAI will destroy them with distribution.
The UI of the future
GPT-4, arguably the most advanced LLM on the planet, can now see. You can take a picture, feed it to ChatGPT and then ask it questions. Not written questions either; you can have an entire spoken conversation with it. Now, don’t get too excited. LLMs can’t exactly view the world and infer meaning from what they “see” the way we can. So, the main question here is how good are GPT-4’s “eyes”?
Turns out, they’re bloody good. You can now get excited. Very excited actually. It is hard to overstate just how insane this is. Everything you thought AI wasn’t good enough for, you will have to re-evaluate. The rest of the thread is also fascinating. It can detect humans amongst other objects, but, as intended, it doesn’t make any comments on people. Don’t want another Google happening.
Also, this workflow now exists.
Take a picture of a website, an app, or some design
Feed to GPT-4V
GPT-4V spits out code for this design
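The three-step workflow above can be sketched as an API request. To be clear, the model id and message shape below are my assumptions based on OpenAI’s chat-with-images API, not anything from the examples people posted; check the current API docs before using:

```python
import base64

def build_design_to_code_request(image_bytes: bytes,
                                 instruction: str = "Write the HTML/CSS for this design.") -> dict:
    """Build a chat-completions payload that sends a screenshot to a
    vision-capable GPT-4 model and asks for code reproducing the design.
    Model name and message format are assumptions, not confirmed specifics."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # placeholder model id
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
        "max_tokens": 2000,
    }

# This payload would then be POSTed to the chat completions endpoint,
# e.g. via the official `openai` client or plain `requests`.
```

The key idea is simply that the screenshot travels inline as a base64 data URL alongside a plain-English instruction; the model returns code as ordinary chat text.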
This isn’t a hypothesis. This is reality. Look at all of these examples from just the first few days:
This isn’t a vision model that’s just really good for images. This is a multimodal model, meaning we will get audio and video at some point in the future. You know what we’re also getting? GPT3.5V.
We are not ready. Wild, wild times lie ahead.
By the way, in case you aren’t aware, OpenAI has had this since March. Imagine how good their internal model is… They’re also holding a developer conference on November 6th to preview new tools. What’s even left to showcase? Who the hell knows.
You might be wondering, why did OpenAI release all of this now? Well, that’s a great question. Simply put, Google had already announced that their new model Gemini would release at the end of this year and would blow GPT-4 out of the water. So before Google could even release it and prove the claim, OpenAI went ahead and not only released their multimodal GPT-4, they released DALLE 3 as well, hitting both Google and Midjourney at the same time. Two birds, one stone.
What’s next?
So what’s next then for OpenAI? They just released two groundbreaking technologies and are on an insane high at the moment. Where do they go from here? Well, a lot of places it seems. There have been very few moments in history where a company has all the power to essentially do whatever they want. OpenAI is at this stage right now.
What’s next? Hardware.
OpenAI announced a new partnership with Whoop, a wearable device that helps people track their health, fitness and performance. They’ve now created Whoop Coach which will use GPT-4 to give you insights, recommendations and essentially act as your 24/7 AI health coach. From what I’ve heard, Whoop is a good product so this is a big advancement in personalised, AI powered health coaching.
But why stop there? Why work with other companies and products when you can just build your own? This is the power of OpenAI - they build. They’re not stuck in planning and forecasting, they’re building and releasing. This is something Altman himself has religiously spoken about. It’s what I tell the companies I consult.
At this point in time, where anything and everything is up for grabs - speed is everything.
So what might they be building? How about a phone? This is a plausible scenario. OpenAI is in talks with legendary Apple designer Jony Ive to build new hardware for the AI age. Altman has already worked with Ive’s protege Thomas Meyerhoffer to develop the Orb Scanner for his Worldcoin scam project.
It’s hard to say whether OpenAI will actually release their own hardware product. Perhaps someone with better business acumen can enlighten me as to why they would or wouldn’t, given the following information.
OpenAI partnered with Whoop and are now a leading AI platform in the health field
Altman himself is one of the biggest investors in Humane, the AI wearable company founded by ex-Apple employees, which I may have slightly made fun of
Do these two points have any bearing on whether OpenAI decides to build a hardware product? I don’t know. Although, I would be surprised if they built a phone of all things. I don’t think anyone will ever successfully release a phone to compete with the likes of Apple.
If anything, I imagine OpenAI is looking past the “hardware of the past” like phones, and attempting to figure out the UI of the future. Are phones bound to last forever? I guess we’ll find out soon enough.
Oh and if you’re wondering how OpenAI plans to finance all of this, not only are they projected to reach $1 billion in revenue this year, they’re raising money at an $80 to $90 billion valuation. For context, that’s 3x what they were valued at in May. Five months to triple your company’s value from $30B to $90B.
Not bad Altman, not bad.
Real power
If you had to pick one person who has profited most from this AI revolution, I think it could be Nvidia’s founder and CEO Jensen Huang. Nvidia is now one of the biggest companies in the world, and Jensen is now an incredibly wealthy man. Well, wealthier than he already was… by a lot.
Truth be told, I’m only writing about Nvidia just so I can tell you this.
Earlier this month, Jensen Huang sold 89,000 shares of Nvidia, netting him $42.8 million. That’s a lot of money. Here’s the funny part. I want you to guess how many shares Jensen Huang actually owns in Nvidia. Genuinely, humour me and take a random guess. I’ll even give you a hint: it’s a lot more than 89k.
Are you ready?
He owns 86 million shares…
This day last year, Nvidia’s share price was $122 USD. Today? $430 USD. The crazy part? It’s probably only going to keep climbing in the long term. Nvidia’s new state-of-the-art chips aren’t even widely available yet. Nvidia owns the market. Every company is dependent on them. They are single-handedly supplying the infrastructure to build the most powerful tech in human history. Now that is power.
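For the curious, those numbers pencil out like this (a quick back-of-envelope using only the figures quoted in this section):

```python
# Rough arithmetic from the figures quoted above.
sale_proceeds = 42.8e6        # USD from the reported sale
shares_sold = 89_000
shares_held = 86_000_000
price_last_year = 122         # USD per share, a year ago
price_today = 430             # USD per share, today

implied_sale_price = sale_proceeds / shares_sold   # roughly $481 per share
stake_value_today = shares_held * price_today      # roughly $37 billion
one_year_gain = price_today / price_last_year      # roughly 3.5x
```

In other words, the $42.8M sale was about 0.1% of a stake worth somewhere in the neighbourhood of $37 billion at today’s price.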
As always, Thanks for reading ❤️
Written by a human named Nofil