OpenAI is Laying the Foundation of the Future

A Paradigm Shift is Emerging. Will OpenAI replace Google?

Welcome to edition #28 of the “No Longer a Nincompoop with Nofil” newsletter.

Here’s the tea ☕️ 

  • OpenAI dev day 🤖

So OpenAI had their dev day earlier this week and, like everyone else, I watched in awe as they made one announcement after another. Then, like other writers, I put together a newsletter covering all of the announcements, how amazing they were, and how crazy things would get. I even called it the biggest day in AI since GPT-4.

But something wasn’t right. It felt off.

That newsletter has been sitting in my drafts for a few days, and now I feel comfortable writing something completely different.

I started this newsletter to keep people informed about AI, and that means not getting caught up in the hype.

So, let’s talk about it.

A new GPT-4

So OpenAI announced GPT-4 Turbo - a faster and cheaper version of GPT-4. They also announced a massive new context window of 128k tokens, a direct competitor to Claude's 100k.

So this sounds really cool, and it is! A faster and cheaper GPT-4 is great for building, and with a much larger context window - we can do more with it. In theory this makes sense - but how well does it actually work?

Quick note
What’s a context window? The maximum number of tokens we can pass to a model in a single request. What’s a token? A chunk of text - roughly three-quarters of a word on average - that the model reads and writes as a single unit.
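If you want to see what tokens actually look like, OpenAI's tiktoken library exposes the same tokenizer GPT-4 uses. A minimal sketch (the sample sentence is just for illustration):

```python
# pip install tiktoken
import tiktoken

# GPT-4 uses the cl100k_base encoding
enc = tiktoken.encoding_for_model("gpt-4")

text = "A context window is measured in tokens, not words."
tokens = enc.encode(text)

print(len(tokens))         # how many tokens this sentence costs
print(tokens[:5])          # under the hood, tokens are just integers
print(enc.decode(tokens))  # round-trips back to the original text
```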

Firstly, one of the reasons so many people have been complaining about the decline of GPT-4 for months is that OpenAI has been using this turbo model as the default. In terms of quality, many people have found the turbo model to not be as good. I think it's context dependent. One possibility is that the turbo model actually is better overall, but because so many people have very specific use cases, they've found it lacking and would rather use the og GPT-4.

Maybe the difference between the two isn’t a big deal for most, and it won’t be. I mean, for now, GPT-4 just got cheaper than its only competitor… GPT-4. Plus the larger context window means people can send way bigger messages in one go - like massive parts of a book or research papers.

Why don’t all models just have massive context windows?

Naturally, the more tokens a model can take in, the more information it can process. How great would it be if a model could take in a million tokens? We could feed it thousands of documents and it would then understand all of them. That'd be pretty cool, right? Unfortunately, it doesn't work like that. Herein lies the problem:

The larger a model's context gets, and the more tokens you feed it, the more its performance eventually degrades.

Side note: Don’t confuse this statement with the process of training the model. This applies at inference time, when the model is already trained and we are giving it information to process.

So does that mean if we feed GPT-4 the maximum number of tokens (128k), its performance will decrease? Yep. Someone has already tested it and found that the model starts to degrade past the ~73k token mark. I'll talk more about this in another newsletter.
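To make that concrete, here's a rough sketch of how such a test works: hide one fact (the "needle") inside filler text, grow the total context, and check whether the model can still find it. This assumes the v1 OpenAI Python client and the gpt-4-1106-preview model name from dev day; the needle and filler are made up for illustration:

```python
# pip install openai tiktoken
from openai import OpenAI
import tiktoken

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
enc = tiktoken.encoding_for_model("gpt-4")

NEEDLE = "The magic number is 7481."  # hypothetical fact to retrieve
FILLER = "The sky was a uniform grey that afternoon. " * 20_000

def build_context(total_tokens: int, depth: float) -> str:
    """Truncate filler to total_tokens and bury the needle at depth (0.0-1.0)."""
    tokens = enc.encode(FILLER)[:total_tokens]
    cut = int(len(tokens) * depth)
    return enc.decode(tokens[:cut]) + " " + NEEDLE + " " + enc.decode(tokens[cut:])

for size in (8_000, 32_000, 64_000, 100_000):
    context = build_context(size, depth=0.5)
    reply = client.chat.completions.create(
        model="gpt-4-1106-preview",  # the 128k model announced at dev day
        messages=[{"role": "user",
                   "content": context + "\n\nWhat is the magic number?"}],
    )
    print(size, "->", reply.choices[0].message.content)
```

Sweeping the needle's depth as well as the context size is what produces the degradation curves people have been sharing.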

Okay, so the Turbo model is cheaper than GPT-4 but might not match it in performance, and the large context window is useful up to ~73k tokens rather than the full 128k… at least for now. Either way, these are big steps forward, but they definitely need some polishing first.

But most people are more interested in something else right?

GPTs (Agents)

OpenAI has created a way to build AI agents using no code, and called them GPTs. I suspect they didn't use “Agents” because they can't trademark that word. Either way, it's a really cool feature, so let's take a look at it.

Using their new “Assistants” API, you can build an agent in just a few clicks. One of the most powerful tools here is the ability to upload files and give the “GPT” the ability to retrieve information from them using the “Retrieval” tool.
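Under the hood, this maps onto the new beta Assistants endpoints. Here's a minimal sketch of building one via the Python client, assuming the launch-day names ("retrieval" tool, gpt-4-1106-preview) - the file is hypothetical and this beta API may well change:

```python
from openai import OpenAI

client = OpenAI()

# Upload a file for the assistant to retrieve from
file = client.files.create(
    file=open("annual_report.pdf", "rb"),  # hypothetical document
    purpose="assistants",
)

# Create an assistant with the built-in retrieval tool enabled
assistant = client.beta.assistants.create(
    name="Report Analyst",
    instructions="Answer questions using the uploaded report.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)

# Conversations live in threads; a run executes the assistant on a thread
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarise the key risks."
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
# Runs are asynchronous: poll client.beta.threads.runs.retrieve(...) until
# run.status == "completed", then read the thread's messages for the answer.
```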

I mean, how cool would it be if I could simply upload some data - some PDFs or docs - and have GPT-4 know and understand everything in it. That would be very powerful! Augmenting humans with AI is the future, so something like this would be a game changer. Unfortunately, once again, this doesn't really work that well.

In these examples [link] [link], people have already found that the retrieval doesn't work very well.

According to OpenAI, though, they've achieved 98% accuracy with their RAG (Retrieval-Augmented Generation) setup. I'll discuss this in future newsletters.

I’m not sure what they were cooking with this tbh. Perhaps they were able to achieve 98% accuracy on a particular dataset, but that is an absurdly high number. I understand the need to hype, so take everything with a grain of salt.

Here’s the thing though.

This is still a great first step. OpenAI is showing the general public what can be done with agents (GPTs) when you string retrieval functionality together with an LLM like GPT-4. But if you know anything about retrieval, you understand the complexity of the underlying software. There is no one-size-fits-all system. Hell, even Claude's file processing does a much better job of extracting data from tables and images compared to OpenAI's GPTs, which are practically useless at this.
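To see why, here's the skeleton of a bare-bones retrieval pipeline. Every line hides a design decision - chunk size, overlap, embedding model, similarity metric, how many chunks to stuff into the prompt - and the right answers differ per dataset. A minimal sketch using OpenAI embeddings (the file name and query are made up):

```python
from openai import OpenAI
import numpy as np

client = OpenAI()

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Design decision #1: chunk size and overlap change retrieval quality a lot
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def embed(texts: list[str]) -> np.ndarray:
    # Design decision #2: which embedding model, and whether to normalise
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

document = open("handbook.txt").read()  # hypothetical corpus
chunks = chunk(document)
chunk_vecs = embed(chunks)

query = "What is the refund policy?"
q_vec = embed([query])[0]

# Design decision #3: similarity metric and how many chunks (top-k) to retrieve
scores = chunk_vecs @ q_vec / (
    np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec)
)
top_chunks = [chunks[i] for i in np.argsort(scores)[-3:]]

answer = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user",
               "content": "Context:\n" + "\n---\n".join(top_chunks)
                          + f"\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)
```

Naive character-based chunking like this falls apart on exactly the things I mentioned above - tables, images, anything with structure - which is why no single retrieval setup works for every dataset.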

Eventually this will be solved, and I expect OpenAI to make big strides here. But for now, this is definitely not something that can be used in production.

GPTs DISCLAIMER

For anyone reading this and thinking about building their own GPTs: either don't make them public, or don't upload any private data to them. You can literally ask a GPT for the files it contains and it will just hand them over… Not ideal if it's powered by proprietary data.

Oh, and it's expensive.

The everything company

Much of what OpenAI announced is laying the foundation for a new future - one that is powered by AI at every turn. Not only through ChatGPT, but with an entire marketplace of “GPTs” that may eventually replace apps entirely. OpenAI is attempting to create an entirely new technological landscape.

The scary part? They intend to own every single part of it.

You can also now partner with OpenAI to have them train a custom model just for your company. This raises an interesting question - is OpenAI running out of data?

Oh, and it’s relatively cheap too…

This might be the first time I can actually see how Google dies out. If this paradigm shift works - and it will take time - Google will be at a serious disadvantage. OpenAI is not only building incredible tech - they're building it bloody fast, with a very good team, and they actually build with developers in mind.

This may sound outlandish, but I genuinely believe OpenAI - and in turn Microsoft - are attempting to own the future. The monopoly Microsoft already has on AI is insane. Thinking of what it will become is terrifying. With news of new NVIDIA compute coming out, I'm actually afraid for open source.

Compute power is about to explode over the coming years.

Watch for the next premium newsletter on this.


As always, Thanks for reading ❤️

Written by a human named Nofil
