Europe Has A Serious Problem
Welcome to the No Longer a Nincompoop with Nofil newsletter.
Here’s the tea 🍵
- Lawmakers realising they screwed the EU
- OpenAI losing critical team members
- OpenAI can detect AI text
- AI wins silver at the Math Olympiad
- What an AI model actually looks like
Europe has a problem.
You see, the EU loves to regulate, and that’s fine. That’s how they like to do things and how they’ve always run things.
When it comes to AI, the EU has been rushing the last few years to be the first major power to put regulation in place.
They succeeded.
The EU AI Act was put into place in March.
It’s not fully enforceable just yet, but it will be soon.
Quick Recap
The EU AI Act places AI models in different tiers based on the risks they pose and the size of the model. For example, the latest open-source model from Meta is classified as having systemic risk and wouldn’t have been released in the EU had the Act been enforced a few weeks ago.
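If you’re wondering what actually triggers the “systemic risk” label for general-purpose models, the headline criterion is training compute. Here’s a toy sketch of just that one trigger (the Act weighs other criteria too, so don’t treat this as the full classification):

```python
# Toy sketch of the EU AI Act's compute-based trigger for "systemic risk"
# general-purpose AI models. The real classification involves more criteria;
# the 1e25 FLOP training-compute threshold is just the headline one.
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_risk_tier(training_flops: float) -> str:
    """Classify a general-purpose AI model by the Act's compute threshold."""
    if training_flops >= SYSTEMIC_RISK_FLOPS:
        return "general-purpose AI with systemic risk"
    return "general-purpose AI"

# Meta's Llama 3.1 405B reportedly used ~3.8e25 FLOPs of training compute,
# which puts it over the line.
print(gpai_risk_tier(3.8e25))  # -> general-purpose AI with systemic risk
```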
To anyone in AI and following the advancements, it was quite clear that the act would do nothing but hinder any innovation coming out of the EU and, in turn, any investments into EU companies.
I’m struggling to comprehend how the lead author and architect of the AI Act, Gabriele Mazzini, is only now realising that perhaps the act can be restrictive.
Do these people think even six months ahead?
How is such incompetence allowed to shape regulation around the most powerful technology in the world, in one of the world’s major regions?
I just can’t even fathom how such negligence and ignorance could drive such an important legal decision.
But I shouldn’t even be surprised.
Mazzini, a “self-confessed technocrat” (you got that right), has basically zero startup experience. That lack of experience limits his ability to understand how a startup even operates.
You may think this is an overreaction.
Why be so harsh?
I haven’t even been there… (yet).
But the situation is dire.
Germany’s economy is dwindling into irrelevance.
The EU has received only 16% of all VC capital since 2022.
Even if you strip out AI deals and look at funding generally, the EU’s share is just 18%.
These numbers are abysmal, and that’s before the Act is even enforced!
The reality is, big tech has stopped caring.
Pre-AI, companies would do everything in their power to make sure their products were in the EU market.
Now?
They’re calling the EU’s bluff.
Meta has already stated that they won’t release any of their upcoming AI models in the EU. These are open-source models being released for free. Not being able to use these would be a massive blow to businesses there.
You can basically write off any AI innovation to come out of the EU.
The essence of American capitalism is to allow the rapid emergence of new companies; the essence of European capitalism is to do everything so that old companies do not die.
Is big tech playing fair?
Absolutely not.
But Europe has no one to blame but themselves.
If big tech is calling the EU’s bluff, they’ll call Nigeria’s too. The country fined Meta $220M for violating consumer laws.
This sounds ridiculous now, but I won’t be surprised to see a more fragmented internet as we move into a new digital age.
This may very well be the last time most of the human populace shares the same internet.
A fragmented future awaits.
Is the EU in trouble?
OpenAI has lost three leaders in the last week. John Schulman, an OpenAI co-founder who led post-training, has left to join rival AI lab Anthropic. This comes not long after Jan Leike, OpenAI’s former Head of Alignment, also left for Anthropic.
Peter Deng, a product leader from Meta, Uber and Airtable, has also left after joining just last year.
Finally, Greg Brockman, President and co-founder of OpenAI, has taken an “extended leave of absence”.
It’s a strange time for OpenAI.
They currently don’t have the best AI models and they’ve been losing core members to rival AI lab Anthropic.
You might be wondering, why would someone leave the hottest startup on Earth to join a rival?
Well, Anthropic not only has the best AI model on the planet right now; it was also founded by ex-OpenAI employees.
In fact, almost half the team from the original GPT-3 paper from OpenAI left to create/join Anthropic.
But OpenAI is in a precarious position in more ways than one.
I’ve mentioned previously that there is no way to detect AI-written text; it’s impossible. All the tools out there that claim to detect AI text are lying and don’t work.
It turns out, OAI themselves have developed a tool that can detect whether a piece of text was written using ChatGPT, and it actually works.
But they can’t just release it.
You see, a lot of people use ChatGPT for school or work. If OAI released this tool, their users would use a different AI tool, which would affect their bottom line.
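How would such a detector even work? OpenAI hasn’t published details, but reporting describes the tool as a text watermark: the model subtly skews its word choices as it writes, and a detector later checks for that statistical fingerprint. Here’s a toy sketch of the general idea, modelled on published academic watermarking schemes rather than OpenAI’s actual (unreleased) method:

```python
import hashlib
import random

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Deterministically pick a 'green' subset of the vocab, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def watermark_z_score(tokens: list[str], vocab: list[str]) -> float:
    """How many standard deviations the green-token count sits above chance."""
    n = len(tokens) - 1  # each token after the first is one trial
    hits = sum(tokens[i] in green_list(tokens[i - 1], vocab) for i in range(1, len(tokens)))
    expected = GREEN_FRACTION * n
    std = (n * GREEN_FRACTION * (1 - GREEN_FRACTION)) ** 0.5
    return (hits - expected) / std
```

Unwatermarked text scores near zero; text from a watermarking model, whose sampler was nudged toward each step’s green list, scores several standard deviations above it. Part of the catch: paraphrasing or translating the text can wash the signal out.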
So it’s a case of ethics vs money.
Internally, employees want the company to release the product, but I don’t see why they would do such a thing. They’d be shooting themselves in the foot and leading their own users away to other AI providers.
Should OpenAI Release their Text Detection Tool?
AI has achieved a silver at the International Mathematical Olympiad (IMO). The team at Google DeepMind created and combined two separate systems, AlphaProof and AlphaGeometry 2, to solve complex mathematical problems. DeepMind CEO Demis Hassabis has stated that they will add this functionality to their Gemini models, giving them the ability to solve complex math.
This is an incredible achievement and a massive step forward in LLM capabilities… or so I thought. LLMs play only a tiny part here. Overall, what most people have come to know as AI is not really what’s doing the work.
This isn’t a case of a ChatGPT-like model getting a silver at the IMO. Most of the work was done by specialised systems using formal mathematical techniques which I won’t even pretend to understand.
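For a taste of what “formal” means here: AlphaProof works inside the Lean proof assistant, where every step of a proof is machine-checked. A toy example of the kind of statement Lean verifies (real IMO problems are unimaginably harder):

```lean
-- Commutativity of addition on natural numbers, machine-checked by Lean 4.
-- AlphaProof searches for proofs of statements like this, only far deeper.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```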
If you want to learn more about how exactly they did this, check out their blog post.
What AI really looks like
If you’ve used any AI model, most likely it’s been an instruct model.
When building AI models, after pre-training is done, researchers use a number of different techniques (instruction tuning, RLHF and the like) to make the model better at following user prompts.
So what does a model look like when it hasn’t been trained to follow instructions? (This is called a base model.)
Let’s take a look.
Here is an example of an instruct model, the normal one most people use.
[Image: example output from an instruct model]
And here is the base model, before instruction tuning.
[Image: example output from a base model]
Yep, they’re absolutely loony.
They can say incredibly unhinged things. In this image, I’ve restricted it to 100 tokens, otherwise, the model keeps yapping about nonsensical stuff.
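If you want to recreate this locally, here’s a minimal sketch using the Hugging Face transformers library (plus accelerate for device placement). I’m assuming the small Llama 3.1 8B base checkpoint here; the weights are gated, so you’ll need access approval, and you can swap in any base model you can actually run:

```python
# Sampling from a raw base model: no chat template, no instruction tuning,
# just next-token prediction continuing whatever prompt you give it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # base checkpoint, not the -Instruct one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The meaning of life is", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=100,  # cap the output, or the base model rambles on forever
    do_sample=True,      # sampling shows the unhinged side better than greedy decoding
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Point model_id at the -Instruct variant and wrap your prompt with tokenizer.apply_chat_template instead, and you get the polite assistant behaviour back.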
If you’d like to play around with the base model, you can try it here.
If you’d like to try out the best open-source AI model on the planet, the instruct version, you can try it here.
Moving forward, we’re going with shorter and more frequent newsletters based on the last poll.
Writing these newsletters takes heaps of time and research, and the premium newsletter is how I justify the time spent writing them.
If you want to receive more newsletters, you can sign up to premium here.
Also, if you have a problem with something I’ve said, please leave a comment so that I can reply and we can have a discussion. I appreciate hearing everyone’s varied views and why some people think I’m an idiot :).
How was this edition?
As always, Thanks for Reading ❤️
Written by a human named Nofil