AI Might be Dead Before it Even Takes Off

Regulation in the wrong direction + Why size will matter + The seriousness of the situation + A look at history

Welcome to edition #27 of the “No Longer a Nincompoop with Nofil” newsletter.

Here’s the tea ☕

  • Regulation might derail the AI revolution 💀 

Big tech couldn’t help themselves

Congratulations Sam, you did it. All that lobbying, all that flying around the world talking about how dangerous AI is and how it might end all life on earth, has finally led to something. The US government has released its first executive order on AI. It’s (kind of) ridiculous; it might as well have been written by Google, Microsoft, OpenAI and Anthropic (it probably was). Let me break it down for you.

The executive order on AI is a 111-page document of extremely broad-strokes language. It is rife with generalities, leaving it in the hands of the government to determine what falls under its jurisdiction.

Language

The generalities in the document are concerning, especially if this is carried forward into actual bills.

“Any foundation model that poses a serious risk to national security”

What about models that pose a risk but not a serious risk? Are those okay? What constitutes risk? Government agencies are going to create tests for new models to pass… Do these people even know what an LLM is? Of course not. They’ll be consulting the experts on the matter (big tech), which is great, except for the fact that they have their own ulterior motives.

The order sets a reporting requirement for any foundation model trained with:

  • ~28M H100-hours, or roughly speaking $50M USD of compute

  • Any compute cluster with a theoretical maximum of 10²⁰ FLOPS, which is roughly >50k GPUs

Restricting model compute and model size is like regulating based on the number of lines of code… it’s ridiculous. Here’s the funny part: the order specifically states “any compute power greater than 10²⁶ integer or floating-point operations”. How do you skirt around this? Just define a new number format, of course 🤷‍♂️.
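As a sanity check on those thresholds, here’s a back-of-envelope sketch. The H100 throughput figure is my own assumption (roughly 10¹⁵ FLOP/s sustained), not a number from the order:

```python
# Back-of-envelope check (my assumptions, not the order's): does ~28M
# H100-hours land near the 10^26-operation reporting threshold?
H100_FLOPS = 1e15            # assumed sustained throughput per H100, FLOP/s
hours = 28e6                 # the H100-hour figure cited above
total_flops = hours * 3600 * H100_FLOPS
print(f"{total_flops:.2e}")  # ~1.0e26, right at the threshold
```

So the two reporting triggers, the dollar figure and the operation count, are roughly the same line drawn in two different units.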

At the moment, only a handful of companies have this sort of compute power. So why is this a big deal? Over the next few years, this level of compute will look minuscule by comparison. There will be thousands of companies with far more resources.

Today’s supercomputers will be like tomorrow’s iPhones or smartwatches

So what’s this doing? It stops new players from entering the market. Big tech already has massive amounts of compute, and now it’s trying to restrict others from acquiring the same. Get ahead and yank the ladder up so no one else can reach you; bravo Altman, bravo 👏.

Will size matter?

But why limit based on those numbers, why that exact size? Well, we don’t actually know, but there are a few theories going around.

Firstly, that’s a big size. For comparison, the best open-source model right now is Meta’s Llama 2. Does it fall under this restriction? Nope, it’s not even on the radar…

  • Llama 2 was trained on 1.7M GPU-hours using A100s

  • The H100 is a far more powerful GPU than the A100

So the best open-source model is very far from the benchmark… for now.
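To put “very far from the benchmark” in numbers, here’s a rough estimate. The A100 throughput figure is an assumption on my part (~312 TFLOPS dense), not something from the order or from Meta:

```python
# Rough estimate of Llama 2's training compute vs the 10^26 threshold.
# A100 throughput is an assumption (~312 TFLOPS dense).
A100_FLOPS = 3.12e14            # assumed FLOP/s per A100
llama2_hours = 1.7e6            # Llama 2 GPU-hours cited above
llama2_flops = llama2_hours * 3600 * A100_FLOPS
print(f"{llama2_flops:.1e}")    # ~1.9e24, roughly 50x under 1e26
print(1e26 / llama2_flops)      # how many Llama-2-scale runs fit under the cap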

The issue is this supposedly arbitrary size. Why 10²⁶ floating-point operations? Well, if you do the math, it works out to roughly 1.7 trillion parameters. You know what’s rumoured by multiple sources to be around that size? GPT-4…
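That trillion-scale figure comes from inverting the usual compute rule of thumb, C ≈ 6·N·D (training FLOPs ≈ 6 × parameters × tokens). The token count below is a rumoured GPT-4-class figure, not an official one, so treat this strictly as a sketch:

```python
# Inverting C ≈ 6·N·D to see what model size 1e26 training FLOPs buys.
C = 1e26           # the order's threshold, total training operations
D = 13e12          # assumed training tokens (rumoured GPT-4-class figure)
N = C / (6 * D)    # implied parameter count
print(f"{N:.2e}")  # ~1.3e12, i.e. on the order of a trillion parameters
```

Different token-count assumptions move the answer around, but they all land in trillion-parameter territory, i.e. the rumoured scale of GPT-4.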

Unserious situation

You might be thinking, well, at least they’re trying, right? Actually, the whole situation is a complete joke. Do you know why Biden even issued the executive order? Apparently it was because he watched the new Mission: Impossible movie…

I have no words.

If Biden needed to watch Mission Impossible to find things to worry about, I’m not sure he’s capable of leading AI regulation. He might even start writing random doomer fanfics like Yudkowsky. But let’s be real, the biggest surprise here is Biden actually staying awake during the entire movie.

Where do you think the government got those FLOP thresholds from anyway? Why restrict at those sizes? Well, it turns out there was an old Reddit post that proposed regulating by compute size and used virtually the same rhetoric, language and method of calculating compute. The government may very well have taken a random Reddit post and used it as the basis for the compute thresholds regulating the most important technological revolution in history.

But wait, there’s more!

Let’s take a look at history, shall we? Back in 2019, OpenAI wanted to release their new GPT-2 model but was afraid to release it all at once… That model had 1.5B parameters. Did anyone even know about GPT-2 when it was released?

We are now at GPT-4 and beyond. Where was the harm? Most people don’t even know what GPT-2 is; they’ve only just heard about ChatGPT.

Just behind today’s horizon it’ll become really dangerous, you’ll see!

Big Tech

This mindset isn’t new. When the PlayStation 2 was released back in 2000, there were controls on its export because the console was believed to be “sufficiently powerful to control missiles equipped with terrain reading navigation systems”. The PlayStation 2… It had an 8 MB memory card! These days, SD cards the size of your fingernail hold over 100 GB. In hindsight we can see just how ridiculous this looks; I’m genuinely laughing while writing this.

The same goes for what's happening right now. We’re going to look back on these regulations and laugh at how silly they are. There will come a time when the power of your mobile phone will surpass the supercomputers of today.

A pause

I want to make one thing quite clear though.

I’m not against regulation of AI. I’m against dumb laws and regulations that will stifle innovation and hurt the open-source community, which, for all intents and purposes, is the average person’s best hope of democratising AI.

Regulating AI at the application layer is a much more common-sense approach. How it’s being used is what we care about. How AI is applied in healthcare, education, the military etc. should 100% be monitored, with safety systems put in place. After all, the bias inherent in AI systems is a mirror of our own. We have a long way to go.

A note from me

I know, it’s been a while. Things have been hectic, not only work, but personally as well. I’ll announce some amazing news quite soon 🙂.

On the work side, I’ve spent quite a bit of time recently building AI chatbots in the education space, building with RAG & other techniques as well as hiring an AI engineer for TxM. Life is full on atm but I’m glad to be back to writing. If you are a premium sub, you won’t be charged for this month as I didn’t write for 3 weeks.

OpenAI dev day is on November 6th. See you soon.


As always, Thanks for reading ❤️

Written by a human named Nofil
