Remote Robot Workers Are the Future
Google's AI Physician is Better Than Humans + OpenAI Working in Military + Altman Looking To Build New Chip Empire + OpenAI Responds to NYT Lawsuit + Short Digest
Welcome to edition #32 of the “No Longer a Nincompoop with Nofil” newsletter.
Here's the tea ☕️
Last week I wrote about the new Rabbit R1 AI device, the device that makes your car self-driving, how people are splicing open source models together to make better models, the launch of the GPT Store + more. You can read about all of this and a lot more by subscribing to my premium newsletter.
P.S. If you’re a business owner or want to know how AI can help your company, I’ve been consulting and helping companies build AI products through my consultancy. Feel free to reply to this email if you’d like to have a chat :)
Robots, Robots & Robots
A new robot demonstration has been released from the team at ALOHA. Mobile ALOHA is a dexterous open source robot that can perform a number of tasks like cooking, cleaning, hanging clothes, calling elevators, loading a dishwasher, doing the laundry and even putting itself on charge! There are a few things to note though.
The video is obviously sped up so it’s not exactly ready for real life usage.
More importantly, the ALOHA robot is not fully autonomous. The actions in this video are being performed via teleoperation: an actual person is remotely controlling the robot. So imagine a future where you wake up, put on a VR headset, and clean an office building from your living room by piloting a robot. That’s what this is. Although, I don’t think that phase will last very long, if it happens at all. We’re designing robots to be fully autonomous. Once we figure that out, it’s going to be I, Robot IRL.
Mind you, this robot only cost $32k. Imagine what the big companies are up to.
AI will diagnose you soon
Google created an LLM based AI system designed for diagnostic reasoning and conversation called the Articulate Medical Intelligence Explorer (AMIE). They tested the model against 20 real primary care physicians. It’s a small sample size but the results speak for themselves.
The AI by itself did better than AI-assisted physicians.
“AMIE had greater diagnostic accuracy and superior performance for 28 of 32 axes from the perspective of specialist physicians, and 24 of 26 axes from the perspective of patient actors.”
Would you rather be diagnosed by a human doctor or AI?
Last week, OpenAI quietly removed the language in its policies prohibiting the use of its AI for “military and warfare”. Not even a few days later, it was announced they’ve partnered with the Pentagon & DARPA to work on cybersecurity tools.
I’m not too surprised by this considering Microsoft has worked extensively with the US military, and since OpenAI needs a lot of money, they’ll do whatever they need to please MS. OpenAI is probably the reason Microsoft briefly overtook Apple as the most valuable company in the world.
OpenAI affirms that they won’t be using their technology to develop weapons or cause harm. We’ll see how long that lasts.
At the WEF, Sam Altman spoke about AGI. You can watch a clip of it here. He expects AGI to change the world much less than we all think. Now, there are two ways we can see this.
Since Sam here is referring to “Artificial General Intelligence” and not “Artificial Super Intelligence”, he could be right and honest in his rhetoric. That is, he truly believes AGI won’t change the world that much. Again, since he’s not referring to ASI, I can see where he’s coming from.
That said, after hearing this I was reminded of a quote. Peter Thiel once observed that monopolies love to downplay their dominance and control, while startups love to spin up a narrative that they’re changing the game and are a lot bigger than they really are. I would not be surprised if that’s the case here. Altman knows how powerful GPT-5 is; he’s probably already seen something of the sort. If he is constantly in the media talking about job losses and disruption, it will scare people.
Will AGI change the world significantly? Probably. I think the more important thing here is that people think it will happen instantly. That’s unlikely. Although, the rate that we are going now, I wouldn’t write it off either.
Altman also appeared on a podcast with Bill Gates where they spoke about AI advancements, safety and ethics. Altman mentioned that they expect future models to be significantly better at reasoning, reliability, adaptation and personalisation. He also talked about multimodal models, including video as a form of input, and expects systems with 100,000 or even 1,000,000 times the compute of GPT-4.
You can listen to the podcast here.
Chips are the new currency
Altman is reportedly in talks with investors to raise funding for a new chip project codenamed Tigris. Just weeks before Altman was ousted and then reinstated, he was in the Middle East speaking to investors about building semiconductor manufacturing plants across the globe to rival NVIDIA. This is very interesting considering Microsoft also has its own in-house chip project, codenamed Athena. Companies are realising they’re going to need a lot (think 1,000,000x) more compute to build the next iterations of AI models. It will be interesting to see how OpenAI and Microsoft handle having their own separate chips given their partnership.
NVIDIA practically owns the market at the moment and has the best GPUs by far. AMD is doing its best to play catch-up, but it has a long way to go. Facebook is building its own data centres and chips. Google already has its own. Apple only shed its dependency on Intel in the last 5 years by building its own chips. Microsoft and OpenAI are now trying to shed their dependency on NVIDIA as well.
I didn’t think Elon was right when he said we’re going to have a chip shortage in the next few years. But with how much compute all of these companies need, and with really only one main supplier, I now think he’s right.
NYT Lawsuit Update
I previously wrote about the lawsuit the NYT has filed against OpenAI and Microsoft. OpenAI responded with a blog post calling out the NYT on the more slippery claims in the lawsuit, like the regurgitation of its articles. They confirm the two were in talks to form a partnership and say they only learned about the lawsuit by reading the NYT when it was filed (lol if true). They also mention the work they’ve done to partner with media companies, argue that training on copyrighted data is fair use, and say they’re still willing to work with the NYT to resolve their differences. You can read a breakdown of the blog post here and here, and the blog post itself here.
It seems like the US Congress is siding with the media industry and wants tech companies to pay for training their models on proprietary data. If this goes to court, I expect it to drag on for a significant amount of time. I also believe there are other reasons for this. Let me explain.
We are at a point in time where AI is eating everything. If you can’t see it just yet, you will soon enough. There is a struggle happening right now between people who side with humans and those who side with technology. There is a lot of noise about the disruption and catastrophe AI will bring, and make no mistake, people are spooked. Those with power are especially worried and have been very vocal about their concerns. I think this is why Altman constantly downplays the impact of AGI even while he talks about the inevitable change it will bring.
At the end of the day, lawmakers are humans too. They see the noise and are just as worried. To many people, AI is this alien thing we don’t understand, almost a threat to humans as a species. I won’t be surprised if Congress, politicians, the courts, everyone, sides with the humans; in this case, the media companies. If this goes to court, it will be an uphill battle for OpenAI, but one I think they will ultimately win. Why? Money > everything else.
Do you think NYT will win the lawsuit?
New Plan & Privacy Issues
OpenAI announced a new Team plan where you get a higher message cap for GPT-4, DALL·E (why would you even use this) and Code Interpreter.
The wrinkle some people have noticed: on the Team plan, your data is automatically excluded from training. It seems people weren’t aware that, by default, your chats with ChatGPT are used to train OpenAI’s models.

If you turn this off on the free tier, you lose your chat history. Excellent product design.

The only other way to exclude your data from training is to go to OpenAI’s privacy portal and file a request. Obviously this is not common knowledge; it’s hidden in the depths of their ToS and FAQs, and the process likely only exists to comply with certain privacy laws.
The Governor of Pennsylvania has announced a partnership with OpenAI to use AI alongside employees [Link]
OpenAI released the prompt logic behind their GPT Builder flow. If you’re building custom GPTs, definitely check it out. But also be careful about giving any private data to a GPT; they can leak their instructions and files quite easily. [Link]
OpenAI announced its first partnership with a university [Link]
It took about a week for the GPT Store to fill up with “AI girlfriend” bots. Even though this goes against OpenAI’s ToS, which prohibits GPTs that foster romantic relationships, it’s what people want. There’s a reason Character.ai is such a massive website. People are lonely and they’re using AI to help them, for better or worse.
Draw your product into existence
I’ve been meaning to write about tldraw for a while and I’ve finally gotten around to it.
With a blank canvas, you literally draw and describe what you want to happen. That’s it. Functional UIs in seconds.
Take a look at some examples.
Someone created a working workout timer with the ability to track exercises and weights. You can check it out here.
I even played around with it myself and built a simple timer in about 10 seconds.
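Under the hood, tools like this follow a simple pattern: capture the canvas as an image, send it to a vision-capable model, and ask for a single self-contained HTML file back. Here is a minimal sketch of what that request payload could look like; the model name, prompt wording and helper function are my own assumptions for illustration, not tldraw’s actual implementation:

```python
import base64


def build_make_real_request(png_bytes: bytes,
                            model: str = "gpt-4-vision-preview") -> dict:
    """Build a chat-completion payload asking a vision model to turn a
    sketched wireframe into HTML. Illustrative only, not tldraw's real code."""
    # Vision APIs typically accept images as base64-encoded data URLs.
    image_b64 = base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": model,
        "max_tokens": 4096,
        "messages": [
            {
                "role": "system",
                "content": "You are an expert web developer. Return a single, "
                           "self-contained HTML file implementing the UI in the image.",
            },
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Make this wireframe real."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            },
        ],
    }


# Build (but don't send) a request for a stand-in screenshot.
payload = build_make_real_request(b"\x89PNG fake screenshot bytes")
print(payload["model"])
```

The payload would then be POSTed to the model’s chat endpoint, and the returned HTML rendered into an iframe on the canvas, which is why the results feel instant.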
LeftoverLocals is a vulnerability that has been found in Apple, Qualcomm, AMD, and Imagination GPUs. It allows leftover data in a GPU’s local memory, which can include another process’s LLM responses, to be read and leaked. This is a big deal. I’m sure NVIDIA is laughing right now. Tweet [Link] Article [Link].
Two Google DeepMind scientists are planning to leave and create their own startup called Holistic. They’re already in talks to raise over £200 million, almost double what Mistral raised when it started. One of the founders, Laurent Sifre, was a co-author on DeepMind’s famous 2016 research paper on Go. I still remember how crazy it seemed that a machine could beat a human and how fascinated everyone was back then [Link]. Oh, and they’re basing themselves in Paris. France is absolutely killing it with its AI labs. A huge win for them.
PwC polled over 4,000 CEOs and 1 in 4 said they’re planning to replace workers with AI this year. 45% believe their companies won’t last the decade if they don’t change. They think 40% of time spent on routine tasks like emails and meetings is inefficient, with 60% of CEOs thinking AI can help solve this [Link]. Given how many layoffs have already occurred, I’m afraid things won’t be getting better this year.
Surya is a multilingual text line detection model for documents. This is the best text detection I’ve seen; it even handles multiple columns and headings. Great work here, it might be better than Tesseract [Link]. Thread [Link]
This paper compares the reasoning abilities of different AI models like Gemini Pro and GPT-4, and showcases the kinds of reasoning tasks actually used in these tests. GPT-4 is far beyond all other models, although I would love to see how Mixtral would do in these tests. Tweet [Link] Paper [Link]
Want to see ChatGPT break? Paste this image into it [Link]. Mine broke as well. No idea why this is happening.
Sam Altman talked about integrating articles from different media outlets into ChatGPT. SEO might soon be dead [Link].
If you’d like to read 2000 more words on all the crazy things happening in AI space, sign up for my premium newsletter!
How was this edition?
As always, Thanks for Reading ❤️
Written by a human named Nofil