AI has done the impossible
You're thinking about AI the wrong way. Big Tech, Creativity and Terminators - just another week in AI
Welcome to edition #8 of the “No Longer a Nincompoop with Nofil” newsletter. I’ve had a blast writing this one, mainly because of the last section. You don’t want to miss it.
Here’s the tea ☕
AI has done the impossible 🫢
Creativity doesn’t mean what you think 🫧
Is this how we build Terminators 👾
Big doesn’t mean slow
Something strange happened last week. Besides it being the craziest week in AI history (something I’ve been saying quite often these days), a few announcements took me by surprise. You see, there’s this company called Adobe that you’ve probably heard of. They’re this behemoth of a company that doesn’t do all that much these days besides acquire smaller companies like Figma. “Small” here being relative, as Figma cost them only $20 billion. So what did Adobe do that was so surprising? They actually shipped a product that’s genuinely new and exciting! Adobe announced their Firefly program, which includes AI image creation, video editing, illustrations, graphic design, 3D modelling and more. You know what? It actually looks so good. AI has made big tech companies move fast and ship products. This is truly a momentous occasion. (Not you Google, you clowns. While the world was announcing cool new tech, you guys announced… a waitlist 🤦‍♂️).
Change the scene & go from image to video!?
How creative can you be?
If you’ve been reading online discourse surrounding AI you might’ve seen a lot of people being very gloomy and having a ‘doomer’ mentality. Why bother learning anything when AI will be better at it than us anyway? Why should I learn to code when AI will code with far fewer mistakes? Why learn to design when AI can create any type of design for anything, having seen every type of design there is to see? I can understand the thought process, I really can. But there’s a different way of looking at it, at least for now.
Should you learn to code? Or design websites or graphics? Frankly speaking, I don’t know, I don’t have the answers. But think of it this way. Hypothetically, if you could code, what would you create? If you could design, what would your designs look like? The technical barrier of entry to creativity is being destroyed before our very eyes. Can we do anything to stop it? Not a chance. So there’s no point in being sad about it. Instead, ask yourself: how creative can I be? What would I create if I could create anything? Creativity is no longer locked behind years of mastery of technical tools. Soon, the only thing separating you from someone more creative than you will be imagination. So, how far does your imagination stretch? What are the bounds of your originality? What does it look like when something is brought into this world by you?
After saying all that, I still have to ask: is this a good thing? Is it a good thing that I, someone so unartistic I can’t even draw a straight line with a ruler, can “create” artworks that would otherwise take years to produce? I don’t know. It does feel strange to enter a prompt for what I’m imagining and have it just pop up in front of me, like it’s too easy. For better or worse, that’s just how it will be from now on.
Terminators & Muffins
Terminator 1 & 2 are some of my favourite movies ever. They really encapsulate the horror of a human killing machine. Everything in those movies is just great. So while I was looking at this incredibly interesting research paper that discusses analysing images I couldn’t help but be reminded of Terminator. I know what you’re thinking, what on earth is this guy talking about? Hear me out a second. Take a look at these two gifs and meet me down there afterwards. Trust me.
ViperGPT - complex Q&A on images
T-800 completing a task through analysis
So in the first gif we analyse the image and work out how many muffins there are and how they could be divided between the two kids. We first have to count the muffins, then count the kids, and only then can we decide how many muffins each kid gets. This can be done in video form as well.
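The clever part of ViperGPT is that a language model writes a short Python program against a simple vision API, and running that program produces the answer. Here’s a minimal sketch of the idea for the muffin question. The `ImagePatch` class and its `find` method follow the interface described in the ViperGPT paper, but the class below is a stub with hard-coded detections standing in for a real vision model:

```python
# Sketch of a ViperGPT-style program. In the real system, ImagePatch.find
# calls an object detector on the image; here the detections are hard-coded
# stand-ins so the example is self-contained.

class ImagePatch:
    def __init__(self, detections):
        # detections: list of object labels a vision model "found" in the image
        self.detections = detections

    def find(self, object_name):
        # Return one patch per detected instance of object_name
        return [d for d in self.detections if d == object_name]

def execute_command(image_patch):
    # Code like this is what the LLM would generate for the question:
    # "How many muffins can each kid have for it to be fair?"
    muffins = image_patch.find("muffin")
    kids = image_patch.find("kid")
    return len(muffins) // len(kids)

# Eight muffins, two kids -> four muffins each
image = ImagePatch(["muffin"] * 8 + ["kid"] * 2)
print(execute_command(image))  # prints 4
```

The point is that the reasoning steps (count muffins, count kids, divide) become explicit, inspectable code rather than something hidden inside a single model’s forward pass.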
If you pay close attention to this opening scene in Terminator 2, the T-800 (Arnold) gives itself a task - “Acquire Transport”. You can see it on the right hand side right at the beginning of the clip. He even lists out certain parameters he wants the desired vehicle to have underneath. He then looks at every single motorbike and car and instantly analyses what model it is and what its specifications are. After analysing each one he decides which vehicle best matches what he’s looking for.
When you boil it down, this is exactly what’s happening in the first gif as well, and this is exactly how we will give vision to humanoid robots in the near future. Obviously they won’t be human killing machines (hopefully 🤞) but Terminator 2, way back in ‘91 showcased a pretty realistic depiction of how a robot “sees” and we’re finally at a point where we’re actually building this technology. At the end of the scene he then gives himself a new task - “Acquisition of Suitable Clothing”. You know what happens next.
AI got me feeling like
I curated a Reddit post with over 50 links to some of the crazy things people have been building over the past week. A lot of you might have joined this newsletter after seeing one of my posts on Reddit; welcome, and thanks for joining 😊. If you’d like to check out the post, you can access it here.
As always, Thanks for reading ❤️
Written by a human named Nofil