Build an App in Seconds
Welcome to the No Longer a Nincompoop with Nofil newsletter.
Here's the tea 🍵
Replit’s Autonomous AI Agent
Amazon saving billions with AI coder
Klarna ditches SaaS altogether
Photorealism for free is now available
xAI is a frontier lab now
Personal Update
I gave a lecture at the University of Sydney!
I spoke about the current AI landscape, main players, how businesses should approach AI and a lot on how to build with AI.
If you would like me to speak at your school, university, or company workshop, feel free to email me about it here.
I’ll be writing a lot more on how to use AI to build software for the Avicenna newsletter (my consultancy).
This will be a lot more technical, talking about how I use LLMs with a business first approach.
If you’d be interested in reading about that, you can sign up here.
The first functional AI Agent
Cancel whatever plans you have after reading this newsletter and spend a few minutes playing around with Replit’s new AI Agent.
Seriously, I highly recommend watching this video showcasing how it works.
For the first handful of people who respond to this email with an example of an app they want built, I’ll have Replit’s AI Agent try to build it and email them back the results.
I might even showcase some of them in the next newsletter.
Now, let me explain why this is something worth using.
There are many AI code editors out there. You have Cursor, which is one of the most widely used and valuable code-generating editors.
I could write an entire newsletter on it.
There are others like Supermaven and Codegen.
Replit itself also has its own AI that helps write and debug code. I wouldn’t say it’s as good as simply using Claude by itself though.
But you see, Replit has an advantage that practically no one else has: actually publishing things on the internet.
What does that mean?
You can code an entire app on your laptop, but getting the app on the internet where other people can use it is an entirely different and tedious task.
Replit fixes this.
With Replit, all you have to do is write the code.
You don’t need to worry about which version of Python you need, you don’t need to worry about packages, or dependencies, or anything really.
It takes like two clicks and your app will be usable on the internet by anyone.
This ability is what makes Replit so useful, and positions it in such a perfectly unique way in the current AI meta.
Combine AI coding with Replit’s deployment abilities and you now have the ability to code and publish applications, websites, whatever you like, to the internet in minutes.
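To give you a sense of how little “the code” can actually be: a minimal web app of the kind Replit will happily serve looks roughly like the sketch below. This is my own hypothetical Flask example, not Replit’s template and not the Agent’s output, just the shape of the thing.

```python
# main.py: a hypothetical minimal app of the kind Replit can host.
from flask import Flask, jsonify

app = Flask(__name__)

# A tiny in-memory "database" of restaurants, purely for illustration.
RESTAURANTS = [
    {"name": "Taco Town", "type": "Mexican", "rating": 4.5},
    {"name": "Pho Real", "type": "Vietnamese", "rating": 4.8},
]

@app.route("/restaurants/<cuisine>")
def by_cuisine(cuisine):
    # Filter by cuisine type, the kind of feature a restaurant finder needs.
    matches = [r for r in RESTAURANTS if r["type"].lower() == cuisine.lower()]
    return jsonify(matches)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the hosting environment can route traffic to the app.
    app.run(host="0.0.0.0", port=8080)
```

Replit installs the dependencies, runs the file, and gives the app a public URL. That last part is what normally eats an afternoon.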
Replit’s AI Agent is the first truly autonomous agent that can code an app and also publish it to the web.
It can implement Stripe payments, Google sign-in (SSO), and has a bunch of other hidden talents.
It even suggests which frameworks to use and sometimes insists on using one over another!
Turns out it has its own internal APIs it can use as well. I don’t know why they don’t make this public knowledge so we know what we can work with.
This is the first time you can go from prompt to published app, and it’s really something to behold.
I built a restaurant finder that lets you filter based on type and ratings. It’s very basic considering it only took two prompts to build.
I didn’t write a single line of code.
I clicked two buttons to deploy it on the internet for you to use.
That’s the power.
You’re using an app I didn’t code and took no effort to host on the internet.
What do you think happens when someone uses 100 or 1000 prompts?
This is a video editing app built entirely by AI.
11,000 lines of code.
All it takes is a bit of perseverance.
Not just a gimmick
Many people in the software industry think AI programming is a gimmick.
“It’s nowhere near good enough for the work I’m doing”.
“It can’t code in the language I’m using”.
“Let me know when it can work across a dozen files”.
Amazon
Take Amazon Q for example.
Amazon Q is a new AI coding assistant Amazon is using to update their software.
According to Andy Jassy, CEO of Amazon, upgrading an application to a new version of Java (a programming language) typically takes 50 days.
Q has reduced this to a few hours.
In under six months, they’ve been able to upgrade more than 50% of their entire production systems.
Their engineers shipped 79% of the AI’s suggestions as well.
This means the AI would open a ticket for an issue and suggest an update, and, 79% of the time, the engineers would look at it, press a button, and the update was implemented.
Zero human involvement besides checking.
This has saved them over 4,500 developer years of work.
At ~$250k per developer year, that’s roughly $1.1 Billion in effective savings from using AI for grunt work.
Don’t listen to Goldman when they say the ROI on AI would be disappointing.
Anyone who says this is either lying or ignorant.
In fairness, there is a critical question most companies don’t consider, and answering it is half the work I do for companies.
Where is it worth implementing AI?
This is the million dollar question.
As well as writing this newsletter, I’m also the Co-Founder of Avicenna, an AI + brand consultancy that enhances human experiences using AI. We build products for companies across industries.
Klarna
Klarna has stopped using both Salesforce and Workday, two big SaaS products it used to help manage its business.
I’ll be honest, I’m a bit skeptical of Klarna’s AI claims ever since they said their AI chatbot does the work of 700 people.
Regardless, I do think this is a sign of what’s to come.
Companies, if led correctly, can build their entire tech stack internally, exactly how they want it.
I’m currently working with two enterprise companies to do just this.
It has never been easier to build software.
Not for companies.
Not for people.
This also means the future will be filled with extremely personalised software experiences.
Imagine AI crafting the user experience in real time as the user interacts with a product.
What is possible now was thought impossible six months ago.
What was possible six months ago was thought impossible a year before that.
What is considered impossible now will not be impossible in the near future.
Software is changing.
These images are not real
There’s a new image generator that’s really bloody good. Flux is open source, which means, if you’re bothered enough, you can use it as much as you want, for free.
She is not real
One particular thing that makes this model really good is that it can generate text in images practically perfectly.
Within the year, we’re going to have basically perfect image generators with text.
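If you want to try it yourself, the open-weight models run locally on a decent GPU. Here’s a minimal sketch assuming the Hugging Face diffusers FluxPipeline and the FLUX.1-schnell weights; treat the exact settings as indicative and check the model card for specifics.

```python
# A minimal local Flux sketch, assuming the Hugging Face `diffusers` FluxPipeline.
# FLUX.1-schnell is the fast open-weight variant; it still needs a capable GPU.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = pipe(
    prompt='A cafe chalkboard that reads "Fresh croissants, baked daily"',
    num_inference_steps=4,  # schnell is distilled to work in very few steps
    guidance_scale=0.0,     # schnell is trained without classifier-free guidance
).images[0]
image.save("flux_sample.png")
```

The text rendering is the part worth paying attention to; the chalkboard prompt above is exactly the kind of thing older models would garble.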
You can also train Flux on entire products: given a set of images, it can then generate any image in that style.
Brain Flakes is a type of kids’ toy, like Lego. Using Flux, you can generate any kind of image in the Brain Flakes style.
Like this chair.
Read more about this here.
People are already using AI chat bots to practise texting girls that might lose interest…
Just wait till they add Flux avatars to these chat bots.
Note: If an image is good enough that it’s impossible to tell if it’s real or not, there is no way to detect if it is AI generated. Not yet anyway.
More examples of people generating some pretty cool things with Flux [Link] [Link] [Link]
How to generate web page ad with Flux [Link]
Someone generated an image and animated it using Flux + AI [Link]
People are also using it to generate game UIs. For some (🤷) reason, they’re NSFW [Link]
Flux can also generate hyper-realistic CCTV images, which is worrying because it could be used to fabricate evidence [Link]
If you want to learn more about how to prompt with Flux, and how different variations of the same word and the positioning of said words affect the image, check this thread.
You can even fine tune Flux with your own images [Link]
The simplest way to use Flux [Link]
Hiring influencers to make content for you might be a thing of the past when AI video gets better [Link]. Or perhaps it will be even more in-demand.
xAI is here
Elon Musk’s xAI is previewing their new AI model, Grok 2, and it’s good.
In the span of a year, these companies have caught up (publicly) to OpenAI’s GPT-4.
Anthropic with Claude 3.5 Sonnet
Google with Gemini 1.5
Meta with Llama 3.1 405B
xAI with Grok 2
I actually think xAI can build some of the best models on the planet, particularly because of the insane amount of compute they have.
A funny thing they’ve done is integrate the new Flux AI image generator into Grok 2.
Their guidelines put "illegal activities" and "deepfakes or misleading media" off limits, but, naturally, people are making some deranged images.
This is not my image. Randomly taken from X.
Its guardrails seem to be a bit loose. In fact, I’m not sure they even have guardrails on this thing.
Problem is, most people have no idea what Flux is, so they think Grok itself is generating the images.
But there’s a bigger problem here.
Some people are freaking out over the fact that Grok will do anything you ask of it.
Disregarding image generations, it will answer basically any question you have.
Is this a bad thing?
If an AI answers any question you have, is that an aligned AI?
If an AI model starts giving me a lecture on what I can and can’t ask it to do, is that a misaligned AI?
Who is the AI to say what is moral or not?
Who gets to decide what the moral code is?
Should AI models answer any question?
The reality is, AI models are aligned to whatever their creators believe.
I think these image generations are going to cause an uproar now, but given some time, people will stop caring.
The text generations, however, are a different story.
Tomorrow’s premium newsletter will showcase a very interesting study on how AI can manipulate people’s beliefs, even strongly held conspiracy theories.
You can sign up here, or try a one month free trial here.
If you’d like to read more newsletters every week covering everything happening in AI, consider signing up to my premium newsletter. It’s $5/month and helps me sustain this newsletter 🙂.
How was this edition?
Thanks for reading ❤️
Written by a human named Nofil