Big Brother is Coming

Mind Reading, War & Anime. Just another day in the world of AI

Welcome to edition #4 of the “No Longer a Nincompoop with Nofil” newsletter. Over 200 AI-powered tools have been released in the last few days alone, but I haven’t been including them here - if you’d like to see more of those, just hit reply and say “YES”. I’d like to include as much usefulness as I can 🙂.

Here’s the tea ☕

  • Careful what you think 🫢

  • AI and War are a scary combination 😨

  • Creativity is evolving 📽️

Orwell was a visionary

A few days ago I said your voice is not your own anymore. Well, it turns out that soon enough, neither will your thoughts be. You see, something quite fascinating happened. Researchers from Osaka University ran an experiment in which they reconstructed visual images from fMRI scans using Stable Diffusion. Let’s take a step back and break down what you’ve just read into simple steps.

  • Researchers showed someone a bunch of images

  • They then ran an fMRI on the person’s brain to collect brain data while they viewed the images

  • The data from the scan was then fed into Stable Diffusion which then reconstructed the images they saw

  • Stable Diffusion also provided text descriptions of the images it was producing

  • None of the images were fed to the model. It had never seen them before

“We show that our method can reconstruct high-resolution images with high semantic fidelity from human brain activity… it only requires simple linear mappings from an fMRI”

The AI takes the noisy image from the brain scan and turns it into a picture.
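The key phrase in that quote is “simple linear mappings”: the idea is to learn a linear map from fMRI voxel activity to the diffusion model’s latent space, then let Stable Diffusion decode that latent into a picture. Here’s a toy sketch of just the linear-mapping half, with made-up dimensions and synthetic data standing in for real recordings (the actual decoding step needs the full diffusion model):

```python
import numpy as np

# Made-up dimensions: 100 voxels per scan, a 64-dim image latent
n_train, n_voxels, n_latent = 500, 100, 64

rng = np.random.default_rng(0)

# Synthetic stand-ins for real data:
# X = fMRI responses, Z = latents of the images being viewed
X = rng.standard_normal((n_train, n_voxels))
W_true = 0.1 * rng.standard_normal((n_voxels, n_latent))
Z = X @ W_true + 0.01 * rng.standard_normal((n_train, n_latent))

# The "simple linear mapping": closed-form ridge regression,
# W = (X^T X + lam*I)^(-1) X^T Z
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Z)

# Map a brand-new scan to a latent; in the real pipeline, Stable
# Diffusion would then decode this latent into a picture
x_new = rng.standard_normal((1, n_voxels))
z_pred = x_new @ W
print(z_pred.shape)  # (1, 64)
```

That’s the whole trick: no neural network is trained on the brain data itself, just a regression from scans to a space the image model already understands.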

This is borderline mind reading. The research isn’t particularly new either - researchers have been trying to reconstruct images from neurological data for decades, but generative AI just unlocked a gargantuan step forward in fidelity. By the way, it works on faces too! Obviously there are severe limitations to conducting an experiment like this (MRI machines are gigantic swirling metal cylinders), but the simple implications of this technology existing are insane. Running this while someone sleeps and then reconstructing their dreams is a very realistic use case. Use cases in crime, understanding psychological issues, treating trauma - the simple idea that we can read and visualise a person’s thoughts creates an entirely new avenue for dealing with anything and everything human-related. Maybe Spielberg was onto something when he filmed Minority Report…

Left column: images shown to a human. S1 & S2: the AI’s reconstructions.

I’d like to say we probably won’t see a normal-sized MRI machine in our lifetime, or be able to record our dreams on a whim. But with all the technological advancements currently taking place and the speed at which they’re occurring, I’d rather adopt a new mentality of “never say never”. There’s an anime called Psycho-Pass where criminals are identified by simply scanning their thoughts with a camera and converting their mental state into a quantifiable, measurable value. I don’t think we will live to see such a society, but I wouldn’t be surprised if it were to happen a few generations down the line. I guess sometimes missing out isn’t so bad.

Software is King, Even in War

Some people have been scratching their heads trying to understand why so much money is being poured into AI. Let me show you why.

Two years ago DARPA, a US research agency, alongside the US Air Force, ran simulated dogfights (aerial fights between two jets) between different AI models. The best-performing model then had a simulated dogfight against a human fighter pilot. I’m sure you already know where this is going. The AI beat the human pilot 5-0, and very convincingly at that.

So why am I talking about something that happened two years ago? Well, DARPA just released footage of a special new aircraft called the X-62 VISTA. It is completely controlled by AI - no human pilot needed. The plane flew for 17 hours, undertaking a dozen tests such as “advanced fighter manoeuvres” and dogfighting techniques. The jet itself is controlled by four different AI models, each with its own use cases and capabilities for handling the aircraft. An Air Force scientist working on the program said the jet is certainly capable of undertaking complex missions. Don’t forget, since this is an experimental model, there are still seats for human pilots. Soon enough jets will be built without seats for humans, able to reach higher speeds, pull g-forces no human could survive, and manoeuvre in ways humans could never manage.

AI flying machinery like planes and helicopters isn’t new either. DARPA has run several tests with AI flying helicopters with no pilot on board at all. But an AI controlling a fighter jet - reading mission data and circumstances in real time, coming to decisions and executing commands - is a whole different ball game. When the decision-making process goes from “can we risk lives to do this” to “let’s just get a squadron of AI jets to do it”, the way wars are fought suddenly becomes very different. The line surrounding what can and can’t be done starts to get very blurry.

Build your own anime in seconds

Ever wanted to star in your own anime? The team at Corridor have essentially built the world’s first video-to-anime workflow, using a green screen, Unreal Engine to create the backgrounds, and Stable Diffusion to transform the video into anime.

For a v1, I think it captures facial emotions shockingly well. It’s been only a few months and we already have an entire anime being built by a three-person crew. There is no doubt that someone will come along and streamline this process. It’ll look something like this:

  1. Choose a style of movie/show/anime you like

  2. Upload your images/videos

  3. Generate a story

  4. Build an entire show
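Under the hood, this kind of workflow is essentially per-frame image-to-image generation. As a rough illustration of the structure only (not Corridor’s actual code), here’s a minimal sketch where `stylize_frame` is a hypothetical stand-in for a Stable Diffusion img2img call and frames are plain NumPy arrays:

```python
import numpy as np

def stylize_frame(frame: np.ndarray, strength: float = 0.6) -> np.ndarray:
    """Placeholder for a Stable Diffusion img2img call.

    A real pipeline would send the frame plus a style prompt
    (e.g. "anime style") to the model; here we just blend the
    frame toward a flat tone so the sketch runs anywhere.
    """
    anime_tone = np.full_like(frame, 200)
    return ((1 - strength) * frame + strength * anime_tone).astype(frame.dtype)

def stylize_video(frames: list[np.ndarray]) -> list[np.ndarray]:
    # Each frame is processed independently; real workflows also need
    # temporal-consistency tricks to stop the style flickering between frames
    return [stylize_frame(f) for f in frames]

# Fake 4-frame clip of 64x64 RGB frames
clip = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(4)]
out = stylize_video(clip)
print(len(out), out[0].shape)  # 4 (64, 64, 3)
```

The hard part a streamlined product would have to solve is exactly that comment in the middle: keeping the generated style stable from frame to frame so the result looks like animation rather than a slideshow.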

This type of generative media is coming, regardless of the ethical concerns surrounding AI image generators like Stable Diffusion. These models have been trained on billions of images from the internet without consent from artists and creatives. When I ask an AI to make me a movie that looks like Scorsese’s work, will he benefit from it? The problem is that regulation is just too slow for how fast AI is moving. A week in AI is basically six months in the real world, and it’s not slowing down anytime soon. Generative media will certainly kill off some creators, but it will also create an entirely new type of creative. Open-source AI is shattering the barrier to high-quality production, allowing absolutely anyone to create content that can rival entire studios.

Is this a good thing? I’m not sure. I don’t consider myself qualified enough to even have an educated opinion on the topic. I don’t know what it’s like to create a show or a character and bring them to life. I can’t even begin to understand the thought process that goes into building entire worlds with characters that feel so real and relatable. So why don’t we take a look at what one of the most accomplished and acclaimed creatives ever, Hayao Miyazaki, has to say about AI, its ability to “create” art, and its place in his own creations.


As always, Thanks for reading ❤️

Written by a human named Nofil
