A World of Wearables Lies Ahead

Meta, Humane, Tab & Rewind + Privacy Nightmares + The Law Sides With AI + Google Has Forgotten How to Ship

Welcome to edition #26 of the “No Longer a Nincompoop with Nofil” newsletter.

This one is long, but well worth the read. Trust me.

P.S. I’ve also launched Time x Money, a consulting firm specialising in transforming businesses with AI. If you want to learn how AI can transform your business and even develop tools to do so, feel free to reply to this email or email me at [email protected].

Here’s the tea ☕

  • Meta opened the floodgates 💨

  • The “law” picks a side 🎭

  • Man, Google… 🤦‍♂️

Okay, I must apologise first. Like everyone else, I made fun of Zuck when Meta showcased their Metaverse. At that point in time, I don’t think anyone could be blamed for laughing.

But credit where credit is due. They’ve turned this franchise around. When it comes to AI and their recent work, Meta seems to be making all the right decisions. Now, with their most recent announcements, they’re laying the foundation for the future.

A look into the future

In the last newsletter I mentioned that it’s possible OpenAI is going down the hardware route. Well, Meta just beat them, and everyone else, to it.

In partnership with Ray-Ban, Meta is releasing new smart glasses that can see what you see and hear what you hear. Of course, Meta isn’t advertising them that way. At the moment, the main advertising line is that you can livestream from your glasses straight to Instagram or Facebook. Next year, you’ll be able to ask Meta AI, the AI under the hood, questions about things you see.

The truth is, this is all simply a prerequisite for when we have 24/7 AI assistants that see everything we see and hear everything we hear. This is just the beginning.

Here’s what Meta also announced:

  • Meta AI - an AI assistant available in WhatsApp, Instagram and Messenger. Don’t even get me started on the privacy concerns here.

  • Create AI-generated stickers using Llama 2

  • Edit your photos on social media using AI. As if it couldn’t get any faker, now you can add AI to all your images. Using techniques like Meta’s own SAM (Segment Anything Model), this will get to a point where it won’t be obvious what is AI and what isn’t. The reality is that, even now, most people have no idea how good AI images really are; they can be easily fooled. According to Meta, images edited with AI will be marked as such, so we’ll see how that goes.

  • They’ve also essentially created a competitor to character.ai. They got famous celebrities like Tom Brady and Kendall Jenner to act as AI avatars. I genuinely wonder how much they paid to have these big name celebrities on their platform.

Finally, Meta also announced their AI Studio, which will allow anyone to build AI assistants. Business owners will be able to build customer service bots into their WhatsApp chats. Creators will be able to build their AI clones. Simply put, they’re making it really easy to build and interact with AI.

I know what you might be thinking: it’s cool and all, but there’s still a level of skepticism. I understand. I feel the same way. But there’s one thing I need to talk about.

“Creators will be able to build their AI clones”.

This sentence is important. Alongside the AI avatars of famous celebrities, just about anyone will be able to build an AI version of themselves and sell it online. Why is this important? Well, and I can’t believe I’m saying this, but: because of the Metaverse.

The Metaverse? Yep. The same Metaverse we all, myself included, clowned Zuck for last year. Remember the Paris meme? Just take a look at the future of the photorealistic Metaverse.

I actually wrote about this back in March. Meta has had this tech for a while and it’s only going to get better. Just imagine when people build these avatars and sell them OnlyFans style. It’s only a matter of time.

Updates

I initially wrote this newsletter on Sunday. It is now Tuesday evening where I am; for most readers (in the US) it will be Tuesday daytime. Over the last day, we got a look at Humane’s “pin” and two new AI wearables were announced: Tab and the Rewind Pendant.

My quick thoughts:

I don’t think Rewind’s Pendant will take off. This is a device made to record every single thing you, and everyone around you, say. Do we even want this type of recall? I don’t think so. It’s so… dystopian. Even for today’s digital age, being able to recall every word of every conversation just sounds wrong. Will this change? Perhaps. But there is no doubt that this is a privacy nightmare. (California doesn’t even allow recording conversations without consent.)

Tab, on the other hand, doesn’t keep transcripts of everything you say; it stores entities and facts rather than entire conversations.
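To make that distinction concrete, here’s a minimal, purely hypothetical sketch of the two approaches. Neither Tab nor Rewind has published their internals, so every class and field name below is made up; the point is just the contrast between keeping raw transcripts and keeping only extracted entities and facts.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: neither Rewind nor Tab has published how
# they store data. This just contrasts the two approaches described above.


@dataclass
class TranscriptStore:
    """Rewind-style recall: every word of every conversation is kept."""
    transcripts: list[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.transcripts.append(text)


@dataclass
class FactStore:
    """Tab-style recall: only entities and facts survive, not the exact wording."""
    facts: dict[str, list[str]] = field(default_factory=dict)  # entity -> facts

    def remember(self, entity: str, fact: str) -> None:
        self.facts.setdefault(entity, []).append(fact)


# The same conversation, stored two ways.
raw = TranscriptStore()
raw.add("Sam said the launch slips to November because of the chip shortage.")

distilled = FactStore()
distilled.remember("Sam", "said the launch is delayed to November")
distilled.remember("launch", "delayed by a chip shortage")
```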

Rewind, a company backed by a16z, also made its announcement right after Tab’s. Why? Because the reception was really good. Tab sold its initial 100 pre-orders in six hours. The cost? $600.

With these announcements, as well as the very likely scenario that OpenAI builds its own wearable, it seems like wearables really are the next frontier of AI integration. That said, I would be surprised if any new AI wearable company lasts long term.

Why?

Why couldn’t Apple or Samsung do the exact same thing with their phones, which are already owned by basically every person on the planet?

At the end of the day, no matter how great your idea or product, no matter how much momentum you have, even if you’re OpenAI, there will always be one thing that matters more than anything for the success of a company.

Distribution.

Also, Humane… I wouldn’t exactly call this a “Pin”. The fact that former high-level Apple employees think this is something ordinary people will put on their clothes is hilarious. Detached from reality imo. No wonder they unveiled it at a fashion show…

The law need not apply

One question everyone has when it comes to AI is its legality. How legal are AI images? Code? Music? Do they fall under copyright? Well, we’re getting our first glimpse of what the future will look like.

Microsoft has come out and said that they will defend anyone who is sued for copyright infringement for using Microsoft’s Copilots or the output they generate. This is a big deal. Why risk using any other platform if Microsoft can assure your safety?

Here’s the funny part.

Microsoft hasn’t actually changed the TOS. Take a look for yourself:

Source (under AI Services)

I’m not sure what’s going on here, or what the legalities are of saying one thing while the TOS says another. However, Microsoft isn’t the only company offering copyright indemnification.

Both Adobe and Shutterstock are offering indemnification for anything created using their AI image generators. I think this is a clear sign that it’s unlikely any of the current lawsuits will amount to anything. Not a good sign for artists who want compensation for their work being used in the training data of AI image generators.

But wait, it gets worse!

As mentioned in the last newsletter, OpenAI recently released DALL-E 3. Unlike other large AI image generators, the kind-hearted folks at OpenAI have given creators a way to opt out of having their work included in image generation. How, you may ask? It’s simple really. Let me break it down for you.

Here are the steps:

  • Compile every single image you own the rights to

  • Upload all of these images to some form provided by OpenAI

  • Trust that OpenAI will use these images to block generated images that look like yours

For many artists, this is an insane amount of work they simply won’t do. Of course, OpenAI knows this. In fact, I wasn’t even able to find this so-called form where you upload your images. Whatever the case may be, OpenAI have given themselves an out.

Google drops the ball… again

Ah Google, I almost feel bad talking about them these days considering it’s almost never good news. Once again, it is not good news.

Firstly, Google plugged Bard into things like Travel, Workspace and YouTube. From what I’m hearing, it’s terrible. I’m not surprised; there aren’t any good AI products that work really well with online tools, at least in my experience. Read more here.

More important, however, is the news that Amazon is investing up to $4 billion into Anthropic. Why is this bad for Google? Because Google has already invested $300m in Anthropic. A large part of that deal was Anthropic buying compute from Google, which it will continue to do, but Google just lost another opportunity to invest big in a leading AI lab. Claude 2 is no joke; it’s comparable to GPT-4 in many cases.

What would be funny, and this is genuinely possible, is if six months down the line, Microsoft comes and offers Anthropic a boatload of money, several billion let’s say, to make Microsoft its preferred cloud provider. This would give them a lot of training data for the new AI chips they’re working on, called Athena. I would not be surprised to see this happen. Microsoft has been throwing money around like it’s nothing, and this, once again, would give them an edge over Google.

There is, however, a bigger picture here.

Reality

I’ve said this a few times already but it cannot be stated enough. A deal like this highlights, once again, the reality of AI and the future of the world. There are very few companies in the world that can finance building such powerful technology. There are even fewer companies with the talent to do so.

A deal this big always reminds me that the future of AI, and consequently the world, is really being built by a handful of companies. We are being pushed into a new age at the whims of these gigantic entities. We have zero control over how things play out. This is the reality of AI.

Someone commented on my last newsletter that it sounds like I’m enthusiastic about companies having a monopoly in the AI space. I’m not enthusiastic, I’m realistic. The only alternative to a monopoly on AI is open-source. This is why I’m such a big fan of open-source.

Open-source is the reason “Open”AI even exists. It’s the reason “Generative AI” is a thing. If Google hadn’t open-sourced their research on Transformers back in 2017, you would not be reading this newsletter; it wouldn’t even exist.

The next newsletters are all going to be about the latest open-source models - Mistral (yes, that one), Qwen, Phi, StableLM and more - as well as new research. If you’re interested in reading about those, consider supporting my work by subscribing to my premium newsletter.

I’m also looking for testimonials for this newsletter. If you would like to share your thoughts on my work and any benefit it has provided you, personally or professionally, I would love to feature it on my new website. Feel free to respond to this email or the poll below 🙏.

(I know this was long, others won’t be I promise 😅)

What'd you think of this edition?


As always, thanks for reading ❤️

Written by a human named Nofil
