LLMs Get Political

Covering your password won't be enough + Researchers are using AI to write papers + Customise ChatGPT

Welcome to edition #23 of the “No Longer a Nincompoop with Nofil” newsletter.

Here’s the tea ☕

  • Bias in, bias out. This research paper analyses the political bias of each major LLM 🪞

  • AI can steal your password by hearing you type 🔉

  • Many researchers have been found to have used ChatGPT in their papers 📝

  • You can now customise ChatGPT for free 🤖

Build Your Own AI Tools!

You no longer have to be an engineer to build your own tools that harness AI and put it to work for you. With PixieBrix and its new AI Certification, you can learn how to build your very own AI tool with ChatGPT. That's right. You can build the perfect tool to improve your workflows and automate actions.

Whose opinions are LLMs sharing?

A new research paper has highlighted the inherent political bias in all major LLMs. I’m not surprised to see OpenAI’s GPT-4 lean so far left, but I am surprised to see Meta’s LLaMA being the most right-wing. People easily forget that these models are trained on tonnes of text data from us: you, me, and anyone else who posts online. What’s more biased than a human with the anonymity of the internet? This is why companies are spending significant resources to figure out how to align AI models.

This is also one of the reasons why open source is so valuable. We can see what a model has been trained on; we know what it has seen. We have no idea what GPT-4 has been trained on. And this isn’t even taking into account RLHF, which, for better or worse, is one of the worst things about LLMs, especially Llama 2.

Regardless, people need to remember that these models will output what they’ve been intended to output. If they favour one politician over another, it is because they are meant to. Perhaps not all the time, but certainly in some cases. Humans train the AI, and the AI is trained on human output. The cycle of AI.

The scary part? Research by Google suggests that LLMs tend to repeat a user’s opinions back to them, even if it’s wrong. They’re sycophantic and people are using them as therapists. Probably not a good idea.

AI-powered scams are going to be ruthless

A group of researchers from the UK have trained an AI model to listen to your keystrokes and identify what you’re typing. That’s right, the model can accurately detect what you’re typing simply by hearing you do it; they even claim 95% accuracy. The technique, which they call an “acoustic side-channel attack” (ASCA), uses a deep learning model that translates the sound of your typing into a visual representation, which is then analysed. The model itself is CoAtNet, an image classification model created by Google.
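
For a rough sense of the pipeline, here’s a minimal sketch of the spectrogram-and-classify idea, assuming PyTorch, torchaudio and torchvision. The ResNet-18 is only a stand-in for CoAtNet, and the key set and audio parameters are made-up placeholders, not the paper’s actual setup.

```python
import torch
import torchaudio
import torchvision

# Hypothetical setup: guess which key was pressed from a short clip of a
# single keystroke. The paper pairs a spectrogram "image" of the sound with
# an image classifier (CoAtNet); a plain ResNet-18 stands in for it here.
NUM_KEYS = 36  # placeholder: letters + digits

to_spectrogram = torchaudio.transforms.MelSpectrogram(
    sample_rate=44_100, n_fft=1024, hop_length=256, n_mels=64
)

classifier = torchvision.models.resnet18(num_classes=NUM_KEYS)
# Spectrograms are single-channel, so swap in a 1-channel first conv layer.
classifier.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

def predict_key(wav_path: str) -> int:
    waveform, sr = torchaudio.load(wav_path)       # (channels, samples)
    waveform = waveform.mean(dim=0, keepdim=True)  # mix down to mono
    spec = to_spectrogram(waveform)                # (1, n_mels, frames)
    spec = torch.log1p(spec).unsqueeze(0)          # log-scale, add batch dim
    with torch.no_grad():
        logits = classifier(spec)                  # (1, NUM_KEYS)
    return logits.argmax(dim=-1).item()            # predicted key index
```

The hard part of the real attack presumably sits outside this snippet: isolating individual keystrokes from a longer recording and collecting labelled samples of the target keyboard to train on.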

Now I don’t think criminals are nearly smart enough to create something like this, but I wouldn’t put it past them. This is a mere indication of the kinds of things that could be done if the right people tried hard enough. What is going to be really interesting is when people give AI models the docs for running malicious software.

A different research paper found LLMs to be very effective at reading the documentation of a tool and then “understanding” how to use it. Imagine a scenario where someone trains a dozen AI agents on malicious phishing software and they go out and attempt to scam people. We have no idea what could be done to prevent someone from doing this. Could it be done? Certainly. Has it been done? Perhaps. Will it be done? Absolutely.

ChatGPT researchers are out in force

A few days back, if you searched “As an AI language model” in Google Scholar, you would have found numerous papers with the AI’s classic line in them. I can’t even fault people for using AI to write things anymore; it’s really good. But to leave this particular line in a paper you put out on the internet is downright ridiculous. What makes it even funnier is when it shows up in a “DEFENSE AGAINST CYBER-ATTACKS” paper.

And these are only the papers that left an obvious trail of AI use. Imagine how many more were smart enough to cover their tracks. With no concrete way of determining whether something was written with AI, it’s impossible to tell what’s written by a human. Welcome to the future.

Customise ChatGPT for free

OpenAI released custom instructions this week and it’s actually a big deal. Isn’t it really annoying when ChatGPT keeps saying “As an AI language model” in your conversations? Well, there won’t be any more of that. Custom instructions let you tailor ChatGPT to act a certain way based on what you tell it about yourself and how you want it to respond. It’s a very handy change and a welcome one, considering it doesn’t require a Plus subscription.
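
If you use the API rather than the ChatGPT app, the closest analogue is a standing system message. Here’s a minimal sketch assuming the openai Python package; the model name and instruction text are placeholders, and this only mimics custom instructions rather than using the feature itself (which lives in ChatGPT’s settings).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Placeholder "custom instructions", sent as a system message with every chat.
CUSTOM_INSTRUCTIONS = (
    "I write a weekly AI newsletter. Keep answers concise, skip boilerplate "
    "disclaimers, and never open with 'As an AI language model'."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder: any chat model works here
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Summarise this week's AI news in three bullets."},
    ],
)
print(response.choices[0].message.content)
```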

Extra Reading

  • OpenAI, Google, Microsoft and Anthropic partnered with DARPA for its AI Cyber Challenge [Link]

  • PlayHT released their new text-to-voice AI model and it looks crazy good. Change the way it’s delivered by describing an emotion, and much more [Link] [Link]

  • A paper by Google showcasing that AI models tend to repeat a user’s opinion back to them, even if it’s wrong. Here’s a thread breaking it down [Link]

  • Medisearch comes out of YC and claims to have the best model for medical questions [Link]

  • Someone made a way to one-click install AudioLDM with a Gradio web UI [Link]

  • A way to make llama-2 much faster [Link]

  • Nvidia released the code for Neuralangelo, an AI model that reconstructs 3D surfaces from 2D videos [Link]

  • Create digital environments in seconds with Blockade Labs. Wild stuff [Link]

  • Layerbrain is building AI agents that can be used across Stripe, HubSpot and Slack using plain English [Link] Looks very cool

  • A great article on LLMs in healthcare [Link]

  • Implement text-to-SQL using LangChain, a breakdown [Link]

  • SDXL implemented in 520 lines of code in a single file [Link]

pssst. If you’d like to learn how your business can leverage AI, feel free to reply to this newsletter 🙂

As always, Thanks for reading ❤️

Written by a human named Nofil
