🤖 Stop AI Before It's Too Late

PLUS: A Look At Red Teaming & Beyond

What's up? You're reading Inclined AI. Dust off your waffle maker and treat yourself this morning.

Here's what’s cooking:

  • 1,100+ notable names petition to stop AI development for six months

  • Red Teams help create safeguards, but what’s next?

  • More fashion brands are using generative AI

  • Journalists wonder about workers’ futures

A LOT OF TECH EXPERTS WANT AI COMPANIES TO STOP & THINK

A vast swath of people, ranging from CEOs of large companies to prominent deep learning professors, wants AI research to slow down.

It’s a stark reminder that AI will change how society operates and shake the foundations of our reality. This letter calls out the lack of long-term planning in model development and transparent safety and risk assessments. 

To put it another way: the big AI bus speeding down the interstate is fascinating to watch, but a bunch of people riding in it just realized there’s no driver.

Prompt: a bus balancing on a cliff. Style: Canon Camera

Are we in over our heads?

The letter calls out GPT-4 as the final straw and quotes OpenAI to punch up its point. It’s clear they believe Sam Altman and Co. are the lead culprits.

Altman responded, claiming that the Future of Life Institute (the letter’s author) is preaching to the choir and that OpenAI is constantly re-evaluating its research and safety guidelines.

But the real name getting mentioned all over this story is Elon Musk. 

Musk, who left OpenAI in 2018, plans to launch his own competitor soon. So some people claim he’s using this letter to try and delay OpenAI and give himself a second to catch up.

Whether you buy that or not, the theme of this letter is “control.” Everyone is reaching for the steering wheel to try and grab ahold of the narrative. 

Does that mean we should stop or keep the AI bus rolling?

Check Out Our Sunday Edition

You can dive into more AI news and topics with us every week by subscribing to our premium edition.

This Sunday’s headline: The Many Faces of ChatGPT

If you’re not already subscribed, that’s okay. We’re offering a free 7-day trial, so you can read this one. That’s how excited I am to post it.

RED TEAMING IS PART OF A BIGGER SOLUTION FOR MAKING SAFER AI

The term “Red Team” sounds like something out of a CIA thriller or the Power Rangers movie, but it’s real.

AI researchers use red teaming to discover holes in their safeguards and figure out who might exploit them.

The practice is catching on in many circles because it reduces the harm caused by current AI models like ChatGPT and Bard.

But there’s always more we can do.

A recent WIRED article by an AI consultant and member of OpenAI’s GPT-4 red team lays out the idea of a “Violet Team.”

Violet teaming takes the concept further: other researchers develop preventative tools using the same system, provided they get time and early, transparent access.

Think of this like an “I’m rubber, you’re glue” situation: a second layer of protection that companies can run alongside red teaming.

The idea lets us take the power of something like GPT-4 and turn it on itself, keeping it accountable and in check.

We all want a future where AI is safe to use and doesn’t cause harm.

It’ll take all of us and tons of cooperation, but there’s no need to slow down if we can all push for safety and take preventative steps like these.

Quick Nuggets

🫡 Vinod Khosla gave his thoughts on AI, and he’s always worth listening to

📜 AI Rights: is this a thing we should be fighting for now?

🤬 Son of a B***h: this prompt makes ChatGPT cuss like a sailor

💰 Startup founders are trying to automate raising money since it’s so hard

❌ Chatbot errors happen, but why? The NYT explains

🤔 Workers are wondering if ChatGPT will replace them soon

📞 ChatGPT & Call Centers: is this place the perfect spot for AI models right now?

🤯 A new reality made with the help of AI

🧮 Math can change everything we know about chat AI

🥇 In the AI race, it’s nearly impossible to predict who’s ahead

🔮 The future of ChatGPT and more are all being discussed in a webinar today 

🇬🇧 The UK is trying to avoid adopting hardline rules for AI

🔍 People training AI are at risk of being replaced by AI

💾 Building a PC? Don’t rely on chat AI too much to help you out

💸 Jigso raised capital to create an AI assistant for company-wide use

👗 Fashion brands are finding more and more ways to use generative AI

🔥 Fresh Products

  • US Citizenship Test - this AI model helps you study (link)

  • FounderMate - helps you measure the market to find a fit (link)

  • Mealmind - all the planning in your kitchen done in one place (link)

  • Podsqueeze - get written content from your podcast (link)

  • AgendaAI - sets meeting agendas using Slack info (link)

  • MakeTales - create a unique short story to send to someone (link)

  • AskSumo - to find cost-effective app solutions (link)

  • Adrenaline - helps you find info in your GitHub Repository (link)

  • Nolan - AI-assisted script writing (link)

  • Momo - AI app for travel plans (link)

  • Socratic - helps managers find productivity insights (link)

Good Content, Homeless Pikachu

The next Pokémon movie looks a little different. I’m getting more drama vibes from this image leak.

That’s it for today. I hope you enjoyed the latest edition of inclined.ai - Davis.