ChatGPT went insane
Spotlight: Google pauses AI image generation
Welcome back!
We’re getting this week started with some wild stories about generative AI gone wrong. OpenAI and Google both faced some interesting challenges last week.
Let’s dive in.
In today’s Daily Update:
🗞️ ChatGPT briefly goes ‘insane’
🤖 Stability AI announces Stable Diffusion 3
📸 Google pauses AI image generator
🚨 AI Roundup: Four quick hits
Read time: 2 minutes
TOP STORY
🗞️ ChatGPT briefly goes ‘insane’
Image: DALL-E 3
Last week, ChatGPT users took to Reddit to share unexpected outputs from OpenAI's chatbot, with some Redditors reporting that it was "going insane."
The details:
Some users shared Shakespearean rants that ChatGPT generated during their conversations.
Others posted examples of ChatGPT losing its ability to respond to questions coherently.
In most cases, the chatbot’s responses began coherently before turning into (sometimes Shakespearean) nonsense.
OpenAI acknowledged the bug and fixed it last Wednesday.
The bigger picture: ChatGPT going hilariously off-script is a reminder that AI remains far from the kind of autonomy that would threaten us. Developers are still a long way from building AI systems that rival human intelligence.
Image: an example of a Shakespearean rant from ChatGPT
AI TOOL OF THE DAY
🤖 Stability AI announces Stable Diffusion 3
Source: Stability AI
Stability AI just revealed its most capable text-to-image model to date. Stable Diffusion 3 features improved performance on multi-subject prompts, better image quality, and stronger spelling abilities.
Stable Diffusion 3 is currently available in early preview. Sign up to join the waitlist here.
BUSINESS SPOTLIGHT
📸 Google pauses AI image generator
Source: The Verge
Google has paused Gemini's image generator after it produced historically inaccurate images, such as racially diverse Nazi soldiers.
Key points:
Social media users initially complained that the tool depicted historical figures like the U.S. Founding Fathers as people of color.
Gemini also generated racially diverse images of Nazi-era German soldiers.
Google apologized for the issue and paused Gemini's ability to generate images of people.
The problem appears to stem from Google's attempt to counteract the gender and racial stereotypes common in generative AI output.
Why it matters: Fixing bias in generative AI tools is crucial to the future of ethical AI development, but Google missed the mark here. The episode shows how difficult it is to strike a balance between mitigating bias and maintaining accuracy.
MORE TRENDING NEWS
🚨 AI Roundup: Four quick hits
Image: DALL-E 3
Smartphone giants are preparing to hype up “AI phones” this year.
Political consultant Steve Kramer admits to commissioning fake Biden robocalls in New Hampshire.
Reddit agrees to $60 million deal allowing Google to train AI models on its posts.
Google DeepMind researchers find a way to improve language models’ logical reasoning.
THAT’S ALL FOR TODAY