GPT-4.5 Turbo launch leaked

Report: AI poses extinction-level threat to humanity

Welcome back!

A report commissioned by the U.S. State Department was published Monday, warning that AI could pose extinction-level threats to humanity. Not a great headline to accompany the rest of today’s news.

Let’s dive in.

In today’s Daily Update:

  • 🗞️ OpenAI’s GPT-4.5 Turbo launch leaked

  • 🤖 AI poses extinction-level threat to humanity according to new report

  • 📸 Deepgram gives AI agents a voice  

  • 🚨 AI Roundup: Four quick hits

Read time: 2 minutes

TOP NEWS

🗞️ OpenAI’s GPT-4.5 Turbo launch leaked

GPT-4.5 Turbo’s product page reportedly surfaced in search results on Bing and DuckDuckGo before any official announcement from OpenAI.

What you should know:

  • The leaked page describes GPT-4.5 Turbo as OpenAI’s fastest, most accurate and most scalable model to date. 

  • The model’s 256,000-token context window is twice the size of GPT-4 Turbo’s, meaning GPT-4.5 Turbo will be able to process roughly 200,000 words at once. 

  • The leaked page also says the new model’s knowledge cutoff is June 2024 (previously April 2023). 

  • Rumors suggest that GPT-4.5 Turbo could have video and 3D capabilities in addition to text and images, but the leak doesn’t confirm this. 
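The word-count figure above can be sanity-checked with a common rule of thumb of roughly 0.75 English words per token (the exact ratio varies by tokenizer and text, so treat this as a ballpark estimate):

```python
# Rough tokens-to-words conversion, assuming ~0.75 words per token
# (a common rule of thumb for English text; actual ratios vary by tokenizer).
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return round(tokens * WORDS_PER_TOKEN)

print(approx_words(256_000))  # ~192,000 -- in line with the "roughly
                              # 200,000 words" figure cited above
print(approx_words(128_000))  # GPT-4 Turbo's 128k window, for comparison
```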

Why it matters: After Anthropic challenged GPT-4 with last week’s launch of Claude 3, OpenAI looks set to retake the lead soon. GPT-4.5 Turbo’s June 2024 knowledge cutoff hints at a launch around June, and the rumored video and 3D capabilities, if real, would make it a groundbreaking multimodal release.

AI INSIGHT

🤖 AI poses extinction-level threat to humanity according to new report

A Gladstone AI report commissioned by the U.S. State Department warns that AI, in the worst case, could cause an “extinction-level threat to the human species.”

The rundown:

  • The authors spoke to over 200 government employees, experts and workers at leading AI companies like OpenAI, Google DeepMind, Anthropic and Meta. 

  • According to the report, leading AI labs expect AGI to arrive within the next five years. 

  • The report highlights the possibility of weaponization and loss of control over advanced AI systems.

  • The authors’ action plan recommends sweeping policy actions like making it illegal to train AI models above a certain level of computing power. 

Why it matters: This report was developed over 13 months and draws on interviews with credible sources, so its warnings deserve to be taken seriously. Still, it seems very unlikely that its sweeping recommendations will be implemented amid the fast-paced AI development race. 

BUSINESS SPOTLIGHT

📸 Deepgram gives AI agents a voice

Voice recognition startup Deepgram just launched Aura, a real-time text-to-speech API that helps developers build conversational AI agents. 

Key points:

  • Aura is Deepgram’s most advanced voice model. It typically begins speaking less than 0.3 seconds after being prompted. 

  • Deepgram says Aura is cheaper than all of its competitors at $0.015 per 1,000 characters. 

  • Aura currently offers seven male voices and five female voices. 

Breaking it down: Aura + LLMs (like ChatGPT) = conversational AI agents that can be deployed in customer-facing situations like call centers and restaurants. Click here to try Aura’s demo.  
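For developers curious what the “Aura + LLM” recipe looks like in practice, here is a minimal sketch of assembling a text-to-speech request to Deepgram’s API. The endpoint URL, the `aura-asteria-en` model name, and the `Token` header format are assumptions based on Deepgram’s public documentation at the time of writing; verify them against the current API reference before relying on this:

```python
# Sketch: assemble (but don't send) a TTS request to Deepgram's Aura API.
# Endpoint, model name, and auth header format are assumptions -- check
# Deepgram's API reference for the authoritative details.
import json

DEEPGRAM_TTS_URL = "https://api.deepgram.com/v1/speak"

def build_aura_request(text: str,
                       model: str = "aura-asteria-en",
                       api_key: str = "YOUR_DEEPGRAM_API_KEY") -> dict:
    """Assemble the URL, headers, and JSON body of an Aura TTS call."""
    return {
        "url": f"{DEEPGRAM_TTS_URL}?model={model}",
        "headers": {
            "Authorization": f"Token {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }

req = build_aura_request("Thanks for calling -- how can I help you today?")
print(req["url"])
```

In a real agent, an LLM would generate the reply text, and sending `req["body"]` to `req["url"]` with an HTTP client would return audio bytes to stream back to the caller.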

MORE TRENDING NEWS

🚨 AI Roundup: Four quick hits

  • South Korean startup HYODOL launches an AI companion doll for seniors.

  • Midjourney accuses its rival Stability AI of causing outages to scrape its data. 

  • AI stocks surge after a strong Oracle earnings report shows that AI adoption is increasing. 

  • Robotics startup Covariant reveals a foundation model that gives robots the “human-like ability to reason.”

THAT’S ALL FOR TODAY

Want to continue the conversation? Connect with me on LinkedIn and I’m happy to discuss any of today’s news. Thanks for reading The Daily Update!

(P.S. If you want to share this newsletter with a friend or colleague you can find it here.)