Altman officially returns to OpenAI's board

Research: ChatGPT shows racial bias in recruitment

Welcome back!

An investigation into OpenAI CEO Sam Altman’s firing in November has finally reached a conclusion. We’ll also take a look at the latest move in the Musk-OpenAI saga.    

Let’s go.

In today’s Daily Update:

  • 🗞️ OpenAI reinstates CEO Sam Altman to board

  • 🤖 ChatGPT shows racial bias in recruitment

  • 📸 Musk’s xAI open sources Grok chatbot   

  • 🚨 AI Roundup: Four quick hits

Read time: 2 minutes

TOP STORY

🗞️ OpenAI reinstates CEO Sam Altman to board

DALL-E 3

OpenAI is reinstating CEO Sam Altman to its board of directors after an outside investigation into his firing in November

The details:

  • An outside investigation found that Altman’s firing was a “consequence of a breakdown in the relationship” that “did not mandate removal.”

  • OpenAI said it has “full confidence” in Altman’s leadership after the conclusion of WilmerHale’s investigation. 

  • Altman told reporters it was disappointing to see “people with an agenda” leaking information to hurt the company and its mission. 

  • OpenAI also added three women to its board: former Gates Foundation CEO Dr. Sue Desmond-Hellman, former Sony exec Nicole Seligman and Instacart CEO Fidji Simo.

Why it matters: OpenAI is turning the page on an incident that nearly destroyed the company late last year. The ChatGPT maker is now redirecting all efforts to the pursuit of artificial general intelligence.

AI INSIGHT

🤖 ChatGPT shows racial bias in recruitment

Source: Bloomberg

A new Bloomberg analysis found that using ChatGPT for recruiting poses a serious risk for automated racial discrimination at scale. 

The rundown:

  • Bloomberg analysts created fake candidates with identical educational and career backgrounds but different ethnicities and genders. 

  • GPT-3.5 was asked to rank the fake resumes 1,000 times. 

  • Resumes of Asian women were most likely to be ranked as the top candidate for a financial analyst role. Resumes of Black men were more likely to be ranked the lowest.

  • The experiment was repeated across four job postings. GPT’s gender and racial preferences differed depending on the job. 

The bigger picture: Bias in generative AI tools is a prevalent issue. OpenAI prohibits its products from being used to make automated hiring decisions that could violate candidates’ human rights. Generative AI could eventually enhance data-driven decision-making, but issues with gender and racial discrimination need to be addressed before it can be fully integrated into the recruitment process.

BUSINESS SPOTLIGHT

📸 Musk’s xAI open sources Grok chatbot

Getty Images

Elon Musk says his AI startup xAI will open source its Grok chatbot this week. The announcement comes days after Musk accused OpenAI of abandoning its mission of building AI to benefit all of humanity. 

Key points:

  • Last week Musk filed a lawsuit alleging that OpenAI shifted away from “freely available” AI models to maximize profits for Microsoft.  

  • Musk launched xAI last year to create “a maximum truth-seeking AI” as an alternative to OpenAI and Google’s tools. 

  • Open-sourcing Grok will give the public free access to its code, and potentially speed up innovations using the model’s infrastructure. 

  • Some experts have warned that open source AI models could be used by terrorists to create chemical weapons or develop superintelligence beyond human control. 

The relevance: On the surface, this just looks like the latest shot in Musk’s conflict with OpenAI. Open-sourcing generative AI models does tend to speed up innovation, but Grok isn’t quite on par with leading models like GPT-4, Gemini 1.5 and Claude 3. We’ll circle back to this move as developers build new iterations of Grok. 

MORE TRENDING NEWS

🚨 AI Roundup: Four quick hits

DALL-E 3

  • Siri Co-Founder Dag Kittlaus says Siri is set for a major upgrade in 2024. 

  • Nvidia is sued by authors over use of copyrighted works to train AI.

  • Miami teens are arrested for creating AI-generated nude images of classmates. 

  • Microsoft develops text-to-speech model that can clone voices and emotions. 

THAT’S ALL FOR TODAY

Want to continue the conversation? Connect with me on LinkedIn and I’m happy to discuss any of today’s news. Thanks for reading The Daily Update!

(P.S. If you want to share this newsletter with a friend or colleague you can find it here.)