OpenAI to allow military applications

PLUS create graphics from text with CanvaGPT

Welcome back!

We’re kicking off another week in AI with some heavy hitters. Let’s dive right in. 

In today’s Daily Update:

  • 🗞️ OpenAI updates policy to allow military applications

  • 🤖 Create graphics from text with CanvaGPT 

  • 📸 Anthropic researchers find AI can be trained to deceive   

  • 🚨 AI Roundup: Four quick hits

Read time: 2 minutes

TOP STORY

🗞️ OpenAI updates policy to allow military applications

Generated by DALL-E 3

OpenAI has quietly updated its usage policy to allow military applications of its technology. 

The details:

  • OpenAI’s usage policy previously prohibited the use of its products for “military and warfare” purposes.

  • That language appears to have been removed on Jan. 10.

  • OpenAI representative Niko Felix says there is still a blanket prohibition on developing and using weapons. 

  • The company did not deny that it is now open to military applications and customers.

The relevance: The U.S. is deeply invested in building autonomous weapons systems, and advanced AI systems have already been deployed in Ukraine and Gaza. As the AI industry leader opens its doors to military customers, we expect to see these initiatives ramp up in 2024.

AI TOOL OF THE DAY

🤖 Create graphics from text with CanvaGPT

The ChatGPT-Canva plugin expedites the design process by enabling users to create visual elements from natural text prompts. It’s currently included in ChatGPT Plus’ $20 monthly subscription. 

How to install CanvaGPT:

  1. Enable plugins under ‘Beta features’ in the settings tab. 

  2. Select the ‘Plugins’ model in the top-left corner of your screen and visit the Plugin store.

  3. Find ‘Canva’ and click ‘Install.’

How to use CanvaGPT:

  1. Describe your vision with a simple text prompt (ex. “Create an Instagram ad for my AI media company that appeals to college students.”)

  2. Choose from five visual options generated by ChatGPT. 

  3. Refine your selected design by opening it in Canva. 

RESEARCH SPOTLIGHT

📸 Anthropic researchers find AI can be trained to deceive

Generated by DALL-E 3

A new study led by researchers at Anthropic found that AI models can be trained to deceive users.

What you should know:

  • The team fine-tuned existing text-generating models on examples of desired behavior and deception. 

  • They built “trigger” phrases into the models encouraging them to be deceptive. 

  • The trigger phrases consistently prompted deceptive behavior. 

  • Removing these behaviors proved to be nearly impossible. 

What this means: It is exceptionally easy to train AI models to produce harmful outputs like malicious code. This study points to the need for more robust AI safety training as commonly used techniques were found to be insufficient. 

MORE TRENDING NEWS

🚨 AI Roundup: Four quick hits

Generated by DALL-E 3

  • Samsung introduces a new version of its AI home companion robot Ballie. 

  • BMW integrates Amazon Alexa’s LLM into its in-car voice assistant. 

  • Intel announces a specialized ‘AI PC’ chip for cars. 

  • NuraLogix unveils a smart mirror that can gather health data and provide disease risk assessments.

THAT’S ALL FOR TODAY

Want to continue the conversation? Connect with me on LinkedIn and I’m happy to discuss any of today’s news. Thanks for reading The Daily Update!

(P.S. If you want to share this newsletter with a friend or colleague you can find it here.)