This tech was too dangerous to be released
Spotlight: AI essay mills help students cheat
Welcome back to The Daily Update — Happy Friday! It’s been a busy week in AI and we’re rounding off the week with a couple more heavy hitters.
🗞️ The Latest: Google and Meta developed facial recognition tech too dangerous to be released
🤖 AI Training: Summarize long text with Claude 2
📸 Industry Spotlight: Companies that use AI to cheat at school thrive on social media
🚨 AI Roundup: Four quick hits
🗞️ The Latest: Google and Meta developed facial recognition tech too dangerous to be released
Source: Ideogram
A new article published by The New York Times reveals that Google and Meta developed facial recognition tech that was so powerful they stopped it from being released.
What you need to know:
Meta was working on a hat with a built-in camera that could identify anyone it saw, while Google was building a tool that could search for someone's face and surface other photos of them online.
Both companies decided that the technology was too dangerous to release to the public.
The tech giants also helped hold the technology back by acquiring the most advanced startups that offered it and shutting down their services to outsiders.
Clearview AI and PimEyes have reopened the gates to advanced facial recognition tech by releasing face search engines paired with millions of photos from the web.
My thoughts: The potential benefits of this tech are incredible, but its dangerous use cases highlight the need for safe AI guidelines. The coming wave of AI-powered facial recognition apps and devices will spark debates about technology’s role in undermining personal privacy.
🤖 AI Training: Summarize long text with Claude 2
Claude 2 can save you hours by summarizing long documents and emails. Its 100,000-token context window is far larger than ChatGPT's, meaning it can accept prompts of roughly 75,000 words.
The prompt: “Summarize this text: [paste text].”
To make the output even easier to digest, you can ask Claude to summarize the text in bullet form.
That’s it.
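If you'd rather automate this than paste text into the chat window, here's a minimal sketch using Anthropic's Python SDK of the era. The model name `claude-2`, the output length, and the helper names are assumptions for illustration; the prompt itself is the one from this section.

```python
# Sketch: summarizing long text with Claude 2 via the Anthropic SDK.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY environment variable.
import os


def build_summary_prompt(text: str, as_bullets: bool = False) -> str:
    """Wrap a document in the summarization prompt from this section."""
    instruction = "Summarize this text"
    if as_bullets:
        instruction += " in bullet form"
    return f"{instruction}: {text}"


def summarize(text: str, as_bullets: bool = False) -> str:
    """Send the prompt to Claude 2 and return its completion."""
    import anthropic  # imported here so the prompt helper works offline

    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    response = client.completions.create(
        model="claude-2",            # assumed model identifier
        max_tokens_to_sample=1000,   # cap the summary length
        prompt=(
            f"{anthropic.HUMAN_PROMPT} "
            f"{build_summary_prompt(text, as_bullets)}"
            f"{anthropic.AI_PROMPT}"
        ),
    )
    return response.completion


if __name__ == "__main__":
    print(build_summary_prompt("Paste your long document here.", as_bullets=True))
```

The prompt builder is separated out so you can reuse the same wording whether you call the API or paste into the chat UI.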
📸 Industry Spotlight: Companies that use AI to cheat at school thrive on social media
Source: Ideogram
Essay mills that produce content for a fee are leveraging AI and social media to solicit student clients.
Key points:
Essay mills claim that they combine AI and human labor to create a final product that is undetectable by software designed to catch cheating.
Such mills are soliciting clients on social media despite the illegality of the practice in several countries.
According to a new analysis posted on arXiv, 11 different AI companies are offering essay-writing services.
Many of the tools appear to use custom prompts to produce the desired results through existing LLMs.
TikTok and Meta are working to remove the ads from their platforms.
The bottom line: This is the first time that students are starting an academic year with access to AI tools and cheating is already prevalent. Schools are in a tough spot because existing software designed to catch cheating is unable to detect most AI-generated content.
🚨 AI Roundup: Four quick hits
Source: The Sporting Tribune
AI Sports Fans: AI robots attend a football game at SoFi Stadium in Los Angeles to promote the movie “The Creator.”
Modern Military: German military invests millions into AI weapons tests.
Big Blunder: Microsoft publishes AI-generated obituary calling deceased NBA player “useless.”
Medical AI for All: Microsoft open sources EvoDiff, a novel protein-generating AI.
Have questions or thoughts on today’s newsletter? Reply to this email and I’ll get back to you as soon as I can. Thanks for reading The Daily Update and have a great weekend.
Jack
(P.S. If you know someone who may find today’s newsletter useful, send it their way!)