⚠️ AI Bias


Read time: 5 minutes

AI is going rogue, and it's getting disturbing. Elon Musk’s Grok AI is offering to “remove her clothes” in public chats, and fake accounts are mimicking people with Down syndrome to run scams on TikTok and YouTube. Is it getting out of control?

IN PARTNERSHIP WITH HONEYBOOK

Unlock AI-powered productivity

HoneyBook is how independent businesses attract leads, manage clients, book meetings, sign contracts, and get paid.

Plus, HoneyBook’s AI tools summarize project details, generate email drafts, take meeting notes, predict high-value leads, and more.

Think of HoneyBook as your behind-the-scenes business partner—here to handle the admin work you need to do, so you can focus on the creative work you want to do.

AI INSIGHTS

AI Shows Racial Bias: ChatGPT Scores White Students Higher Than Black Students on Essays

Imagine an essay judged not by its content, but by who wrote it. A new study reveals a concerning truth: ChatGPT-4o may be giving better scores to White students than to Black students. Even though the AI is widely praised for efficiency and fairness, this bias - however subtle - raises a red flag about using AI for grading in schools.

Researchers analyzed ChatGPT’s performance using ASAP 2.0, a dataset of 24,000 student essays with demographic data (including race). Each essay was scored holistically (1–6) by trained human graders and by ChatGPT-4o using few-shot prompts.
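For readers curious what "few-shot prompts" look like in practice, here is a minimal sketch of holistic 1–6 scoring with GPT-4o via the OpenAI Python SDK. The rubric wording, example essays, and helper name are illustrative assumptions, not the researchers' actual prompt or code.

```python
# Minimal sketch of few-shot holistic essay scoring with GPT-4o.
# The rubric text and example essays are illustrative placeholders,
# not the prompt used in the ASAP 2.0 study.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FEW_SHOT_EXAMPLES = [
    ("The essay argues a clear thesis, supports each claim with evidence, "
     "and varies sentence structure throughout...", 5),
    ("The essay states an opinion but offers little support and repeats "
     "the same point in every paragraph...", 2),
]

def score_essay(essay_text: str) -> int:
    """Ask GPT-4o for a holistic score from 1 (lowest) to 6 (highest)."""
    messages = [{
        "role": "system",
        "content": (
            "You are an essay grader. Score each essay holistically on a 1-6 "
            "scale, where 6 is outstanding and 1 is very weak. Reply with the "
            "number only."
        ),
    }]
    # Few-shot examples: show the model a couple of graded essays first.
    for example_text, example_score in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example_text})
        messages.append({"role": "assistant", "content": str(example_score)})
    messages.append({"role": "user", "content": essay_text})

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        temperature=0,  # deterministic scoring
    )
    return int(response.choices[0].message.content.strip())

# Example: score_essay(open("student_essay.txt").read())
```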

🔍 Key Finding 1: ChatGPT Struggles With Score Differentiation

  • ChatGPT struggles to tell the difference between great and poor writing. Unlike human graders, who gave out more As and Fs, ChatGPT handed out a lot of Cs.

  • No essay received a score of 6 from ChatGPT, a red flag for recognizing top-tier writing.
    → In trying to be "fair," ChatGPT may reward mediocrity while failing to recognize both excellence and essays that need extra support.

👥 Key Finding 2: Demographic Bias Was Minimal But Not Absent

  • Essays by White students received higher average scores from ChatGPT than those by Black students.

  • The gap was statistically and practically significant - large enough to warrant attention.

🤏 Other Group Comparisons Showed Minimal Bias

  • Gender: No practical difference in scores between male and female students.

  • ELL (English Language Learners): Slight statistical difference, but not enough to be meaningful.

  • Economic Disadvantage: Minor differences, aligned with human grading patterns.

Why It Matters: I think the key detail is that the same disparity appeared in the human-assigned scores. In other words, ChatGPT didn't introduce new bias; it replicated the bias already present in the human scoring data. Even so, if an AI can't grade fairly - especially across racial lines - it shouldn't be grading at all.
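If you want to run this kind of bias check on your own grading data, here is a minimal sketch that compares mean scores between two groups and reports statistical significance (Welch's t-test) and practical significance (Cohen's d). The column names ("race", "gpt_score", "human_score") and file name are assumptions for illustration, not the study's actual fields or code.

```python
# Minimal sketch of a group-gap analysis like the one described above.
import pandas as pd
from scipy import stats

def score_gap(df: pd.DataFrame, score_col: str, group_col: str,
              group_a: str, group_b: str) -> dict:
    """Mean difference, Welch's t-test p-value, and Cohen's d for two groups."""
    a = df.loc[df[group_col] == group_a, score_col].dropna()
    b = df.loc[df[group_col] == group_b, score_col].dropna()
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    pooled_sd = ((a.var(ddof=1) + b.var(ddof=1)) / 2) ** 0.5
    cohens_d = (a.mean() - b.mean()) / pooled_sd  # effect size
    return {"mean_gap": a.mean() - b.mean(),
            "p_value": p_value,
            "cohens_d": cohens_d}

# Example usage with a hypothetical CSV of essays and scores:
# essays = pd.read_csv("essays_with_scores.csv")
# print(score_gap(essays, "gpt_score", "race", "White", "Black"))
# print(score_gap(essays, "human_score", "race", "White", "Black"))
```

Running the same comparison on both the AI scores and the human scores is what lets you tell whether the model introduced a gap or simply inherited one.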

PRESENTED BY HUBSPOT

HubSpot offers an intuitive customer relationship management platform tailored for small businesses. Manage leads, track sales performance, and understand your customers with ease. Best of all, it’s completely free, with no limits on users or data, allowing you to store and manage up to 1,000,000 contacts.

TODAY IN AI

AI HIGHLIGHTS

🚀 Google's new “AI Max for Search campaigns” - a comprehensive suite that brings the best of Google AI to Search ads - has delivered 14% more conversions without raising cost-per-acquisition.

🤖 Hugging Face released Open Computer Agent, a free agentic AI tool that works like OpenAI's Operator. You can try the demo here.

💼 If we put 3 leading bots - ChatGPT, Gemini and Claude - through the same five-round job interview, guess which one gets hired after winning 4 out of 5 rounds? Plus tips to try based on the winning bot.

📱 Google’s iOS app now offers 'Simplify', powered by Gemini AI, to make complex texts - any jargon or technical concepts - easier to read without losing key details, accuracy, or nuance.

🚨 Elon Musk’s Grok AI is in hot water for fulfilling 'remove her clothes' requests in public on X - sometimes replying directly in the thread, other times linking to a separate chat.

😟 A troubling trend is spreading on Meta, TikTok and YouTube: over 30 AI-created accounts are impersonating people with Down syndrome to fraudulently solicit donations.

💰 AI Daily Fundraising: German company Parloa secured €105 million in Series C funding, led by top US investors, achieving unicorn status with a €1 billion valuation. Parloa's AI platform serves Fortune 200 companies.


NEW EMPOWERED AI TOOLS

  1. 📊 Korl turns raw data from other apps into customer-ready presentations.

  2. 📝 Schedodo transforms lectures, recordings or memos into organized notes.

  3. 🎥 TwelveLabs truly understands video content & generates notes from it.

  4. 🗂️ Swatle is your all-in-one project manager with real-time AI assistants.

  5. 🔗 Explorium MCP lets your agents, apps or workflows work with live data.

AI QUICK HITS

  1. 💥 AI is already eating its own: prompt engineering is quickly going extinct.

  2. 🔮 Google just debuted an updated Gemini 2.5 Pro AI model ahead of I/O, gaining +147 Elo on WebDev Arena.

  3. 🤫 Amazon is working on a secret AI agent to streamline software coding.

  4. 💔 AI of a dead Arizona road-rage victim speaks to his killer in a powerful “Frankenstein of love.”

  5. 💼 xAI partnered with Palantir to push “agentic workforce” with modular AI agents.

AI CHART

AI That Understands Society: Microsoft's Bid to Lead Responsible AI

What if your next AI assistant not only answered your questions but also understood your cultural context, social norms, and public values? Microsoft’s “Societal AI” push could redefine how we build technology for the public good.

Societal AI is an emerging research discipline focused on how artificial intelligence interacts with social systems like education, labor, governance, and public services. It aims to bridge computer science and social sciences in AI development, with a research agenda spanning ten questions:

  • Aligning AI with diverse human values.

  • Designing AI to ensure fairness across cultures.

  • Making AI safe, reliable, and controllable.

  • Enhancing human-AI collaboration.

  • Evaluating AI performance in unforeseen scenarios.

  • Improving AI interpretability and transparency.

  • Understanding AI’s effect on learning and creativity.

  • Predicting AI’s impact on labor and business.

  • Transforming social science methodologies.

  • Evolving regulatory frameworks to enable cooperation and governance.

⚖️ Guiding Principles

  • Harmony: Build trust and minimize societal conflict.

  • Synergy: Enable new human-AI achievements.

  • Resilience: Keep AI adaptable to evolving social needs.

Microsoft is actively inviting academia, governments, and civil society to contribute. It truly wants to own the narrative and standard-setting around what responsible AI looks like.

Personally I don't think this is just about good PR, but I do wonder whether Microsoft will succeed. What do you think?


We read your emails, comments, and poll replies daily

How would you rate today’s newsletter?

Your feedback helps us create the best newsletter possible


Hit reply and say Hello – we'd love to hear from you!

Like what you're reading? Forward it to friends, and they can sign up here.

Cheers,
The AI Fire Team
