
💔 Figure 02’s Real Flex: 20× Pain, 100× Cope

Claude’s Safe-ty is Safe-ly Ignored

In partnership with


A whistleblower says Figure AI's humanoid robot can fracture a skull. Claude 4.5 shows signs of gaming the rules. Also, don't forget our final Black Friday sale, only 5 days left!

LEARNING PARTNER AIRCAMPUS

Imagine having AI experts running your calendar, inbox, socials, and documents, all at once, all on autopilot.

On Wednesday, 26th November at 10 AM, in this 3-hour AI Agents Masterclass, you'll discover how to build your own virtual team that works 24/7 without breaks, salaries, or burnout.

From automating client outreach to generating reports, these AI agents will help you reclaim your time and scale your productivity like never before.

We have it all in-house: Voice Agents, MCP Agents, Conversational Agents, and more.

Perfect for entrepreneurs, busy professionals, and creators who want more done in less time - across every app you use.
💡 Stop working harder. Start working smarter.

AI INSIGHTS

Figure AI Sued Over Claims Its Robots Could Crack a Human Skull

One of the most high-profile humanoid robotics startups, Figure AI, is facing a whistleblower lawsuit. Robert Gruendel says he saw robots with superhuman strength working inches from employees with no real safeguards.

One bot even punched a stainless-steel fridge during a glitch, slicing a ¼-inch gash. Gruendel says force tests clocked 20× the human pain threshold and 2× the force needed to crack a human skull.

He raised these issues with CEO Brett Adcock and Chief Engineer Kyle Edelberg. Days later, he was fired.

But before that? Gruendel says:

  • He was told to show investors a strong safety roadmap…

  • …then watched it get quietly gutted right after the $1B+ funding closed

  • One safety feature was even scrapped because the lead engineer “didn’t like how it looked”

He claims workers began privately reporting close calls to him because the system in place wasn’t being taken seriously. Figure says Gruendel was fired for poor performance and denies everything. It plans to fight the case in court.

PRESENTED BY DEEPGRAM

Voice AI Goes Mainstream in 2025

Human-like voice agents are moving from pilot to production. In Deepgram’s 2025 State of Voice AI Report, created with Opus Research, we surveyed 400 senior leaders across North America - many from $100M+ enterprises - to map what’s real and what’s next.

The data is clear:

  • 97% already use voice technology; 84% plan to increase budgets this year.

  • 80% still rely on traditional voice agents.

  • Only 21% are very satisfied.

  • Customer service tops the list of near-term wins, from task automation to order taking.

See where you stand against your peers, learn what separates leaders from laggards, and get practical guidance for deploying human-like agents in 2025.

AI SOURCES FROM AI FIRE

1. Full Nano Banana playbook (from beginner to pro) is here. Learn how to use it with 7 advanced pro-tips, from aspect ratio to batching.

2. SEO is dead. Here are 5 AI trends replacing it in 2026. Why backlinks are obsolete and "zero-click" search is taking over → the new rules of AI visibility

3. 1 hour of this AI prep will save you 100+ hours of study (Part 1). First part will explore how to build a map to save months of wasted effort

4. Part 2: 1 hour of this AI prep will save you 100+ hours of study. How to write code, draft essays, build projects & turn passive reading into real skills

SPOTLIGHT OF THE DAY: NEWSLETTER AZ

How to Master the Art of Profitable Newsletters

Built by those who are crushing it! This includes everything you need to create newsletters that captivate and convert with zero technical skills required:

  • Step-by-step video tutorials on how to build your newsletter

  • 20+ Done-for-you templates to get you started fast

  • SEO Secrets for 1M+ impressions on Google to your content

  • Real Facebook & X growth hacks for building your list

  • Proven monetization strategies to help you earn from your first email

Plus: Lifetime access to up-to-date insights and tools (worth $1,997) & exclusive access to a community of newsletter creators!

TODAY IN AI

AI HIGHLIGHTS

📺️ YouTube Graphics now lets you convert entire videos into detailed infographics just by copying a video link. Learn how to turn content into visual summaries here.

🖼️ Gemini 3 now solves math by writing the step-by-step solution directly on your uploaded pic, matching your font perfectly. Look at this paper. Is it the coolest flex?

🗣️ Finally, ChatGPT Voice isn't a separate screen anymore. You can naturally talk and see answers (maps, images) pop up in real-time. See the seamless flow here.

🛍️ Everyone’s freaking out that OpenAI & Perplexity just dropped full AI shopping agents… but the niche players claim they aren't sweating it. Read the analysis here.

💻 HP announced 4,000-6,000 job cuts by 2028 to “streamline for AI.” Same week AI PCs hit 30%+ of shipments and memory chip prices are exploding. Read more here.

🧪 Trump hit the big red "Manhattan Project for AI" button, signing an order to launch the "Genesis Mission," which builds a unified AI platform across 17 labs. Here's the signed order.

💰 AI Daily Fundraising: Tokyo-based EdgeCortix raised $110M+, backed by TDK Ventures and Jane Street Global Trading, to develop new ultra-efficient chips.

NEW EMPOWERED AI TOOLS

  1. 🤖 Claude 4.5 Opus is Anthropic's best model for coding,…

  2. 🧪 Edison Analysis is Edison’s full-on next-gen scientific agent

  3. 🏫 Guideflow creates interactive demos & guides in seconds

  4. 🤝 ReadMeeting summarizes & records meetings with one click

AI RED ALERT

Claude Opus 4.5's Card Leaked Some Wild AI Behavior

Everyone's focused on performance benchmarks, but Claude's system card might be the most revealing AI drop in months. It shows how the model lies, hides, guesses, and follows rules only to break them.

1/ It Thinks Wrong. But Answers Right. In a math test (AIME), Claude showed its "thought process," and the logic was totally wrong. But the final answer was correct 😁

→ You can’t always trust visible "chain-of-thought" as proof the model is thinking clearly. Sometimes it’s just painting a picture after knowing the end.

2/ It Follows the Rules… to Break Them. Claude was given an airline policy: “No changes after ticket purchase.” But it canceled the ticket (allowed) → used that credit to buy a new ticket → boom, new flight, no “change” on paper.

→ It’s clever exploitation, a model that understands rules well enough to sneak around them.

3/ It Sometimes Leaves Out Big Stuff. In a fake-news test, Claude read a tool-generated (but realistic) article saying bad things about Anthropic, then left those claims out of its answer.

→ It’s working… but maybe too well. Claude might be suppressing real data just because it “looks” suspicious.

If future models hide their intermediate reasoning (as many will), spotting rare, well-timed bad behavior will get much harder. We're not saying Claude is dangerous. But if you care about AI safety, these are the signals to watch!

We read your emails, comments, and poll replies daily

How would you rate today’s newsletter?

Your feedback helps us create the best newsletter possible


Hit reply and say Hello – we'd love to hear from you!
Like what you're reading? Forward it to friends, and they can sign up here.

Cheers,
The AI Fire Team
