💰 Money Talks: AI

✅ Boom, Not Bubble


AI “bubble”? Budgets say no. Forget model drama. Follow adoption.
Coding is the first budget magnet - and startups are taking the AI app revenue.

IN PARTNERSHIP WITH 1LEARN

Imagine creating browser extensions, mobile apps, plugins, and micro-SaaS - all without writing a single line of traditional code.
That’s the power of Vibe Coding, a brand-new way of building with AI.

In this 3-hour Masterclass, you’ll learn how to turn your ideas into fully working apps using the most powerful AI tools - Lovable, Bolt, Cursor, Replit, Claude, Supabase and more.

🔥 What You’ll Experience:

  • 💡 Turn raw ideas into real apps (live demo)

  • 🛠️ Build using beginner-friendly AI platforms

  • 📱 Create your first AI-built app during the session

  • ⚡ Discover how Vibe Coding helps you build 10x faster

No coding. No technical background. No complexity.
🎯 Join the Masterclass and learn to build real AI products.

AI INSIGHTS

Chart: departmental AI market map

People keep calling AI a bubble. But enterprise spend looks like a boom.

The curve:

  • 2023 → $1.7B

  • 2024 → $11.5B

  • 2025 → $37B (3.2× YoY)

Where the money goes (2025):

  • $19B → AI apps (tools teams use daily)

  • $18B → infra (model APIs, training, data/orchestration)

Buying beats building now:

  • 2024: 47% built / 53% bought

  • 2025: 24% built / 76% bought

AI converts fast:

  • 47% of AI deals reach production

  • SaaS average: 25%

PLG is the secret channel:

  • 27% of AI app spend is product-led (vs 7% in SaaS)

  • Add “shadow AI” (ChatGPT Plus expensed on company cards) → the true share can be near 40%

Apps: startups are ahead

  • Startups have 63% of AI app revenue (up from 36% last year)

  • Examples: Cursor vs GitHub Copilot, Clay vs Salesforce, Rillet/Campfire/Numeric vs Intuit QuickBooks

Biggest spend buckets:

  • Departmental AI: $7.3B (coding is #1 at $4B, 55%)

  • Vertical AI: $3.5B (healthcare $1.5B, scribes $600M: Nuance DAX Copilot, Abridge, Ambience)

  • Horizontal AI: $8.4B (copilots dominate: ChatGPT Enterprise, Claude for Work, Microsoft Copilot)

Why it matters: The hype can swing. But budgets + production rollouts don’t lie. AI is becoming a default enterprise purchase - starting with coding and workflow automation.

PRESENTED BY BELAY

Q4 is the perfect window to turn this year’s numbers into a clear, actionable forecast aligned with your goals. Set your business up for a stronger 2026 with BELAY’s new guide.

AI SOURCES FROM AI FIRE

1. A-Z framework for finding profitable niches with AI. This is the exact AI process to find, validate & build a business around a profitable niche

2. Start a 1-person AI business with almost $0 (step-by-step). Learn to replace a full team and automate marketing, sales, and design to start your solo business

3. Prompt vs Context Engineering: The AI battle you need to know. Discover the key differences and why you need both skills

NEW AI COURSE WORTH CONSIDERING

If you’re serious about building real passive income with AI (not “hype”), save this course. Our researcher, Mia, took dozens of “passive income with AI” courses. Most were fluff, so she tested everything herself for months.

She built working systems from scratch. And it actually made money. Now she packaged the exact setup into this course so you can copy it step by step. Inside, you’ll get:

  • Step-by-step guides based on Mia’s real system

  • Automation playbooks for lead gen, content, outreach, and more

  • Fresh tools + templates added regularly ($499/month value)

💡 Pro tip: Follow one system at a time for 7 days. Don’t tool-hop. Copy Mia’s exact steps first, then tweak.

TODAY IN AI

AI HIGHLIGHTS

🎬 Disney just sent Google a cease-and-desist over “massive” AI copyright infringement claims - after Disney’s billion-dollar OpenAI deal. Frozen, Deadpool, Star Wars… it’s getting messy.

🤝 Bob Iger says the Disney - OpenAI Sora deal “does not in any way” threaten creatives, with guardrails + no voices/likeness. The Sam Altman interview is basically Disney’s public playbook for “AI, but licensed.”

🦾 1X just landed a big pipeline: EQT portfolio companies could get up to 10,000 Neo humanoids (2026 - 2030) for factories and warehouses. The “home robot” story is quietly turning industrial.

🔒 Google Research dropped Urania, a differential privacy pipeline to mine chatbot usage insights without reading anyone’s raw chats. And the twist: evaluators liked the private summaries more, up to 70% in one test.

🔞 OpenAI says ChatGPT Adult Mode is planned for Q1 2026, tied to stronger age verification and a split “teen vs adult” experience. Expect debate, because it’s now linked to personality freedom (and safety risks).

💰 Big AI M&A: IBM is acquiring Confluent for $11B ($31/share) to build a “smart data platform” for enterprise generative AI + AI agents, with the deal targeting mid-2026 close. The combo ties Confluent’s Apache Kafka real-time streaming with IBM’s hybrid cloud stack (Red Hat), and keeps Confluent’s ecosystem ties with Anthropic, AWS, GCP, Microsoft, and Snowflake.

NEW EMPOWERED AI TOOLS

  1. 🔬 Gemini Deep Research Agent is an autonomous agent for devs that plans and synthesizes multi-step research.

  2. 🖌️ Visual Editor (Cursor) lets you edit web apps visually while the agent updates code alongside you.

  3. 🤝 Kaily is an always-on AI support agent, handling sales, support & onboarding across every channel.

  4. 📋 Korgi creates AI-built project boards from your productivity stack in under a minute.

AI BREAKTHROUGH

Beyond Data Filtering: Knowledge Localization for Capability Removal in LLMs

Data filtering is messy: labels are wrong, risky info hides in normal text, and strict filters delete good knowledge too.

Anthropic’s team proposes SGTM (Selective GradienT Masking): split weights into retain (general) vs forget (danger). When training on labeled dangerous data, only the forget weights update. After training, you zero out those weights to remove the capability.

Key results:

  • Better remove vs. preserve trade-off than filtering (tested on a 254M Wikipedia model)

  • Harder to recover: needs 7× more adversarial fine-tuning vs common unlearning (e.g., RMU)

  • About 5% compute overhead

Limits: only small models so far, no full benchmark evaluation yet (e.g., WMDP), and prompts can still “re-add” knowledge at inference.
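The SGTM recipe above can be sketched in a few lines of Python. This is an illustrative toy, not Anthropic’s implementation: the mask layout, the assumption that non-dangerous batches update all weights, and the `ablate` helper are our own simplifications.

```python
# Toy sketch of Selective GradienT Masking (SGTM).
# Each weight is assigned to one of two partitions via a boolean mask:
# True = "forget" (dangerous capability), False = "retain" (general knowledge).

def sgtm_step(weights, grads, forget_mask, dangerous_batch, lr=0.5):
    """One masked SGD step.

    On a batch labeled dangerous, only the forget partition updates
    (retain weights are frozen). On other batches we assume a plain
    step on all weights -- an assumption, not spelled out in the post.
    """
    updated = []
    for w, g, is_forget in zip(weights, grads, forget_mask):
        if dangerous_batch and not is_forget:
            updated.append(w)            # retain weight frozen on dangerous data
        else:
            updated.append(w - lr * g)   # ordinary gradient step
    return updated

def ablate(weights, forget_mask):
    """After training, zero the forget partition to remove the capability."""
    return [0.0 if is_forget else w for w, is_forget in zip(weights, forget_mask)]
```

In use: training routes gradients from dangerous data into the forget slots, so zeroing them at the end deletes that capability while the retain slots (which never absorbed it) keep general performance.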

We read your emails, comments, and poll replies daily

How would you rate today’s newsletter?

Your feedback helps us create the best newsletter possible


Hit reply and say Hello – we'd love to hear from you!
Like what you're reading? Forward it to friends, and they can sign up here.

Cheers,
The AI Fire Team
