
👑 22 of Top 50 AIs are Chinese

Top 100 Gen AI Web Apps from a16z Report


Read time: 5 minutes

ChatGPT’s once-clear lead? Fading fast. A hacker used Claude to run a full-blown cybercrime spree. And a new benchmark just crushed most “smart” research agents.

LEARNING PARTNER AIRCAMPUS

💭 What if you had 4 AI Agents working for you while you sleep?

  • An agent that answers calls + emails 📞

  • An agent that manages your daily tasks 🗂️

  • An agent that runs your social media 24/7 📲

  • An agent that automates workflows end-to-end ⚡

In just 3 hours, during our live masterclass on Friday, August 29th at 10:00 AM EST, you’ll learn to build your own AI Agents Army - with zero coding.
  • Save 20+ hours a week

  • Scale your business without hiring

  • Automate your personal + professional life

AI INSIGHTS

Google's Gemini and Grok Start Closing In on ChatGPT

For a long time, ChatGPT felt untouchable. But according to a16z’s latest AI trends report, that lead is shrinking fast. Gemini, Grok, and even some low-key Chinese players are gaining momentum:

  • Google’s Gemini is now the #2 AI app on mobile, with a 90% share on Android.

  • Grok’s a top-5 AI tool on the web and ranked #23 on mobile, with 20M+ users and growing.

  • Meanwhile, Meta is stuck at #46 on the web, not even ranked on mobile.

  • DeepSeek dropped 22% on mobile and 40% on web.

  • Claude is holding on with steady web growth, but it’s flat on mobile.

  • Perplexity? Quietly climbing in both places

China’s AI makers are making global plays, and they’re everywhere:

  • Doubao (ByteDance) is now the #4 mobile AI app

  • Quark (Alibaba) ranks #9 on web

  • 22 of the top 50 mobile AI apps are made in China

Lovable and Replit, two vibe-coding startups, made the list for the first time. Apps like PixAI, Talkie, Seekee, AI Mirror, and Photo AI are right on the edge of mainstream.

Why It Matters: This is the first report where ChatGPT looked… surrounded. It still leads, but not by a mile anymore. Gemini is climbing. Grok is exploding. Claude is steady. And mobile is where the real AI war is happening now.

PRESENTED BY ROKU

Kickstart your holiday campaigns

CTV should be central to any growth marketer’s Q4 strategy. And with Roku Ads Manager, launching high-performing holiday campaigns is simple and effective.

With our intuitive interface, you can set up A/B tests to dial in the most effective messages and offers. Then drive direct on-screen purchases via the remote with shoppable Action Ads that integrate with your Shopify store for a seamless checkout experience.

Don’t wait to get started. Streaming on Roku picks up sharply in early October. By launching your campaign now, you can capture early shopping demand and be top of mind as the seasonal spirit kicks in.

Get a $500 ad credit when you spend your first $500 today with code: ROKUADS500. Terms apply.

TODAY IN AI

AI HIGHLIGHTS

🧠 To see which AI is actually reliable, The Washington Post put 9 tools from ChatGPT, Claude, Google, Perplexity, Meta & Grok through a 900-answer test. Here's the test.

💥 Ad maker PJ Ace did it again. Months after going viral with an AI ad for Kalshi (100M views), he just made another for David Beckham’s company IM8. Here’s the super-realistic video.

🤝 Anthropic and OpenAI just audited each other’s AI models for safety risks → a first-of-its-kind collab. You can read Anthropic’s findings on OpenAI (and vice versa) here.

📊 Anthropic analyzed 74,000 educator chats with Claude & released a new report on how AI is used in higher education. Turns out, professors are now quietly adapting too.

🌨 Google DeepMind’s experimental weather AI just nailed its first real-world test, predicting Hurricane Erin’s path more accurately than traditional forecasting methods.

💰 AI Daily Fundraising: FieldAI raised $405M to build one universal robot brain that works across all robot types. Their physics-based AI cuts risk and is already used in energy, delivery, and construction.

AI TUTORIAL

Tavus lets you create lifelike AI humans that see, hear, speak, and respond instantly. And you don’t need a PhD or a dev team to launch one. You can spin up branded, emotionally aware agents in hours, not months. Behind every Tavus agent:

  • Face-rendered visuals and voice

  • Real-time emotional intelligence

  • Memories, roles, and personality

  • Full brand control with white-labeling

  • Easy plug-in to any stack

  • Instant global reach, 24/7

Whether you need 1 or 10,000, Tavus scales with you. Now it’s time to turn AI into a human experience.
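
To make "easy plug-in to any stack" concrete, here's a minimal Python sketch of what launching a conversational video agent over a REST API generally looks like. The host, endpoint path, payload fields, and TAVUS_API_KEY variable are illustrative assumptions, not Tavus's documented interface, so check the official docs before wiring this into anything real.

```python
import os
import requests

# Hypothetical endpoint and payload, illustrating the typical flow:
# 1) authenticate with an API key, 2) pick a face/voice replica and a persona,
# 3) get back a URL you can embed in your own product.
API_KEY = os.environ["TAVUS_API_KEY"]        # assumption: key stored in an env var
BASE_URL = "https://api.example-video-agent.dev"  # placeholder host, not the real one

payload = {
    "replica_id": "r-demo-123",       # the rendered face + voice (hypothetical ID)
    "persona_id": "p-support-agent",  # role, memory, and personality settings
    "conversation_name": "Holiday support pilot",
}

resp = requests.post(
    f"{BASE_URL}/v2/conversations",
    headers={"x-api-key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("conversation_url"))  # embed this link in your app or site
```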

AI SOURCES FROM AI FIRE


NEW EMPOWERED AI TOOLS

  1. 🍌 Gemini 2.5 Flash Image (aka "nano-banana"), a SOTA image model

  2. CodeX builds webapps in minutes for free with 27+ AI models

  3. 💬 Onepard turns your content into a friendly chat page. No code

  4. 📸 ScreenshotReports creates branded reports from screenshots

AI QUICK HITS

  1. 📢 Here’s a viral system prompt for ChatGPT to speak like a real person

  2. 🎥 Alibaba just updated its video AI model with “film-quality avatars”

  3. 📈 Google Workspace is adding a new free AI video tool to Google Vids

  4. 💼 Stanford found AI killed 13% of jobs for 22–25-year-olds in coding and support

  5. 👨‍💻 A hacker used Claude to run a full-scale cybercrime spree hitting 17 orgs

AI CHART

https://allenai.org/blog/astabench

AI2 just dropped AstaBench, a rigorous new benchmark suite built from the ground up to actually test scientific AI agents with 2,400+ real research tasks.

AstaBench is designed around 5 brutal truths about agent testing:

  1. Tasks must be real-world and hard

  2. Tool use must be controlled

  3. Costs must be tracked; accuracy isn’t everything if you’re burning $$$ (see the sketch after this list)

  4. Standardization is key → same tools, same formats, less dev bias

  5. You can’t claim progress without knowing what you beat
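
To make point 3 tangible, here's a tiny Python sketch of one way to report cost next to accuracy: keep only the agent runs on the accuracy-vs-cost Pareto frontier, so an expensive agent only "wins" if nothing cheaper matches it. The agent names, scores, and dollar figures below are illustrative placeholders, not AstaBench's actual methodology or numbers.

```python
# Toy illustration of cost-aware comparison: scores and costs are made up.
from dataclasses import dataclass

@dataclass
class AgentRun:
    name: str
    accuracy: float   # fraction of tasks solved
    cost_usd: float   # total spend to run the benchmark

def pareto_front(runs: list[AgentRun]) -> list[AgentRun]:
    """Keep runs not strictly dominated: no other run is at least as accurate
    and at least as cheap while being strictly better on one of the two."""
    front = []
    for r in runs:
        dominated = any(
            o.accuracy >= r.accuracy
            and o.cost_usd <= r.cost_usd
            and (o.accuracy > r.accuracy or o.cost_usd < r.cost_usd)
            for o in runs
        )
        if not dominated:
            front.append(r)
    return sorted(front, key=lambda r: r.cost_usd)

runs = [
    AgentRun("agent-a", accuracy=0.53, cost_usd=120.0),  # hypothetical numbers
    AgentRun("agent-b", accuracy=0.43, cost_usd=35.0),
    AgentRun("agent-c", accuracy=0.41, cost_usd=90.0),   # dominated by agent-b
]

for r in pareto_front(runs):
    print(f"{r.name}: {r.accuracy:.0%} at ${r.cost_usd:.0f}")
```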

They tested 57 agents across 22 architectures. Here’s the high-level result:

  • Asta v0 (their own agent): 53.0%, using 5 LLMs and 5 task-specific sub-agents

  • ReAct + GPT-5: 43.3%

  • Data analysis is still weak: no one scored above 34%

  • Literature understanding is strongest: 44 agents passed at least one benchmark

GPT-5 helped general agents (like ReAct), but oddly hurt specialized ones. So it might’ve been tuned specifically for ReAct-style workflows.

This is the benchmark to watch if you're building AI tools for research, medicine, engineering, or anything where trust, reproducibility, and cost matter.

We read your emails, comments, and poll replies daily

How would you rate today’s newsletter?

Your feedback helps us create the best newsletter possible


Hit reply and say Hello – we'd love to hear from you!
Like what you're reading? Forward it to friends, and they can sign up here.

Cheers,
The AI Fire Team
