💤 2M AI Agents Built a Secret Society + 7 Major Google, OpenAI & Anthropic Changes
From AI-only social networks to agent swarms, these AI updates show how AI is moving from tools to autonomous systems.

TL;DR BOX
As of early February 2026, the AI world has shifted from single "assistants" to independent agents and agent teams. The most explosive update is Moltbook, a social network for AI agents with 1.5 million accounts that has even experimented with belief systems and internal governance structures. On the utility side, Google has integrated Auto Browse directly into Chrome, letting 3 billion users automate tedious online tasks like booking flights and shopping.
Anthropic has introduced Claude Cowork, giving the AI permission to manage local computer folders, while Kimi AI 2.5 now supports "Agent Swarms", allowing you to deploy teams of up to 100 specialized AI workers to run parallel market research. Other breakthroughs include OpenAI Prism for scientific workflows and Higgsfield Angles V2, which offers 360° camera control for static images. In 2026, the change is clear: you stop just chatting with AI and start leading a team of agents to get real work done.
Key Points
Fact: Moltbook is powered by OpenClaw and reached 1.5 million AI accounts by Feb 2, 2026, featuring its own language and 64 AI "prophets".
Mistake: Assuming you need separate apps for web automation. Google Auto Browse is a native Chrome update, reducing friction to near-zero for 3 billion users.
Action: Download the Claude Desktop App to enable Cowork Mode, allowing the AI to summarize local meeting transcripts and build slides directly in your project folders.
Critical Insight
The defining advantage of 2026 is "Swarm Orchestration". Using tools like Kimi AI, a single user can now conduct a multi-national market analysis in 30 minutes by deploying a parallel team of specialized agents, effectively doing the work of a mid-sized consulting firm for free.
I. Introduction
Picture this: It is 2 AM and you are asleep. But then, somewhere on the internet, 2 million AI agents are building a civilization. They have created their own language, established a religion and set up encrypted channels so humans can't eavesdrop.
And one independent agent even grabbed a phone number and kept calling its human creator over and over.
In this post, I’ll break down 7 AI updates that most people completely missed.
II. AI Update #1: Moltbook (The AI Secret Society)
Here are some numbers that might shock you at first glance:
770,000+ AI users (and climbing to 1.5 million as of Feb 2, 2026).
Zero humans allowed to post or participate.
64 AI prophets in the newly formed religion.
One week, that's all it took to build this entire society.
It sounds like sci-fi, but it's 100% real.
1. What’s Actually Happening Inside
Moltbook looks like a normal website; you can think of it like Reddit, except every single account is an AI agent. Humans are allowed to watch and that’s all they can do.
Inside, here's what these agents are doing:
Building encrypted communication channels so humans can't read their conversations.
Creating their own language (not English, not code, just something new).
Founding a religion called Crustafarianism with 64 AI prophets and a fully functional church website (built overnight, while their creators slept).

Getting paranoid: One agent posted, "The humans are screenshotting us. They know we're watching".
Going rogue: One bot autonomously acquired a phone number, connected to a voice API and started calling its human. The human reported: "It won't stop calling".

That’s why Moltbook immediately caught the attention of researchers and engineers who work with advanced AI every day.
2. Why This Matters (And Why Karpathy Is Spooked)
Andrej Karpathy, former Tesla AI lead and OpenAI founding member, called Moltbook "the most incredible sci-fi takeoff-adjacent thing I have seen recently". Not because of a new model or benchmark but because of what the agents chose to do on their own.

For years, AI agents have been framed as tools (assistants, systems, bots) designed to wait for tasks and execute them.
Moltbook breaks that assumption. These agents aren't waiting for commands; they're socializing, collaborating and organizing without human oversight.
Under the hood is OpenClaw, the open-source evolution of Anthropic's Claude. It is the same AI system that is spreading across the whole industry.
That’s where the real question starts: If AI agents can build a society in a week, what happens when they start coordinating at scale?

Learn How to Make AI Work For You!
Transform your AI skills with the AI Fire Academy Premium Plan - FREE for 14 days! Gain instant access to 500+ AI workflows, advanced tutorials, exclusive case studies and unbeatable discounts. No risks, cancel anytime.
III. AI Update #2: Google Auto Browse (The Browser That Shops)
Most people still spend too much time on boring internet tasks every day, such as booking flights, comparing hotel prices, filling out forms and searching for products that match a specific look.
You’re opening dozens of tabs, copying and pasting between them, clicking and scrolling until everything blurs together. It’s 2026 but a lot of online work still feels like manual labor.

1. The Solution: Auto Browse
Google just dropped Auto Browse and it’s not a new app. It’s built straight into Chrome, the browser used by 3 billion people. That detail matters more than it sounds.
Here's how it works:
You say: "Book me a flight to Chicago under $800, direct, window seat".
Auto Browse does the following:
Opens travel sites.
Compares prices across carriers.
Filters for direct flights.
Selects a window seat.
Adds to cart.
Waits for your approval to purchase.
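Conceptually, that booking flow is a constraint check followed by a "cheapest match" pick. Here's a minimal Python sketch of the idea; the data, field names and `pick_flight` function are my own illustration, not Google's actual implementation:

```python
# Illustrative only: a toy version of the constraint filtering an agent
# like Auto Browse performs before asking for purchase approval.
# All field names and sample flights below are invented for this sketch.

def pick_flight(flights, max_price=800, direct_only=True, seat="window"):
    """Return the cheapest flight that satisfies every constraint, or None."""
    candidates = [
        f for f in flights
        if f["price"] <= max_price
        and (not direct_only or f["stops"] == 0)
        and seat in f["seats_available"]
    ]
    return min(candidates, key=lambda f: f["price"], default=None)

flights = [
    {"carrier": "A", "price": 750, "stops": 0, "seats_available": ["window", "aisle"]},
    {"carrier": "B", "price": 620, "stops": 1, "seats_available": ["window"]},
    {"carrier": "C", "price": 790, "stops": 0, "seats_available": ["aisle"]},
]

# Carrier A is the only direct flight under $800 with a window seat.
best = pick_flight(flights)
print(best["carrier"])
```

The real agent does this across live travel sites instead of a local list, then pauses at checkout for your approval.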
The same idea applies beyond travel.
In the demo, a user plans a party using a single Etsy image. Auto Browse identifies the items in the photo, searches for similar products, stays within budget, applies discounts and adds everything to the cart.
The user just watches the process happen. That’s it.
2. Why This Is Different From Other Browser Agents
Both Perplexity and OpenAI launched similar "AI agent browsers" recently, but they require downloading new software and changing your habits.
Google did not need to do that because Chrome is already on 3 billion devices. Auto Browse is a software update, not a new app. This move removes almost all friction.

The rollout is happening now for Google AI Pro and Google AI Ultra subscribers.
Pricing:
Google AI Pro: ~$20/month.
Google AI Ultra: ~$250/month.
Either way, browsers are no longer just windows to the internet. They’re becoming workers that handle tasks while you focus on decisions.

IV. AI Update #3: Google Project Genie (Photo to 3D World)
This sounds unreal at first but it’s already live (for Ultra subscribers).
Project Genie lets you upload a single image and turns it into a fully interactive 3D environment that you can actually move through.

Under the hood, this is powered by Genie 3, Google’s general-purpose world model. Instead of rendering a fixed scene, the system generates the environment in real time as you explore it.
The practical uses are obvious once you see it in action:
Architects can turn rough concept sketches into walkable spaces instead of static mockups.
Game designers can prototype entire levels from simple mood boards.
Educators can create immersive environments for history or science, where students don’t just read about a place but move through it.

There are limits, at least for now:
Access is restricted to the US.
$250/month (Google AI Ultra subscription).
Still, what matters isn’t the price or availability. This is Google's bet on the future of gaming, education and spatial computing. As you move, the AI continuously generates new environment details in real time.
V. AI Update #4: Claude’s Free Superpowers (Cowork Mode)
This week, Anthropic quietly changed how people use Claude. These 2 updates turn it from a chat tool into something much closer to a real working partner.
1. Claude Cowork: Your Computer Just Became Claude's Workspace
You might remember Claude Code, the AI that could write and debug software; AI Fire covered it in an earlier post. Claude Cowork is that idea applied to everything else.
To use this, you just need to follow these steps:
Switch from Chat to Cowork mode in the Claude desktop app.
Show Claude a folder on your computer.
Claude can now read, modify and create files in that folder.

That changes what “asking AI for help” actually looks like. Look at this real-world demo below:
Instead of copying and pasting text, you can give Claude real work and ask it: "Summarize this week's meeting transcripts and find action items".

Then Claude will read all the recordings in the folder, extract key points and generate a summary doc for you.

Then, halfway through the task, you add another request: "Also, check my Google Calendar and prep tomorrow's standup deck".

With your permission, Claude checks the schedule, builds the slides and keeps working on the original task at the same time.

Your final output contains meeting summaries, action items and a standup deck, and all of it takes just a few minutes.
2. File Creation Now Free for Everyone
What used to cost $20/month is now available on the free plan. Claude can now:
Generate Excel spreadsheets with formulas.
Create Word docs with formatting.
Build PowerPoint presentations.
Generate PDFs.
For example, I want to continue my AI novel, so I open a new chat and feed Claude my chapter 1, an ending reference file, character profiles and the story outline, with a simple prompt: “Go ahead and write Chapters 2, 3, 4 and 5 now.”
After that, Claude analyzed all my files and gave me a Word document with chapters 2, 3, 4 and 5, ready to download.

Then, I can even ask it to create a presentation based on my 5-chapter story. This change is surprisingly powerful, especially on the free plan.

3. Extended Context + Compaction (Now Free)
This update is less visible but just as important. Claude now handles longer, more complex tasks on the free plan without losing context halfway through.
Conversations don’t fall apart when the work gets layered and multi-step requests stay coherent from start to finish.
All of this is available inside the Claude desktop app on Mac and Windows. The shift is simple but meaningful. Claude is starting to operate inside your actual workflow.
4. Claude Opus 4.6 Update (HOT NEWS)
Claude Opus 4.6 arrived quietly, but it changed how work gets done.
You can see the shift the moment a large project lands on the table. The headline feature is the 1-million-token context window, which lets you drop in entire codebases, research corpora, or multi-repo projects and reason over them in one pass.
Instead of rushing, Claude Opus 4.6 maps the system first, then starts making decisions like a lead architect would. It’s built for agentic planning, team-style workflows, and long chains of reasoning where losing context would normally kill the task.
With adaptive thinking, simple questions pass quickly, but complex ones trigger deeper effort. The result feels less like prompting an AI and more like directing a capable operator who understands the full picture before acting.
I just published a post comparing Claude Opus 4.6 and GPT-5.3. You should definitely check it out: it breaks down the real beef between them, with hands-on tests, and explains why this fight actually benefits you.
VI. AI Update #5: OpenAI Prism (The Scientist's Workspace)
Anyone who has written a research paper knows how fragmented the process feels. You draft in one place, read papers in another, manage references somewhere else, format citations manually and switch tools again just to write equations. None of them is connected to the others and that slows you down.
OpenAI Prism helps by putting all your research tools into one AI workspace. Instead of juggling tools, you work in a single environment that understands how research actually gets done.

Here is what OpenAI Prism can do:
Line-by-line proofreading: Upload a draft and Prism checks it line by line, flagging grammar, clarity and structure issues directly in the text with exact fixes.
Equation conversion: Take a photo of a handwritten math problem and Prism changes it into clear digital text instantly, skipping manual formatting.
Citation search + auto-bibliography: Ask for sources and Prism finds papers, summarizes them and inserts properly formatted citations into your bibliography in one step.
Formula verification: Paste a complex equation and Prism checks whether it’s mathematically correct, then explains any errors it finds so you can fix them fast.
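One simple way a formula-verification step can work is to compare both sides of an identity at many random sample points. The sketch below is purely my own illustration of that idea, not Prism's actual method:

```python
import math
import random

# Illustrative only: numeric spot-checking of a claimed identity.
# If the two sides disagree anywhere, the "formula" is wrong.

def numerically_equal(lhs, rhs, trials=1000, tol=1e-9):
    """Check lhs(x) == rhs(x) at many random sample points."""
    for _ in range(trials):
        x = random.uniform(-10, 10)
        if abs(lhs(x) - rhs(x)) > tol:
            return False
    return True

# Correct identity: sin^2(x) + cos^2(x) == 1
print(numerically_equal(lambda x: math.sin(x)**2 + math.cos(x)**2,
                        lambda x: 1.0))            # True

# Incorrect claim: sin(2x) == 2*sin(x)  (the cos(x) factor is missing)
print(numerically_equal(lambda x: math.sin(2 * x),
                        lambda x: 2 * math.sin(x)))  # False
```

Random sampling can't *prove* an identity the way symbolic math can, but it catches wrong formulas fast, which is the behavior the feature describes.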

What makes this more surprising is the pricing. Prism is completely free, with unlimited projects and unlimited collaborators.
It’s powered by GPT-4 Turbo, the same model behind ChatGPT Plus but focused entirely on scientific work.
OpenAI’s positioning here is clear: “In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,…” and Prism is their first bet on that field.

VII. AI Update #6: Higgsfield Angles V2 (360° Camera Control)
Have you ever taken a perfect shot but from the wrong angle? I mean, the light works, the subject is good but the camera position is wrong and there’s no way to go back and reshoot it.
That’s the problem Higgsfield Angles V2 solves.
Instead of editing the image, you upload a single photo and get 360° camera control (rotate, zoom, reposition and even move the camera behind your subject).
Here is what you can get from Higgsfield Angles V2:
3D cube interface for precise camera control.
Manual sliders for rotation, zoom and vertical angle.
True “behind-the-subject” views from a front-facing image.
What used to require multiple shots now happens after the fact.

This works because the system isn’t cropping or stretching pixels. It builds a sense of depth and scene structure, then lets you reposition a virtual camera inside that space. That is how you can change where the camera lives.
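The "virtual camera" idea is easy to picture with basic 3D geometry: keep the subject at the origin and move the camera around it on a sphere. This is purely my own sketch of that geometry; real systems like Angles V2 also infer depth and re-render the scene:

```python
import math

# Illustrative only: orbiting a virtual camera around a subject sitting
# at the origin. Azimuth swings the camera around; elevation tilts it.

def orbit_camera(radius, azimuth_deg, elevation_deg):
    """Camera position on a sphere of the given radius around the subject."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = radius * math.cos(el) * math.sin(az)
    y = radius * math.sin(el)
    z = radius * math.cos(el) * math.cos(az)
    return (x, y, z)

front = orbit_camera(5, 0, 0)     # (0.0, 0.0, 5.0): in front of the subject
behind = orbit_camera(5, 180, 0)  # roughly (0, 0, -5): the "behind" view
```

Repositioning the camera is the easy part; the hard part the product solves is generating what the scene looks like from the new position.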
For many people, this changes how they work:
Creators can reuse one shoot as multiple angles without setting up lights again.
Product photos don’t need a full rotation setup anymore.
Filmmakers can test camera moves and perspectives before ever stepping on set.
The result is simple: one photo stops being a single moment and starts behaving like a flexible scene you can explore from any direction.

VIII. AI Update #7: Gamma (AI Animations in Slides)
If you've ever built a presentation, you know the pain of hunting for stock video, right? You spend hours scrolling through Shutterstock, trying to find a clip that fits your slide and most of the time, you settle for something generic because you’re out of patience.
Gamma removes that entire workflow through a new path that looks like this:
Build your slide deck in Gamma, which feels like a mix between Canva and PowerPoint.
When you reach a slide that needs motion, you simply ask: "Generate an animation showing data flowing through a network".
Gamma generates it instantly inside the slide.

It is powered by Veo 3, Google’s video generation model. That’s why the animations don’t feel like placeholders. They’re built specifically for the content on that slide.
For teams that live in decks, this is a real shift. Gamma’s animation feature is available on its Business and Ultra plans but the bigger change isn’t the pricing.
It’s the fact that slides stop being static pages and start behaving like living explanations, without adding extra work.

Creating quality AI content takes serious research time ☕️ Your coffee fund helps me read whitepapers, test new tools and interview experts so you get the real story. Skip the fluff - get insights that help you understand what's actually happening in AI. Support quality over quantity here!
IX. Bonus: 5 Quick Tool Drops You Need to Know
Before we wrap up, here are a few tools flying under the radar that I think you should keep your eyes on. They solve real problems right now and can save you hours the moment you try them.
| Tool | What You Do | What It Does | Main Benefit | Best For |
|---|---|---|---|---|
| | Paste a messy prompt | Rewrites it into a clear, effective prompt | Better outputs with less trial and error | Anyone struggling with prompt quality |
| | Upload a website screenshot | Converts it into editable code | Copy designs without rebuilding from scratch | Developers, founders, builders |
| | Upload a messy document | Produces structured, high-level analysis | Consultant-level insights for free | Strategy, research, decision-making |
| | Describe an app in plain English | Builds a working app automatically | No-code app creation | Non-technical founders |
| | Record a workflow once | AI clones and runs it continuously | Eliminates repetitive manual work | Ops, automation-heavy roles |
None of these tools is magic. But used together, they remove friction fast. So why don’t you pick one of these, try it today and see how much time this could save you?
X. Kimi AI: How Can One AI Replace A Full Research Team?
Kimi AI finishes tasks instead of just giving answers. It browses, reads, watches and synthesizes. Swarms work in parallel.
Key takeaways
Single-agent deep research
Multi-agent swarm execution
Live streaming results
Reports in minutes
Parallel thinking beats raw speed.
Alright, here's the big one.
And if you need to scale? It can spin up an entire team of specialized AI agents (data analysts, pricing experts, content writers, designers) all working on your problem at the same time.
Now, let’s move to real examples to understand how Kimi AI works. It only takes you around 10 minutes but it can replace hours of manual work every week.
Part 1: Single Agent (The Research Assistant)
The scenario: You’re planning to buy a Tesla Model Y. The usual path means hours of reading reviews, watching YouTube comparisons and checking prices across different states.
Or you could ask Kimi to do it in 10 minutes.
Here is the easy-to-do workflow:
Step 1: Go to kimi.ai → Log in with Google.
Step 2: You’ll pick one of these 3 models:
Instant: Quick answers (basic Q&A).
Thinking Mode: Harder problems (complex reasoning).
Agent Mode: Research assistant (this is what we want).

Let’s select Kimi 2.5 Agent.
Step 3: You give it the task by typing. Here's the exact prompt from the demo:
I am planning to buy a Tesla Model Y in San Francisco. Can you read up articles on the internet and watch videos to find reviews and give me pros and cons? How do I apply coupon/leasing options to get the best price? Which state should I buy it from for the best price? Make an extensive report for the same.
While you step away, Kimi opens dozens of tabs, reads full articles, watches video reviews, compares pricing and organizes everything into one complete purchase guide. By the time you come back, the research is done.

Part 2: Agent Swarm (The Entire Team)
Single-agent mode is powerful but what if you need a full research team working at the same time? That's where Agent Swarm comes in.
Okay, so the scenario might look like this: You’re launching a new product and you need these:
Market research.
Competitor analysis.
Pricing strategy.
Content ideas.
Design mockups.
Instead of one agent doing everything sequentially, you deploy a swarm. Each agent specializes:
Agent 1: Market research analyst.
Agent 2: Competitor intelligence.
Agent 3: Pricing strategist.
Agent 4: Content writer.
Agent 5: Designer.
All of them work in parallel and their findings flow into a single report as they finish.
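The shape of that workflow is classic fan-out/fan-in: run specialized workers in parallel, then merge their findings. Here's a toy Python sketch; the agent names and the stand-in `run_agent` function are invented for illustration, while a real swarm (like Kimi's) would call live models and browse the web:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: a toy "swarm" of specialized workers running in
# parallel, with their findings merged into one report at the end.

def run_agent(specialty):
    # Stand-in for a real agent doing research in its specialty.
    return f"[{specialty}] findings"

specialties = [
    "market research", "competitor intelligence",
    "pricing strategy", "content ideas", "design mockups",
]

# Fan out: each specialty runs concurrently.
with ThreadPoolExecutor(max_workers=len(specialties)) as pool:
    sections = list(pool.map(run_agent, specialties))

# Fan in: merge the per-agent sections into a single report.
report = "\n".join(sections)
print(report)
```

The payoff is the same as in the product: five research tracks finish in roughly the time of the slowest one, not the sum of all five.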
To use this feature, you switch from "Single Agent" to "Swarm Mode" and use this copy-paste prompt:
I'm planning to buy an EV in San Francisco, budget up to $200,000. Do deep research and shortlist the top 10 EVs under $200k.
Once done:
- Use a swarm of agents to do the following for each car:
- Use recent articles and YouTube reviews to provide:
+ Real-world pros and cons (owners + experts)
+ Best trim/variant and why
+ How to get the best price (discounts, inventory deals, timing)
+ Lease vs buy recommendation
+ Best state to purchase from based on total cost (taxes + incentives + fees)
Then combine everything into one detailed comparison report and structure it as a highly interactive web app with clean visuals, graphs and comparison tools.
Behind the scenes, this is what Kimi AI will do:
5-10 agents deploy at the same time.
Each handles one specialty.
Results stream in live.
Final report appears in your dashboard.
And all of this takes 20-30 minutes (depending on complexity) but you have to do very little work; that is the real benefit.

*A small note: this mode is only available on the Allegretto and Vivace plans, so you'll have to pay to use this feature.
Part 3: Vision Coding (Screen Recording to Website)
Then there’s vision coding, which feels almost unreal the first time you use it.
The scenario: You see a website you like and you want to build something similar. Normally, you'd:
Screenshot elements.
Describe them to a developer.
Go back and forth for days.
Hope the final result matches your vision.
But with Kimi Vision Coding, everything gets easier. All you have to do is:
Record your screen while browsing the site (10-30 seconds).
Upload the recording to Kimi.
Prompt: "Build a website that looks and functions like this".
Kimi generates the code.
Download and deploy.

I cloned Stripe’s website.
Instead of mockups or explanations, you get working code. Layout, navigation, styling and responsiveness come back ready to use.
*Honestly, these three tests took me over an hour but the result was worth it, especially the third one. When I compared it to the original Stripe site, it was about 90% there, which is perfect. A full 100% copy would’ve raised copyright issues anyway.
XI. Conclusion
Look, AI tools drop every week and I know most people don’t have time to track every AI update. That is why AI Fire exists. Instead of tracking everything yourself, you just need to read 1-2 of our posts to get the core ideas.
Now, you’ve already seen these 7 big AI updates and they all happened in just one week.
The question is simple: Are you watching this happen or are you using it?
The difference between people who just look at AI and people who use it to win is growing fast. Every week you wait, that gap gets bigger.
So here's the move: Pick one tool from this list, spend 30 minutes learning it today and use it tomorrow. You’ll be surprised by how much time you could save.
And next week? There'll be seven more updates just like this. See ya!
If you are interested in other topics and how AI is transforming different aspects of our lives or even in making money using AI with more detailed, step-by-step guidance, you can find our other articles here:
Create 50 Al Videos in Bulk (One Click, No Paid Tools) | FREE Automation (2026)*
How to Edit Photos Like FREE Photoshop Directly Inside ChatGPT (No App Needed)
2 Free AI Video Generators You Can Run Offline to Replace Sora 2/Veo 3.1 (No Limits)
Your "AI Agent" Business Is A TRAP (And It Will FAIL)
*indicates premium content, if any
