OpenAI and Anthropic join forces on agents

Plus: what 50M agent queries reveal, Shopify’s agentic storefronts, and more...

Edition 143 | December 11, 2025

Can someone wrap this up and put it under my tree, and throw in three months of Gemini Pro, Claude, and xAI subscriptions as stocking stuffers?

Welcome back to Building AI Agents, your biweekly guide to everything new in the field of agentic AI!

In today’s issue…

  • Agentic AI Foundation launches under the Linux Foundation

  • DeepSeek V3.2 challenges frontier models

  • What 50M agent searches reveal about real usage

  • 2,700 agents built to design lunar hardware

  • This week’s top agent courses

…and more

🔥 IN CASE YOU MISSED IT

Readers’ favorite items from the past week

  1. Google announced Workspace Studio, its entry into the no-code agent builder space, and why this is big

  2. Vercel replaced a $1M/year SDR team with a $1,000/year lead agent

  3. 📊 MIT’s new Iceberg Index tracks 13,000 AI agents currently doing human tasks and what it revealed

  4. A low-code Agentic AI course for tech professionals from Interview Kickstart

  5. 5-step guide to building an AI lead-gen agent (Gumloop)

📌 THE BRIEFING


OpenAI, Anthropic, and Block have co-founded the Agentic AI Foundation (AAIF), a new organization under the Linux Foundation dedicated to building open standards for AI agents. Each company is donating a flagship project: Anthropic contributes MCP (Model Context Protocol), now the de facto standard for connecting models to tools with 10,000+ published servers; Block donates goose, its open-source agent framework; and OpenAI brings AGENTS.md, already adopted by 60,000+ repos to give coding agents project-specific guidance.
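If you haven't used it, AGENTS.md is just a markdown file at a repo's root that coding agents read for project-specific instructions. A minimal sketch of what one might look like (the commands and conventions below are hypothetical placeholders, not from any real project):

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install` (example tooling; substitute your own).

## Testing
- Run `npm test` before committing; all tests must pass.

## Conventions
- Use TypeScript strict mode; avoid `any`.
- Keep commit messages in imperative mood.
```

Agents that support the format pick this up automatically, so guidance lives in the repo instead of being re-pasted into every prompt.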

AWS, Google, Microsoft, Bloomberg, and Cloudflare are joining as platinum members. The major players are betting agent interoperability should be open infrastructure, not proprietary lock-in. For builders, this means the foundational plumbing for multi-agent systems is consolidating around shared standards rather than fragmenting across vendor silos.

DeepSeek released V3.2, its most capable open-source model yet, claiming performance on par with GPT-5 and reasoning matching Gemini 3.0 Pro. The release introduces Sparse Attention, a new architecture that cuts compute costs for long-context tasks while preserving output quality, a meaningful edge for cost-conscious agent deployments.

V3.2 is also the first DeepSeek model to integrate thinking directly into tool use, trained on a massive agentic pipeline spanning 1,800+ simulated environments and 85,000+ complex instructions. A high-compute variant called V3.2-Speciale earned a gold-medal score at the 2025 International Mathematical Olympiad. For agent builders, V3.2 offers frontier-model capabilities at open-source prices.

🤝 WITH LEVANTA

Why AI Isn’t Replacing Affiliate Marketing After All

“AI will make affiliate marketing irrelevant.”

Our research shows the opposite.

Shoppers use AI to explore options, but they trust creators, communities, and reviews before buying. With fewer than 10 percent of shoppers clicking AI links, affiliate content now shapes both conversions and AI recommendations.

🤖 AGENT OF THE WEEK

👋 No Agent of the Week this week. We’ve got some big updates and releases coming—stay tuned for next week!

While we wait, I want your input on where to take this section next.

What should we do with "Agent of the Week"?


Till next week,
✌️ AP

P.S. If you have more you'd like to share, just reply to this email with your thoughts. I read every one!
