Nvidia's Nemotron coalition brings eight AI labs together to build open frontier models
Nemotron Coalition's first project is a base model co-developed with Mistral AI and open sourced on release.
Nvidia today announced the Nemotron Coalition at its GTC conference in San Jose, California, recruiting eight AI companies to co-develop open frontier models on Nvidia DGX Cloud, with the work feeding into the upcoming Nemotron 4 model family. Alongside the coalition, Nvidia released a new generation of open models spanning agentic AI, robotics, autonomous vehicles, and drug discovery.
The founding members are Black Forest Labs, Cursor, LangChain, Mistral AI, Perplexity, Reflection AI, Sarvam, and Thinking Machines Lab — the latter of which was founded by Mira Murati, who served as CTO of OpenAI until her departure in 2024.
"Open models are the lifeblood of innovation and the engine of global participation in the AI revolution," said Nvidia CEO Jensen Huang in an official press release.
The coalition's first deliverable is a base model co-developed by Nvidia and Mistral AI, trained on DGX Cloud. Other members will contribute data, evaluation frameworks, and domain expertise during post-training. Nvidia plans to open-source the model on completion and says it will “underpin” the upcoming Nemotron 4 family of models.
Contributions across the coalition span multimodal capabilities from Black Forest Labs, real-world coding performance benchmarks from Cursor, and agentic tool-use and long-horizon reasoning evaluation from LangChain, which reported over 100 million monthly downloads of its AI frameworks.
Nemotron 3 Ultra — the new flagship of the Nemotron family — runs on Nvidia's Blackwell platform and claims a 5x improvement in throughput efficiency using the NVFP4 numerical format. Nvidia says it will power “AI-native applications,” including coding assistants and complex workflow automation. Meanwhile, Nemotron 3 Omni combines audio, vision, and language understanding in a single model, while Nemotron 3 VoiceChat handles real-time, simultaneous listen-and-respond conversations by unifying automatic speech recognition, LLM processing, and text-to-speech in one system.
For robotics, Isaac GR00T N1.7 — an open reasoning vision language action (VLA) model purpose-built for humanoids — is now available for commercial real-world deployment. Huang also previewed GR00T N2 during his keynote, a next-generation robot foundation model that currently ranks first on both the MolmoSpaces and RoboArena leaderboards for generalist robot policies. “Built on a new world action model architecture, the model helps robots succeed at new tasks in new environments more than twice as often as leading VLA models,” says the official press release. Nvidia expects to ship GR00T N2 by the end of 2026.
Last but not least, Cosmos 3, a world foundation model unifying synthetic environment generation and physical AI reasoning, is also expected to arrive later this year. And the new Proteina-Complexa model, part of the BioNeMo platform, targets protein binder design for drug discovery; Novo Nordisk, Viva Biotech, and Manifold Bio are listed as early adopters.
Huang also announced NemoClaw, a software stack for the OpenClaw open-source agent platform. NemoClaw installs Nemotron models and the new OpenShell runtime in a single command, adding a sandboxed privacy and security layer beneath autonomous AI agents. It runs on dedicated local hardware, including GeForce RTX PCs and laptops, RTX PRO workstations, DGX Station, and DGX Spark. A local privacy router lets agents tap cloud-based frontier models while keeping data processing on-device when required.
Select Nvidia open models will be available on GitHub, Hugging Face, and as NIM microservices for deployment on Nvidia-accelerated infrastructure.