Mistral Small 3.1 Takes the Lead

Your Daily Dose of AI Goodness

Google Launches Gemini Canvas and Audio Overview

Google introduces Canvas, an interactive workspace for creating and editing content within Gemini. The update also adds Audio Overview, which creates podcast-style summaries from documents.

 

Mistral Small 3.1: Beating even the new Gemma 3 27B!

The TLDR
Mistral AI releases Small 3.1, a 24B-parameter model outperforming Gemma 3 27B and GPT-4o Mini on benchmarks. The Apache-licensed model runs on consumer hardware while handling images, multiple languages, and a 128K-token context window.

Mistral AI unveiled its latest open-source model yesterday: Mistral Small 3.1. With its 24 billion parameters, it outperforms comparable models such as Gemma 3 27B and GPT-4o Mini across a range of benchmarks, and does so at an impressive 150 tokens per second.

What makes the model special is that it combines multimodality, multilingual support, and an extended context window of 128,000 tokens, all under the Apache 2.0 license. Its hardware efficiency is remarkable: the model runs on a single RTX 4090 or a Mac with 32 GB of RAM, which makes it ideal for on-device applications.

The possible applications are diverse, ranging from fast conversational assistants and function calling to domain-specific fine-tuning. Particularly exciting is the potential for reasoning applications, such as those already built on the predecessor model.

Mistral Small 3.1 is now available via Hugging Face (in both Base and Instruct versions) and via Mistral's developer platform “La Plateforme”. Other cloud providers such as Google Cloud Vertex AI, NVIDIA NIM and Microsoft Azure AI Foundry will follow in the coming weeks.
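
If you want to try it right away, the quickest route is La Plateforme. Here is a minimal sketch using the official mistralai Python client; the "mistral-small-latest" alias and the example prompt are assumptions for illustration, so check Mistral's documentation for the exact identifier that points to Small 3.1.

```python
# Minimal sketch: querying Mistral Small 3.1 via La Plateforme.
# Assumes the mistralai Python SDK (pip install mistralai) and an API key in
# MISTRAL_API_KEY; the "mistral-small-latest" alias is an assumption, check
# Mistral's docs for the identifier that currently maps to Small 3.1.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="mistral-small-latest",  # assumed alias for Mistral Small 3.1
    messages=[
        {"role": "user", "content": "Summarize the key features of Mistral Small 3.1."}
    ],
)

print(response.choices[0].message.content)
```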

A remarkable step for the open source AI community, making compact models with premium performance more accessible.


 

Tencent Releases New 3D Generation Models

Tencent announces two upgraded 3D generation models on Hugging Face: Hunyuan3D 2.0 MV and Hunyuan3D 2.0 Mini. The MV variant's multi-view conditioning makes 3D content creation more controllable, while Mini is a lighter-weight version of the model.

 

Ultra-Fast Document Reading with SmolDocling

SmolDocling delivers state-of-the-art document OCR performance using only 0.5 GB of VRAM. The 256M-parameter open-source model processes a page in 0.35 seconds, outperforming much larger competitors.
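
To give a feel for how lightweight it is, here is a minimal sketch of running it locally with transformers. The checkpoint ID ("ds4sd/SmolDocling-256M-preview"), the prompt wording, and the vision-to-sequence loading path are assumptions based on the usual SmolVLM-style flow; see the model card on Hugging Face for the exact usage.

```python
# Minimal sketch: converting a page image to DocTags markup with SmolDocling.
# The checkpoint ID and prompt are assumptions for illustration; consult the
# model card on Hugging Face for the exact recommended usage.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

checkpoint = "ds4sd/SmolDocling-256M-preview"  # assumed model ID
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16
).to(device)

image = Image.open("page.png")  # a scanned or rendered document page

# Build a chat-style prompt with one image and a conversion instruction.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Convert this page to docling."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(device)

generated = model.generate(**inputs, max_new_tokens=1024)
# Decode only the newly generated tokens; the output is DocTags markup.
output = processor.batch_decode(
    generated[:, inputs.input_ids.shape[1]:], skip_special_tokens=False
)[0]
print(output)
```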

 
