Chinese AI startup DeepSeek has released two new AI models, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale. According to the company, the new models deliver performance on par with state-of-the-art models such as GPT-5 and Gemini 3 Pro, while cutting costs and remaining accessible under an open-source licence.
DeepSeek-V3.2 reportedly matches or comes close to the performance of Claude Sonnet 4.5, GPT-5, and Gemini 3 Pro in areas such as tool use and coding benchmarks. Meanwhile, the Speciale model reportedly achieved gold-medal-level scores at the 2025 International Mathematical Olympiad and the International Olympiad in Informatics.

Describing the new DeepSeek-V3.2 model, the company said its approach is built on three key technical breakthroughs: DeepSeek Sparse Attention (DSA), a Scalable Reinforcement Learning Framework, and a Large-Scale Agentic Task Synthesis Pipeline. The Chinese AI startup claimed that the DSA mechanism ‘substantially reduces computational complexity while preserving model performance’ and has been optimised for long-context scenarios. It essentially splits attention into two components: a lightweight indexer that cheaply scores how relevant each earlier token is to the current one, and a selection step that runs full attention only over the top-scoring tokens.
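To make that two-stage idea concrete, here is a minimal PyTorch sketch of indexer-then-select sparse attention. The split into a cheap scoring pass and a top-k selection follows DeepSeek’s public description of DSA, but the tensor shapes, the `top_k` value, and the small indexer dimension are illustrative assumptions, not the model’s actual configuration.

```python
import torch
import torch.nn.functional as F

def sparse_attention(q, k, v, idx_q, idx_k, top_k=64):
    # q, k, v:      [seq, d]   full attention inputs for one head
    # idx_q, idx_k: [seq, d_i] cheap indexer projections, with d_i << d
    seq, d = q.shape
    # Stage 1: the lightweight indexer scores every (query, key) pair.
    scores = idx_q @ idx_k.T                                   # [seq, seq]
    causal = torch.triu(torch.ones(seq, seq, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal, float("-inf"))
    # Stage 2: keep only the top-k keys per query; attend over those.
    k_eff = min(top_k, seq)
    top_idx = scores.topk(k_eff, dim=-1).indices               # [seq, k_eff]
    k_sel, v_sel = k[top_idx], v[top_idx]                      # [seq, k_eff, d]
    attn = (q.unsqueeze(1) * k_sel).sum(-1) / d ** 0.5         # [seq, k_eff]
    # Early rows have fewer than k_eff valid keys; re-mask any selected
    # future positions before the softmax.
    pos = torch.arange(seq).unsqueeze(1)
    attn = attn.masked_fill(top_idx > pos, float("-inf"))
    weights = F.softmax(attn, dim=-1)
    return (weights.unsqueeze(-1) * v_sel).sum(dim=1)          # [seq, d]

# Illustrative usage with random tensors:
seq, d, d_i = 256, 64, 16
q, k, v = (torch.randn(seq, d) for _ in range(3))
out = sparse_attention(q, k, v, torch.randn(seq, d_i), torch.randn(seq, d_i))
```

The point of the design is that the expensive softmax attention touches only `top_k` keys per query instead of all of them, which is where the claimed savings in long-context scenarios come from.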
The new models use the DeepSeek-V3 Mixture-of-Experts transformer, with about 671B total parameters and roughly 37B activated per token. Reportedly, DeepSeek Sparse Attention is the only structural change, introduced through continued pretraining.
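For readers unfamiliar with the Mixture-of-Experts pattern, the toy layer below shows how a router can activate only a couple of experts per token, which is how a model with 671B total parameters can use only about 37B of them per token. The sizes and the top-2 routing here are illustrative assumptions; DeepSeek’s real expert counts and router differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: a router picks the top-k experts
    per token, so only a fraction of the total parameters is active."""
    def __init__(self, d_model=512, d_ff=1024, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                         # x: [tokens, d_model]
        gate = F.softmax(self.router(x), dim=-1)  # [tokens, n_experts]
        weights, idx = gate.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        # Route each token only through its selected experts.
        for e, expert in enumerate(self.experts):
            hit = (idx == e)                      # [tokens, top_k]
            rows = hit.any(dim=-1)                # tokens routed to expert e
            if rows.any():
                w = (weights * hit).sum(dim=-1, keepdim=True)[rows]
                out[rows] += w * expert(x[rows])
        return out
```

With top-2 routing over 8 experts, each token passes through only a quarter of the expert parameters, even though all of them exist in memory.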
Moreover, DeepSeek-V3.2 also introduces some significant updates to its chat template compared to earlier versions. The primary changes include a revised format for tool calling and the introduction of a ‘thinking with tools’ capability, which reportedly lets the model interleave its reasoning with tool invocations.
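DeepSeek’s raw chat template is not reproduced here, so the snippet below is only a hypothetical illustration of what a ‘thinking with tools’ exchange could look like in the OpenAI-style message schema that DeepSeek’s API is compatible with. The specific field names and the `calculator` tool are assumptions for illustration, not the model’s actual template.

```python
# Hypothetical "thinking with tools" exchange: the assistant reasons,
# calls a tool mid-thought, receives the result, and can then resume.
messages = [
    {"role": "user", "content": "What is 37 * 43?"},
    {
        "role": "assistant",
        # Reasoning trace emitted before the tool call (illustrative field).
        "reasoning_content": "Large product; better to use the calculator.",
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "calculator",
                         "arguments": '{"expression": "37 * 43"}'},
        }],
    },
    {
        # The tool result is fed back so the model can finish reasoning.
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "1591",
    },
]
```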
DeepSeek rose to prominence in January 2025 after it introduced its DeepSeek-V3 and DeepSeek-R1 models, which matched the performance of OpenAI’s frontier models while remaining open source, allowing anyone to build on top of them. The earlier DeepSeek-V3 model stood out for its Mixture-of-Experts architecture, essentially a team of specialist sub-networks of which only a few are activated to answer each query.
© IE Online Media Services Pvt Ltd





