DeepSeek’s new AI model can generate 200K pages of training data daily on a single GPU


Chinese AI startup DeepSeek has released a new multimodal AI model, which it said is capable of processing large and complex documents using significantly fewer tokens.

The Hangzhou-based company said that DeepSeek-OCR uses visual perception as a medium to compress text for large language models (LLMs) more efficiently. Both the source code and weights of the model are publicly available via online developer platforms Hugging Face and GitHub. In its research, DeepSeek found that using “vision encoders” to compress text for LLMs would enable them to process massive amounts of text at lower computing costs.
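Since the weights are published on Hugging Face, loading them might look like the minimal sketch below. The repository ID deepseek-ai/DeepSeek-OCR and the use of trust_remote_code are assumptions on our part; the actual inference entry points are defined by the custom code shipped with the model repository, so its README is the authoritative reference.

```python
# Minimal, hypothetical loading sketch for DeepSeek-OCR weights from Hugging Face.
# The repository ID and inference interface are assumptions; consult the model
# card and README for the actual entry points.
from transformers import AutoModel, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-OCR"  # assumed repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,  # the repo ships custom code for its vision encoder
).eval()

# Inference is model-specific: the custom code in the repository exposes the
# call that turns a page image into text. See the repository's README.
```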

“Through DeepSeek-OCR, we demonstrate that vision-text compression can achieve significant token reduction (7-20×) for different historical context stages, offering a promising direction for addressing long-context challenges in large language models,” the company said in a technical paper accompanying the model’s release.

The launch of DeepSeek-OCR reflects the company’s continued focus on improving the efficiency of LLMs while driving down the costs of building and using them. The company is said to have taken a similar approach in developing its breakthrough open-weight models V3 and R1, which made waves across the tech industry for achieving performance comparable to cutting-edge models like OpenAI’s o1 at only a fraction of the cost.


Technical specs

With DeepSeek-OCR, the company aims to tackle a key limitation of LLMs: handling long contexts without running into memory limits. Its core hypothesis is that processing text as images can be more computationally efficient than processing raw digital text. The new OCR model serves as a proof-of-concept for this idea.

The model comprises two parts: a 380 million-parameter DeepEncoder that analyses each page image and produces a compressed representation of it, and a text decoder built on a three-billion-parameter mixture-of-experts (MoE) language model, of which about 570 million parameters are active at a time.
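This two-stage design can be pictured as an encode-then-decode pipeline. The sketch below is purely illustrative: the class names, token counts, and helper methods are hypothetical stand-ins meant to show the division of labour between the vision encoder and the MoE decoder, not DeepSeek’s implementation.

```python
# Purely illustrative sketch of a vision-text compression pipeline in the
# spirit of DeepSeek-OCR. All names, shapes, and numbers here are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class VisionTokens:
    """Compressed representation of a page image (e.g. ~100 tokens per page)."""
    embeddings: List[List[float]]


class ToyDeepEncoder:
    """Stand-in for the ~380M-parameter DeepEncoder."""

    def encode(self, page_image: bytes, n_tokens: int = 100) -> VisionTokens:
        # A real encoder would run a vision backbone and downsample its feature
        # map into a small, fixed budget of vision tokens.
        return VisionTokens(embeddings=[[0.0] * 1024 for _ in range(n_tokens)])


class ToyMoEDecoder:
    """Stand-in for the ~3B-parameter MoE decoder (~570M active parameters)."""

    def generate(self, vision_tokens: VisionTokens, prompt: str) -> str:
        # A real decoder would attend over the vision tokens and emit the
        # reconstructed text (formatted output, plain text, descriptions, ...).
        return f"<text reconstructed from {len(vision_tokens.embeddings)} vision tokens>"


def ocr_page(page_image: bytes) -> str:
    encoder, decoder = ToyDeepEncoder(), ToyMoEDecoder()
    return decoder.generate(encoder.encode(page_image), prompt="Transcribe this page.")
```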

DeepSeek’s researchers said that they trained the OCR model with 30 million PDF pages in roughly 100 languages, including 25 million in Chinese and English, along with 10 million synthetic diagrams, five million chemical formulae, and one million geometric figures.

Performance on benchmarks

The OCR model is capable of compressing text by up to a factor of ten while retaining 97 per cent of the original information, as per the technical paper. It can process a wide range of document types, including plain text, diagrams, chemical formulae, and geometric figures, and it can preserve the original formatting, output plain text, or provide general image descriptions. However, the number of ‘vision tokens’ required is likely to vary with document size and image resolution.
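To put the tenfold figure in perspective, the back-of-the-envelope calculation below uses illustrative numbers: the per-page text-token count is an assumption, while the compression ratio and retention figure are the ones reported in the paper.

```python
# Back-of-the-envelope comparison (illustrative numbers, not from the paper
# except where noted): a dense page costing ~1,000 raw-text tokens is instead
# represented by ~100 vision tokens at the reported ~10x compression ratio.
text_tokens_per_page = 1_000   # assumed raw-text cost of a dense page
compression_ratio = 10         # compression factor reported in the paper
retained_information = 0.97    # retention of original information reported in the paper

vision_tokens_per_page = text_tokens_per_page // compression_ratio
print(f"vision tokens per page: {vision_tokens_per_page}")                  # 100
print(f"tokens saved per page: {text_tokens_per_page - vision_tokens_per_page}")  # 900
print(f"retained information: {retained_information:.0%}")                  # 97%
```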


In sum, DeepSeek-OCR can generate training data for LLMs and vision language models (VLMs) at a scale of more than 200,000 pages per day while running on a single Nvidia A100 GPU.

The OCR model was evaluated on two benchmarks: OmniDocBench, which is used to assess a model’s document parsing capabilities, and Fox, which is used to assess how well vision language models focus on dense PDF documents.

“On OmniDocBench, it surpasses GOT-OCR2.0 (256 tokens/page) using only 100 vision tokens, and outperforms MinerU2.0 (6000+ tokens per page on average) while utilising fewer than 800 vision tokens,” the paper read.



