DeepSeek’s new AI model can generate 200K pages of training data daily on a single GPU | Technology News


Chinese AI startup DeepSeek has released a new multimodal AI model, which it said is capable of processing large and complex documents using significantly fewer tokens.

The Hangzhou-based company said that DeepSeek-OCR uses visual perception as a medium to compress text for large language models (LLMs) more efficiently. Both the source code and weights of the model are publicly available via the online developer platforms Hugging Face and GitHub. In its research, DeepSeek found that using “vision encoders” to compress text for LLMs would enable them to process massive amounts of text at lower computing costs.

“Through DeepSeek-OCR, we demonstrate that vision-text compression can achieve significant token reduction (7-20×) for different historical context stages, offering a promising direction for addressing long-context challenges in large language models,” the company said in a technical paper accompanying the model’s release.

The launch of DeepSeek-OCR reflects the company’s continued focus on improving the efficiency of LLMs while driving down the costs of building and using them. The company took a similar approach in developing its breakthrough open-weight models V3 and R1, which made waves across the tech industry for achieving performance comparable to cutting-edge models like OpenAI’s o1 at only a fraction of the cost.


Technical specs

With DeepSeek-OCR, the company aims to tackle a key limitation of LLMs: handling long contexts without running into memory limits. Its core hypothesis is that processing text as images can be more computationally efficient than processing raw digital text. The new OCR model serves as a proof-of-concept for this idea.

The model comprises two parts: a 380 million-parameter DeepEncoder that analyses each document image and produces a compressed representation of it; and a text generator built on a three billion-parameter mixture of experts (MoE) language model, of which roughly 570 million parameters are active for any given token.
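The two-stage design described above can be sketched in a few lines of toy Python. Everything here is illustrative: the function names, the patch counts, and the 10:1 downsampling are assumptions for the sake of the sketch, not DeepSeek’s actual code or API.

```python
# Toy sketch of the encoder-decoder split: an encoder squeezes a page
# image into a short sequence of "vision tokens", and a text generator
# decodes them back into text. All names and numbers are hypothetical.

def deep_encoder(page_image_patches):
    """Compress the patches of a page image into far fewer 'vision
    tokens' (here crudely modelled as keeping every 10th patch)."""
    return page_image_patches[::10]

def moe_decoder(vision_tokens):
    """Stand-in for the MoE text generator: in the real model only a
    ~570M-parameter subset of the 3B parameters fires per token."""
    return f"decoded text from {len(vision_tokens)} vision tokens"

patches = list(range(1000))      # pretend a page yields 1,000 image patches
tokens = deep_encoder(patches)   # compressed to 100 vision tokens
print(moe_decoder(tokens))
```

The point of the split is that the expensive language model only ever sees the short compressed sequence, not the full page.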

DeepSeek’s researchers said that they trained the OCR model with 30 million PDF pages in roughly 100 languages, including 25 million in Chinese and English, along with 10 million synthetic diagrams, five million chemical formulae, and one million geometric figures.

Performance on benchmarks

The OCR model is capable of compressing text by up to a factor of ten while retaining 97 per cent of the original information, as per the technical paper. It can be used to process a wide range of document types, including plain text, diagrams, chemical formulae, and geometric figures, while preserving the original formatting, outputting plain text, and even providing general image descriptions. However, the number of ‘vision tokens’ required is likely to vary with document size and image resolution.
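To make the compression ratios concrete, here is the back-of-the-envelope arithmetic at the 7–20× range quoted in the paper. The 1,500-token page is an assumed example, not a figure from the paper.

```python
# Token savings at the compression ratios quoted in the paper (7-20x).
# The per-page text-token count is an illustrative assumption.
text_tokens_per_page = 1500

for ratio in (7, 10, 20):
    vision_tokens = text_tokens_per_page / ratio
    print(f"{ratio}x compression: ~{vision_tokens:.0f} vision tokens per page")
```

At 10× compression, a page that would normally cost 1,500 text tokens fits in roughly 150 vision tokens, which is where the long-context savings come from.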


In sum, DeepSeek-OCR can generate training data for LLMs and vision language models (VLMs) at a scale of more than 200,000 pages per day while running on a single Nvidia A100 GPU.
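For a sense of scale, the headline figure works out as follows (simple arithmetic on the number quoted above):

```python
# What 200,000 pages/day on a single GPU comes to per second and per minute.
pages_per_day = 200_000
seconds_per_day = 24 * 60 * 60

print(f"{pages_per_day / seconds_per_day:.2f} pages per second")
print(f"{pages_per_day / (24 * 60):.0f} pages per minute")
```

That is a sustained throughput of more than two pages every second, around the clock, on one A100.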

The OCR model was evaluated on two benchmarks: OmniDocBench, which tests a model’s document parsing capabilities, and Fox, which measures how well vision language models focus on dense PDF documents.

“On OmniDocBench, it surpasses GOT-OCR2.0 (256 tokens/page) using only 100 vision tokens, and outperforms MinerU2.0 (6000+ tokens per page on average) while utilising fewer than 800 vision tokens,” the paper read.





