Wikimedia Deutschland announced a new database today, October 1, that allows AI models to access Wikipedia’s extensive knowledge base.
The project, called the Wikidata Embedding Project, uses vector-based semantic search, a method that helps computers capture the meaning of and relationships between words, to search more than 120 million entries from Wikipedia and its sister sites.
The initiative improves the accessibility of data for natural language queries from large language models (LLMs) and introduces support for the Model Context Protocol (MCP), a standard enabling communication between AI systems and data sources.
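In practical terms, MCP support means the database can be exposed to AI assistants as a callable tool. The sketch below shows roughly what such an integration could look like using the reference MCP Python SDK; the server name, the tool name and the placeholder search logic are illustrative assumptions, not part of the announced project.

```python
# Illustrative sketch only: exposes a hypothetical Wikidata semantic-search
# helper as an MCP tool, using the reference Model Context Protocol Python SDK
# (pip install "mcp[cli]"). The search logic itself is a stand-in.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wikidata-semantic-search")  # assumed server name

@mcp.tool()
def search_wikidata(query: str, limit: int = 5) -> list[str]:
    """Return entries semantically related to the query (placeholder logic)."""
    # A real implementation would query the project's vector index here;
    # this stub simply echoes the query back.
    return [f"result {i} for '{query}'" for i in range(limit)]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable client can call it
```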
The project was developed by Wikimedia’s German division in partnership with Jina, a neural search company, and DataStax, a real-time training data provider owned by IBM.
Although Wikidata has long provided machine-readable data from Wikimedia projects, existing tools were limited to keyword searches and the specialised query language SPARQL. The new approach is more compatible with retrieval-augmented generation (RAG) systems, which allow AI models to incorporate external data, helping developers build models based on verified Wikipedia content.
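For comparison, the structured access that has been available until now runs through the public Wikidata Query Service and its SPARQL endpoint. A brief sketch, assuming the standard endpoint at https://query.wikidata.org/sparql; the query simply lists a handful of items whose occupation (property P106) is scientist (item Q901).

```python
# Sketch of the existing structured approach: a SPARQL query sent to the
# public Wikidata Query Service. P106 = "occupation", Q901 = "scientist".
import requests

SPARQL = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P106 wd:Q901 .                      # occupation: scientist
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": SPARQL, "format": "json"},
    headers={"User-Agent": "embedding-project-demo/0.1"},  # courtesy header
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```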
The data is organised to provide key semantic context. For instance, searching for “scientist” yields lists of notable nuclear scientists, those affiliated with Bell Labs, translations of the term in various languages, images of scientists at work, and related concepts such as “researcher” and “scholar.”
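That "related concepts" behaviour comes from comparing vectors rather than matching strings. A rough illustration of the idea using a generic open-source sentence-embedding model; the model choice and the candidate terms are assumptions for the demo, not the project's actual setup.

```python
# Toy illustration of vector-based semantic search: embed a query and some
# candidate labels, then rank the candidates by cosine similarity. Uses a
# generic open-source model; the real project relies on its own index.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "scientist"
candidates = ["researcher", "scholar", "nuclear physicist", "Bell Labs", "banana"]

q_vec = model.encode(query, convert_to_tensor=True)
c_vecs = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(q_vec, c_vecs)[0]          # cosine similarity per candidate
for term, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {term}")                # semantically related terms rank highest
```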
This publicly accessible database is available on Toolforge, and Wikidata will host a developer webinar on October 9. The initiative comes at a time when AI developers are seeking high-quality data sources to improve model training.
As training systems become more sophisticated and operate within complex environments, they require carefully curated data for optimal performance. Reliable data is especially vital for applications demanding high accuracy; despite some scepticism towards Wikipedia, its information tends to be far more factual than that found in broad datasets such as Common Crawl, which scrapes diverse web pages from across the internet.
The demand for quality data can prove costly for AI companies, as highlighted by Anthropic’s $1.5 billion settlement of a lawsuit brought by authors whose works featured in training data.
Wikidata AI project manager Philippe Saadé stated that the project remains independent of major AI labs, emphasising that the launch of the Embedding Project demonstrates that powerful AI can be open and collaborative, rather than monopolised by large corporations.
© IE Online Media Services Pvt Ltd