DeepSeek Doubles Down on Open Source with Inference Engine, Tools, and Model Releases
The announcement also highlights the company’s broader push to release core components of its AI models

Chinese AI lab DeepSeek has announced plans to open-source its inference engine, deepening its commitment to the open-source community. To that end, the company is collaborating closely with existing open-source projects and frameworks.
Previously, DeepSeek had cited challenges such as infrastructure complexity and codebase divergence that hindered the open-sourcing of its engine. However, the latest update signals renewed progress in overcoming these obstacles.
During Open Source Week, DeepSeek released five high-performance infrastructure tools designed to boost the scalability and training efficiency of large language models.
"We are deeply grateful for the open-source ecosystem, without which our progress toward AGI would not be possible. Our training framework relies on PyTorch, and our inference engine is built upon vLLM, both of which have been instrumental in accelerating the training and deployment of DeepSeek models," the company said.
In partnership with Tsinghua University, DeepSeek also unveiled a new study focused on improving reward modelling in LLMs through increased inference-time compute. This research led to the development of DeepSeek-GRM, which will also be open-sourced.
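The article does not detail DeepSeek-GRM's method, but the general idea of spending more inference-time compute on reward modelling can be sketched as sampling a (noisy) reward judge several times and aggregating the judgments to get a lower-variance reward estimate. The judge below is a hypothetical stand-in for a single LLM reward evaluation, not DeepSeek's actual model:

```python
import random
import statistics

def noisy_reward_judge(true_quality: float, rng: random.Random) -> float:
    # Hypothetical stand-in for one LLM reward judgment:
    # the response's true quality plus sampling noise.
    return true_quality + rng.gauss(0.0, 0.5)

def scaled_reward(true_quality: float, n_samples: int, seed: int = 0) -> float:
    # "More inference-time compute" here simply means querying the
    # judge n_samples times and averaging, which shrinks the
    # variance of the reward estimate roughly as 1/n_samples.
    rng = random.Random(seed)
    samples = [noisy_reward_judge(true_quality, rng) for _ in range(n_samples)]
    return statistics.mean(samples)
```

Averaging 64 judgments instead of one costs 64x the compute at inference time but yields a markedly more reliable reward signal; this trade-off, rather than this exact averaging scheme, is the point of such research.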
Recently, DeepSeek released an update to its DeepSeek-V3 model, with version V3-0324 now ranking highest on benchmark tests among all non-reasoning models.
According to Artificial Analysis, it marks the first time an open-weights model has topped this category on its Intelligence Index.