DeepSeek R1: Open Reasoning Models

ai machine-learning open-source

The reasoning model race just got interesting. DeepSeek released R1, an open-source model that competes with OpenAI’s o1 on logic and reasoning tasks. For the first time, you can run a reasoning model locally.

What is DeepSeek R1?

R1 is a reasoning model—one that “thinks” before responding. Unlike standard LLMs that generate tokens immediately, reasoning models spend compute time working through problems step by step.

Standard LLM: Question → Answer (fast, sometimes wrong)
Reasoning Model: Question → Extended thinking → Answer (slower, more accurate)

DeepSeek open-sourced R1 with weights, training methodology, and distilled versions for local use.

Why This Matters

Open Source Catches Up

Until now, reasoning models were proprietary, available only behind hosted APIs.

R1 changes this. Download it, run it locally, fine-tune it for your use case.

The Distillation Approach

DeepSeek didn’t just release the full model—they released distilled versions:

| Model | Size | Use Case |
|---------|----------|---------------------|
| R1-Full | 671B MoE | Maximum capability |
| R1-32B | 32B | Server deployment |
| R1-14B | 14B | Local/edge |
| R1-7B | 7B | Mobile/embedded |
| R1-1.5B | 1.5B | Ultra-light |

The smaller models retain surprising reasoning capability through distillation from the full model.
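Which distilled size fits your hardware? A rough rule of thumb (my heuristic, not an official DeepSeek figure) is about 2 bytes per parameter in fp16, plus some overhead for activations and KV cache:

```python
def fp16_memory_gb(params_billions: float, overhead: float = 1.2) -> float:
    """Rough memory estimate: 2 bytes/param in fp16, plus ~20% overhead
    for activations and KV cache. A heuristic, not an official figure."""
    return params_billions * 2 * overhead

for name, size in [("R1-7B", 7), ("R1-14B", 14), ("R1-32B", 32)]:
    print(f"{name}: ~{fp16_memory_gb(size):.0f} GB")
```

By this estimate the 7B fits a 24 GB consumer GPU, while the 32B wants a server card; quantized builds (as Ollama ships) need considerably less.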

Benchmarks

On reasoning benchmarks:

| Benchmark | GPT-4o | o1-preview | R1-Full |
|--------------|--------|------------|---------|
| MATH-500 | 74.6% | 85.5% | 97.3% |
| AIME 2024 | 9.3% | 44.6% | 79.8% |
| GPQA Diamond | 53.6% | 75.2% | 71.5% |
| Codeforces | 23.0% | 62.0% | 96.3% |

R1 matches or exceeds o1 on many tasks—and it’s open.

How to Run R1 Locally

With Ollama

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Run R1 distilled versions
ollama run deepseek-r1:7b
ollama run deepseek-r1:14b
ollama run deepseek-r1:32b

With vLLM

from vllm import LLM, SamplingParams

# Load a distilled R1 checkpoint from Hugging Face
llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-14B")
# Leave generous max_tokens: the thinking phase alone can run long
sampling_params = SamplingParams(temperature=0.6, max_tokens=4096)

prompts = ["Solve step by step: What is 23 * 47?"]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.outputs[0].text)

The Thinking Process

R1 shows its reasoning in <think> tags:

User: What's the probability of getting at least one 6 in four dice rolls?

R1: <think>
Let me work through this step by step.

The probability of NOT getting a 6 on a single roll is 5/6.
The probability of NOT getting a 6 on all four rolls is (5/6)^4.
(5/6)^4 = 625/1296 ≈ 0.482

Therefore, the probability of getting at least one 6 is:
1 - 625/1296 = 671/1296 ≈ 0.518
</think>

The probability of rolling at least one 6 in four dice rolls is approximately 51.8% (or exactly 671/1296).

The thinking is visible, debuggable, and interpretable.
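Because the reasoning arrives as plain text, you can separate it from the final answer with a simple regex. This is a minimal sketch; `split_thinking` is a hypothetical helper, not part of any DeepSeek SDK:

```python
import re

def split_thinking(response: str) -> tuple[str, str]:
    """Split an R1-style response into (thinking, answer).
    Assumes at most one <think>...</think> block, as R1 emits."""
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if match is None:
        return "", response.strip()
    thinking = match.group(1).strip()
    answer = response[match.end():].strip()
    return thinking, answer

raw = "<think>\n(5/6)^4 = 625/1296, so 1 - 625/1296 = 671/1296.\n</think>\nAbout 51.8%."
thinking, answer = split_thinking(raw)
print(answer)  # → About 51.8%.
```

Logging the `thinking` half separately is handy for debugging: you can audit the chain of reasoning without showing it to end users.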

Use Cases

Code Generation

Reasoning models excel at complex coding tasks: multi-step algorithm design, tricky debugging, and changes that must respect many constraints at once.

Math and Science

Step-by-step derivations, proofs, and calculations.

Planning and Analysis

Breaking down complex problems into actionable steps.

Document Analysis

Understanding and synthesizing long documents with logical consistency.

Limitations

Speed

Reasoning takes time. R1 is slower than non-reasoning models for simple queries.

Context Usage

Extended thinking consumes context window. Plan accordingly.
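One way to plan: carve an explicit thinking budget out of the context window before sizing your prompt. The numbers below are illustrative, not model limits:

```python
def prompt_budget(context_window: int, thinking_budget: int, answer_budget: int) -> int:
    """Tokens left for the prompt after reserving space for
    extended thinking and the final answer."""
    remaining = context_window - thinking_budget - answer_budget
    if remaining <= 0:
        raise ValueError("budgets exceed the context window")
    return remaining

# e.g. a 32K window with 8K reserved for thinking and 1K for the answer
print(prompt_budget(32_768, 8_192, 1_024))  # → 23552
```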

Overkill for Simple Tasks

“What’s the capital of France?” doesn’t need a reasoning model.
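A cheap mitigation is to route by heuristic and send only hard prompts to R1. The keyword list and model names here are a made-up illustration, not a recommended classifier:

```python
REASONING_HINTS = ("prove", "step by step", "derive", "optimize", "debug")

def pick_model(prompt: str) -> str:
    """Route prompts: long or reasoning-flavored ones go to R1,
    everything else to a fast non-reasoning model. Heuristic only."""
    p = prompt.lower()
    if len(p.split()) > 50 or any(hint in p for hint in REASONING_HINTS):
        return "deepseek-r1:14b"
    return "llama3.1:8b"  # any fast general-purpose model

print(pick_model("What's the capital of France?"))      # → llama3.1:8b
print(pick_model("Prove that sqrt(2) is irrational."))  # → deepseek-r1:14b
```

In production you would likely use a small classifier model rather than keywords, but the principle stands: pay the reasoning tax only when the question earns it.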

Integration with Existing Tools

LangChain

from langchain_community.llms import Ollama

# Requires a local Ollama server with the model already pulled
llm = Ollama(model="deepseek-r1:14b")
response = llm.invoke("Explain the CAP theorem and its implications for distributed databases")
print(response)

API Deployment

# FastAPI endpoint
from fastapi import FastAPI
from vllm import LLM, SamplingParams

app = FastAPI()
llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-14B")

@app.post("/reason")
def reason(prompt: str):
    # llm.generate blocks; a sync handler lets FastAPI run it in a
    # threadpool instead of stalling the event loop
    outputs = llm.generate([prompt], SamplingParams(temperature=0.6, max_tokens=4096))
    return {"response": outputs[0].outputs[0].text}

What This Means for 2025

R1 democratizes reasoning models: the weights, the training methodology, and the distilled versions are all free to download, run, and build on.

The gap between open and closed AI just narrowed significantly.

Final Thoughts

DeepSeek R1 is a milestone. Open-source reasoning models enable a new class of applications—ones that think before they speak.

Download it. Run it locally. Build with it. The future of AI reasoning is now open.


Think before you speak. Even if you’re an AI.
