DeepSeek R1 Distill Qwen 1.5B is a distilled large language model based on Qwen 2.5 Math 1.5B, fine-tuned using outputs from DeepSeek R1. It is a very small and efficient model that nevertheless outperforms GPT-4o-0513 on math benchmarks.
Other benchmark results include (scoring for pass@1 and cons@64 is sketched below):
- AIME 2024 pass@1: 28.9
- AIME 2024 cons@64: 52.7
- MATH-500 pass@1: 83.9
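For reference, pass@1 averages correctness over individually sampled answers, while cons@64 takes a majority vote over 64 sampled answers and scores only the winning answer. A minimal sketch of both metrics for a single problem, using hypothetical answer strings:

```python
from collections import Counter

def pass_at_1(sample_answers, correct_answer):
    # Average per-sample accuracy: fraction of samples whose final answer is correct.
    return sum(a == correct_answer for a in sample_answers) / len(sample_answers)

def cons_at_k(sample_answers, correct_answer):
    # Consensus scoring: majority-vote the sampled answers, then check the winner.
    majority_answer, _ = Counter(sample_answers).most_common(1)[0]
    return float(majority_answer == correct_answer)

# Hypothetical spread of 64 sampled final answers for one AIME-style problem.
samples = ["113"] * 40 + ["112"] * 24
print(pass_at_1(samples, "113"))  # 0.625 -- per-sample accuracy
print(cons_at_k(samples, "113"))  # 1.0   -- the majority answer is correct
```

This is why cons@64 (52.7) can sit well above pass@1 (28.9): problems the model solves only part of the time can still be recovered by majority voting.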
By distilling DeepSeek R1's reasoning outputs into a compact base model, it achieves performance competitive with much larger frontier models on these reasoning benchmarks.
Providers for R1 Distill Qwen 1.5B
OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
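Requests go through OpenRouter's OpenAI-compatible chat completions API; below is a minimal sketch in Python, assuming the model slug deepseek/deepseek-r1-distill-qwen-1.5b and an API key stored in an OPENROUTER_API_KEY environment variable (both are assumptions to verify against your OpenRouter account):

```python
import os
import requests

# Minimal sketch: query the model through OpenRouter's chat completions endpoint.
# The model slug and the environment variable name below are assumptions.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "deepseek/deepseek-r1-distill-qwen-1.5b",
        "messages": [
            {"role": "user", "content": "What is the sum of the first 50 positive integers?"}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If the preferred provider is unavailable, OpenRouter falls back to another provider serving the same model without any change to the request.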