
SmallCoder is a compact reasoning-focused coding model, fine-tuned from DeepSeek-R1 1.5B using a code dataset that includes step-by-step reasoning.

Tag: 1.5b

ollama run DedeProGames/smallcoder:1.5b

Details

Updated: 2 weeks ago

Digest: 3bc49891ec47 · Size: 1.1GB
Architecture: qwen2 · Parameters: 1.78B · Quantization: Q4_K_M
License: MIT (Copyright (c) 2025 Agentica)
Default parameters: { "temperature": 0.6, "top_p": 0.95 }
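If you want to override the published defaults (temperature 0.6, top_p 0.95) or add a system prompt, the model can be wrapped in a custom Ollama Modelfile. A minimal sketch; the SYSTEM prompt text and the derived model name are illustrative, not part of the published model:

```
# Build on the published SmallCoder tag
FROM DedeProGames/smallcoder:1.5b

# Sampling settings (these match the model's published defaults; adjust as needed)
PARAMETER temperature 0.6
PARAMETER top_p 0.95

# Hypothetical system prompt for coding-assistant use
SYSTEM """You are a concise coding assistant. Reason step by step before writing code."""
```

Build and run the customized model with `ollama create my-smallcoder -f Modelfile` followed by `ollama run my-smallcoder`.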

Readme


Welcome to SmallCoder, a compact reasoning-focused coding model fine-tuned from DeepSeek-R1 1.5B using a code dataset that includes step-by-step reasoning.
It is designed to deliver strong coding performance for its parameter size, emphasizing structured reasoning and correctness during problem solving.


Overview

SmallCoder was fine-tuned from DeepSeek-R1 1.5B on a code dataset that pairs problems with step-by-step reasoning.
It targets strong code generation for its parameter size, prioritizing correctness-oriented reasoning during coding tasks.

The training approach focuses on exposing the model to structured, multi-step solutions so it can better handle logic-heavy programming tasks while remaining lightweight and efficient.


Benchmark Results

| Model | LCB (v5) (8/1/24–2/1/25) | Codeforces Rating | Codeforces Percentile | HumanEval+ |
| --- | --- | --- | --- | --- |
| SmallCoder | 25.1 | 963 | 28.5 | 73.0 |
| DeepSeek-R1-Distill-Qwen-1.5B | 16.9 | 615 | 1.9 | 58.3 |

These results highlight improvements in reasoning-driven coding performance while maintaining a compact model size.

SmallCoder aims to balance efficiency and reasoning depth, making it suitable for local deployment, experimentation, and lightweight coding assistance workflows.
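For local deployment and scripted workflows, the model can be called through Ollama's local HTTP API. A minimal Python sketch, assuming a running Ollama server on the default port 11434; the prompt text and helper names are illustrative:

```python
import json
import urllib.request

# Default local Ollama generate endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt: str, temperature: float = 0.6, top_p: float = 0.95) -> dict:
    """Build a generate-request payload using the model's published default sampling settings."""
    return {
        "model": "DedeProGames/smallcoder:1.5b",
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a token stream
        "options": {"temperature": temperature, "top_p": top_p},
    }


def generate(prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the completion text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server with the model pulled):
# generate("Write a Python function that reverses a string.")
```

Keeping `stream` set to `False` simplifies scripting; for interactive use, streaming responses token by token gives faster perceived latency.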