
Jan-v3 is a compact 4B-parameter model distilled from a larger teacher, retaining strong general performance and broad applicability while mitigating the capability trade-offs typical of models this size.

Capabilities: tools

ollama run fredrezones55/Jan-v3

Applications

Claude Code: ollama launch claude --model fredrezones55/Jan-v3
Codex: ollama launch codex --model fredrezones55/Jan-v3
OpenCode: ollama launch opencode --model fredrezones55/Jan-v3
OpenClaw: ollama launch openclaw --model fredrezones55/Jan-v3



Source: Hugging Face

Jan-v3-4B-base-instruct: a 4B baseline model for fine-tuning


Overview

Jan-v3-4B-base-instruct is a 4B-parameter model obtained via post-training distillation from a larger teacher, transferring the teacher's capabilities while preserving general-purpose performance on standard benchmarks. The result is a compact, ownable base that is straightforward to fine-tune, broadly applicable, and largely free of the usual capacity–capability trade-offs.

Building on this base, Jan-Code, a code-tuned variant, will be released soon.

Model Overview

This repo contains the BF16 version of Jan-v3-4B-base-instruct, which has the following features:

  • Type: Causal Language Model
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 4B in total
  • Number of Layers: 36
  • Number of Attention Heads (GQA): 32 for Q and 8 for KV
  • Context Length: 262,144 tokens natively
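
For quick local experimentation, here is a minimal sketch using Hugging Face transformers. It assumes the janhq/Jan-v3-4B-base-instruct repo follows the standard causal-LM layout with a chat template; the prompt is illustrative, and this is not an official example.

# Minimal sketch: load the BF16 checkpoint with Hugging Face transformers.
# Assumes a standard causal-LM repo layout with a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "janhq/Jan-v3-4B-base-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)

messages = [{"role": "user", "content": "Explain grouped-query attention in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling values taken from the Recommended Parameters section below.
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True,
                         temperature=0.7, top_p=0.8, top_k=20)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))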

Intended Use

  • A better small base for downstream work: improved instruction following out of the box, a strong starting point for fine-tuning, and effective lightweight coding assistance.

Performance

(benchmark comparison figure)

Quick Start

Integration with Jan Apps

A Jan-v3 demo is hosted on Jan Browser at chat.jan.ai. The model is also optimized for direct integration with Jan Desktop: select it in the app to start using it.

Local Deployment

Using vLLM:

vllm serve janhq/Jan-v3-4B-base-instruct \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
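
With --enable-auto-tool-choice and the hermes parser, the server exposes OpenAI-compatible tool calling. Below is a minimal sketch using the openai Python client; the get_weather tool is a hypothetical example, not part of the model release.

# Sketch: tool calling against the local vLLM server started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="janhq/Jan-v3-4B-base-instruct",
    messages=[{"role": "user", "content": "What's the weather in Hanoi?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)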

Using llama.cpp:

llama-server --model Jan-v3-4B-base-instruct-Q8_0.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
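
llama-server exposes the same OpenAI-compatible /v1/chat/completions endpoint, so it can be queried without an SDK. A minimal sketch using the requests library (the model field is largely ignored by llama-server, which serves the loaded GGUF):

# Sketch: plain HTTP chat completion against the llama.cpp server above.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "Jan-v3-4B-base-instruct",
        "messages": [{"role": "user", "content": "Write a haiku about small models."}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])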

Recommended Parameters

For optimal performance in agentic and general tasks, we recommend the following inference parameters:

temperature: 0.7
top_p: 0.8
top_k: 20
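
These can be set per request. top_k is not part of the standard OpenAI schema, so with the openai client it is passed via extra_body, which both vLLM and llama-server accept; this is a sketch, not an official snippet.

# Sketch: applying the recommended sampling parameters per request.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="none")
response = client.chat.completions.create(
    model="janhq/Jan-v3-4B-base-instruct",
    messages=[{"role": "user", "content": "Outline a three-step refactor plan."}],
    temperature=0.7,
    top_p=0.8,
    extra_body={"top_k": 20},  # non-standard parameter, forwarded to the backend
)
print(response.choices[0].message.content)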

🤝 Community & Support

📄 Citation

Updated Soon