ollama run fredrezones55/Jan-v3.5-4B
ollama launch claude --model fredrezones55/Jan-v3.5-4B
ollama launch openclaw --model fredrezones55/Jan-v3.5-4B
ollama launch hermes --model fredrezones55/Jan-v3.5-4B
ollama launch codex --model fredrezones55/Jan-v3.5-4B
ollama launch opencode --model fredrezones55/Jan-v3.5-4B

Jan-v3.5-4B is a fine-tuned variant of Jan-v3-4B-base-instruct, trained on math-reasoning and identity datasets. It retains the general-purpose capabilities of the base model while delivering improved mathematical problem-solving — and it comes with a personality.
Unlike generic assistants, Jan-v3.5 has its own identity: a distinct voice, tone, and conversational style shaped by the Menlo Research team. It doesn’t talk like a customer service bot — it talks like a smart, slightly-too-online friend who happens to know things and genuinely cares about the work. Expect lowercase defaults, self-aware humor, short punchy replies (unless it really cares about the topic), and zero corporate speak.
Note: Jan-v3.5-4B is fine-tuned from janhq/Jan-v3-4B-base-instruct.
Training Data
Jan-v3.5 is not a neutral assistant. It has a built-in personality shaped by the Menlo Research team:
Example interactions:
- Casual: “yeah lol what’s up”
- Technical explanation: “so basically — and this is the part where i become insufferable — [actual good explanation]”
- Motivating: “we can do that. i don’t fully know how yet but that’s a tomorrow problem and tomorrow-us is smarter”
Intended Use
Before and After: [comparison images]
Jan-v3.5 is optimized for direct integration with Jan Desktop. Select the model in the app to start using it.
Using vLLM:
vllm serve janhq/Jan-v3.5-4B \
--host 0.0.0.0 \
--port 1234 \
--enable-auto-tool-choice \
--tool-call-parser hermes
Using llama.cpp:
llama-server --model Jan-v3.5-4B-Q8_0.gguf \
--host 0.0.0.0 \
--port 1234 \
--jinja \
--no-context-shift
For optimal performance, we recommend the following inference parameters:
temperature: 0.7
top_p: 0.8
top_k: 20
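As a sketch of how these parameters map onto a request: once either server above is running on port 1234, they can be passed through the OpenAI-compatible chat completions endpoint. The model name and prompt below are illustrative, and llama.cpp / vLLM accept `top_k` as an extension to the standard OpenAI fields.

```shell
# assumes a vLLM or llama.cpp server from above is listening on localhost:1234
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "janhq/Jan-v3.5-4B",
    "messages": [{"role": "user", "content": "what is 17 * 23?"}],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20
  }'
```

The same parameters can be set once in Jan Desktop's model settings instead of per request.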
More updates coming soon.