44 · 2 days ago

An attempt at using the reasoning version of ministral-3:8b for coding tasks.

ollama run pstdio/microcoder:8b

Details

74bf907437c2 · 18GB
mistral3 · 8.49B · BF16
clip · 428M · BF16
License: Copyright 2024 Mistral AI. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
Template: {{- $hasSystemPrompt := false }} {{- range $index, $_ := .Messages }} {{- if eq .Role "system" }}{{ … }}
System: You are Microcoder, a modified version of ministral-8b, a Large Language Model (LLM) created by Mistral AI.
Parameters: { "num_ctx": 16384, "temperature": 0.5 }

Readme

Microcoder

An attempt at using the ministral-3:8b model for coding tasks.

Defaulting to the BF16 version due to reported issues with quantization.

Increased token input length (uses more memory). By default, Ollama caps the context at 4k tokens, which is too small for tool-use / coding-agent setups.

ollama run pstdio/microcoder:8b
/set parameter num_ctx 16384

(The /set command is entered at the interactive prompt after the model loads.)
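A persistent alternative to typing /set each session is to bake the parameter into a derived model with a Modelfile. This is a sketch; the derived model name microcoder-16k is just an example:

```
FROM pstdio/microcoder:8b
PARAMETER num_ctx 16384
PARAMETER temperature 0.5
```

Then create and run the derived model:

ollama create microcoder-16k -f Modelfile
ollama run microcoder-16k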

To add the model to opencode, run:

ollama launch opencode --config
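For agent or scripted setups, the same num_ctx override can be passed per request through Ollama's local REST API instead of the interactive /set command. A minimal sketch, assuming a default Ollama server on localhost:11434; the helper names here are illustrative, not part of any tool:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local Ollama endpoint


def build_chat_request(prompt, model="pstdio/microcoder:8b", num_ctx=16384):
    """Build the JSON body for Ollama's /api/chat endpoint.

    Passing num_ctx inside "options" overrides the 4k default for this
    request, so no interactive /set step is needed.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {"num_ctx": num_ctx, "temperature": 0.5},
        "stream": False,
    }


def chat(prompt):
    # Send the request to a locally running Ollama server and
    # return the assistant message text.
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


# Usage (requires a running server): chat("Reverse a string in Python.")
```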