A specialized local AI model optimized for software engineering, technical documentation, AI/ML workflows, and general problem-solving. Powered by a 20B parameter open model and configured for reliable, developer-focused workflows directly on your machine.
Model: NeuroEquality/neuralquantum-coder:latest
Base model: gpt-oss:20b
These parameters are built in by default. You can override them at runtime via the Ollama API:
- temperature: 0.7
- top_p: 0.9
- top_k: 40
- repeat_penalty: 1.1
- num_ctx: 8192
- num_predict: 2048

NeuralQuantum-Coder is instructed to:
- Deliver clear, actionable, technically accurate solutions
- Include practical examples when useful
- Factor in performance, security, and best practices
- Suggest alternatives when relevant
- Use structured formatting for complex answers
- Be concise yet comprehensive
ollama run NeuroEquality/neuralquantum-coder:latest
Write a unit test for a FastAPI endpoint with pytest and httpx. Explain edge cases.
You can interact with the model via Ollama’s HTTP API.
Generate endpoint
curl http://localhost:11434/api/generate \
-H 'Content-Type: application/json' \
-d '{
"model": "NeuroEquality/neuralquantum-coder:latest",
"prompt": "Create a Go HTTP server with graceful shutdown.",
"options": {"temperature": 0.7, "num_ctx": 8192}
}'
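The same request can be made from code. Below is a minimal Python sketch using the requests library (any HTTP client works); it reuses the model name and prompt from the curl example above and sets "stream": false so the full response arrives as a single JSON object rather than a stream of partial chunks.

import requests

# Non-streaming call to the generate endpoint; the generated text
# comes back in the "response" field of the JSON body.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "NeuroEquality/neuralquantum-coder:latest",
        "prompt": "Create a Go HTTP server with graceful shutdown.",
        "options": {"temperature": 0.7, "num_ctx": 8192},
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])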
Chat endpoint
curl http://localhost:11434/api/chat \
-H 'Content-Type: application/json' \
-d '{
"model": "NeuroEquality/neuralquantum-coder:latest",
"messages": [
{"role": "system", "content": "You are a helpful software engineering assistant."},
{"role": "user", "content": "Propose a microservice layout with observability."}
],
"options": {"top_p": 0.9, "top_k": 40}
}'
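The chat endpoint is stateless: the client keeps the conversation history and sends the full messages list on every turn. A minimal Python sketch along the same lines as above (the follow-up handling is illustrative, not part of the model card):

import requests

# Conversation state lives client-side: send the whole message list each turn.
messages = [
    {"role": "system", "content": "You are a helpful software engineering assistant."},
    {"role": "user", "content": "Propose a microservice layout with observability."},
]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "NeuroEquality/neuralquantum-coder:latest",
        "messages": messages,
        "options": {"top_p": 0.9, "top_k": 40},
        "stream": False,  # single JSON object; the reply is under "message"
    },
    timeout=300,
)
resp.raise_for_status()
assistant_turn = resp.json()["message"]
print(assistant_turn["content"])

# Append the assistant reply (and your next user message) before the next
# request so the model sees the full conversation.
messages.append(assistant_turn)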
Any default can be overridden using the options field. Examples:
- Increase context window: { "num_ctx": 16384 }
- Constrain output length: { "num_predict": 512 }
- Explore creative outputs: { "temperature": 0.9 }
Note: Effective maximum context depends on the base model and system resources.
You can create or adjust the model locally with a Modelfile:
# In a directory containing a Modelfile
ollama create neuralquantum-coder -f Modelfile
# Run by name
ollama run neuralquantum-coder
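A Modelfile along the following lines reproduces the defaults listed above. Treat it as a sketch, not the published Modelfile: the SYSTEM text is paraphrased from the behaviour description on this page.

# Modelfile (sketch)
FROM gpt-oss:20b

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER repeat_penalty 1.1
PARAMETER num_ctx 8192
PARAMETER num_predict 2048

SYSTEM """
You are NeuralQuantum-Coder, a software engineering assistant. Deliver clear,
actionable, technically accurate solutions; include practical examples when
useful; factor in performance, security, and best practices; suggest
alternatives when relevant; use structured formatting for complex answers;
and be concise yet comprehensive.
"""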