Neutrino-Instruct is a 7B parameter instruction-tuned LLM developed by Fardeen NB.
It is designed for conversational AI, multi-step reasoning, and instruction-following tasks, fine-tuned to maintain coherent and contextual dialogue across multiple turns.
Neutrino-Instruct is distributed in GGUF format and can be run with llama.cpp and Ollama.

# Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make
# Run a single prompt
./main -m ./neutrino-instruct.gguf -p "Hello, who are you?"
# Run in interactive mode
./main -m ./neutrino-instruct.gguf -i -p "Let's chat."
# Control output length
./main -m ./neutrino-instruct.gguf -n 256 -p "Write a poem about stars."
# Change creativity (temperature)
./main -m ./neutrino-instruct.gguf --temp 0.7 -p "Explain quantum computing simply."
# Enable GPU acceleration (if compiled with CUDA/Metal)
./main -m ./neutrino-instruct.gguf --gpu-layers 50 -p "Summarize this article."
# Run with Ollama
ollama run fardeen0424/neutrino
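Once the model is pulled, a running Ollama server also answers requests over its local HTTP API on port 11434. A minimal sketch using only the Python standard library (the helper names are illustrative; the model name matches the `ollama run` command above):

```python
import json
import urllib.request

def build_generate_request(prompt, model="fardeen0424/neutrino", stream=False):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(prompt, host="http://localhost:11434"):
    """Send a generation request to a locally running Ollama server."""
    body = json.dumps(build_generate_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, Ollama returns one JSON object whose
        # "response" field holds the full generated text.
        return json.loads(resp.read())["response"]

# Requires a running Ollama server:
# print(generate("Hello, who are you?"))
```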
Run with llama-cpp-python:

from llama_cpp import Llama
# Load the Neutrino-Instruct model
llm = Llama(model_path="./neutrino-instruct.gguf")
# Run inference
response = llm("Who are you?")
print(response["choices"][0]["text"])
# Stream output tokens
for token in llm("Tell me a story about Neutrino:", stream=True):
print(token["choices"][0]["text"], end="", flush=True)
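Beyond plain completion calls, llama-cpp-python also exposes an OpenAI-style chat interface (`create_chat_completion`). A hedged sketch of the request shape, with the actual model call left commented since it needs the GGUF file locally; parameter values here are illustrative, not tuned recommendations:

```python
def build_messages(user_prompt,
                   system_prompt="You are Neutrino, a helpful assistant."):
    """Assemble an OpenAI-style chat history for create_chat_completion()."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Sampling settings mirroring the llama.cpp flags above (-n and --temp).
SAMPLING = {"max_tokens": 256, "temperature": 0.7}

# With the GGUF file downloaded, the call looks like:
# from llama_cpp import Llama
# llm = Llama(model_path="./neutrino-instruct.gguf", n_ctx=2048)
# out = llm.create_chat_completion(
#     messages=build_messages("Who are you?"), **SAMPLING)
# print(out["choices"][0]["message"]["content"])
```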
CPU-only: 32–64 GB RAM recommended (runs on modern laptops, slower inference).
GPU acceleration: offloading layers with --gpu-layers speeds up inference substantially (requires a build with CUDA or Metal support).
⚠️ Out of Scope: Use in critical decision-making, legal, or medical contexts.
Fardeen NB
If you use Neutrino in your research or projects, please cite:
@misc{fardeennb2025neutrino,
title = {Neutrino-Instruct: A 7B Instruction-Tuned Conversational Model},
author = {Fardeen NB},
year = {2025},
howpublished = {Hugging Face},
url = {https://huggingface.co/neuralcrew/neutrino-instruct}
}