NeuralNet is a pioneering AI solutions provider that empowers businesses to harness the power of artificial intelligence.
All models have been quantized following the instructions provided by llama.cpp. The process looks like this:
# obtain the official LLaMA model weights and place them in ./models
ls ./models
llama-2-7b tokenizer_checklist.chk tokenizer.model
# [Optional] for models using BPE tokenizers
ls ./models
<folder containing weights and tokenizer json> vocab.json
# [Optional] for PyTorch .bin models like Mistral-7B
ls ./models
<folder containing weights and tokenizer json>
# install Python dependencies
python3 -m pip install -r requirements.txt
# convert the model to ggml FP16 format
python3 convert-hf-to-gguf.py models/mymodel/
# quantize the model to 4-bits (using Q4_K_M method)
./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
# update the gguf filetype to current version if older version is now unsupported
./llama-quantize ./models/mymodel/ggml-model-Q4_K_M.gguf ./models/mymodel/ggml-model-Q4_K_M-v2.gguf COPY
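As a quick sanity check, you can run the quantized file directly with llama.cpp. A minimal sketch, assuming the paths from the steps above; the binary is named llama-cli in current builds (older builds shipped it as main):

# run a short completion against the freshly quantized model
./llama-cli -m ./models/mymodel/ggml-model-Q4_K_M.gguf -p "def quicksort(arr):" -n 128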
Original model: https://huggingface.co/infly/OpenCoder-8B-Instruct
<|im_start|>system
System message goes here<|im_end|>
<|im_start|>user
User message goes here<|im_end|>
<|im_start|>assistant
Assistant response goes here<|im_end|>
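When the model is served through Ollama, you do not build this ChatML scaffolding by hand; the server renders it from the template below. A minimal sketch of a chat request, assuming a local Ollama instance on the default port 11434:

# send a chat request; Ollama applies the prompt template for you
curl http://localhost:11434/api/chat -d '{
  "model": "NeuralNet/opencoder",
  "messages": [
    {"role": "system", "content": "You are a concise coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."}
  ],
  "stream": false
}'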
{{- if .Suffix }}<|im_start|><|fim_prefix|>{{ .Prompt }}<|fim_suffix|>{{ .Suffix }}<|fim_middle|>
{{- else if .Messages }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
<|im_start|>{{ .Role }}
{{ .Content }}
{{- if not $last }}<|im_end|>
{{ else if (ne .Role "assistant") }}<|im_end|>
<|im_start|>assistant
{{ end }}
{{- end }}
{{- end }}
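The .Suffix branch at the top of this template enables fill-in-the-middle (FIM) completions, where the model fills the gap between a code prefix and suffix. A hedged sketch using Ollama's generate endpoint, whose suffix field triggers that branch (support depends on your Ollama version):

# complete the function body between the given prefix and suffix
curl http://localhost:11434/api/generate -d '{
  "model": "NeuralNet/opencoder",
  "prompt": "def add(a, b):\n",
  "suffix": "\nprint(add(2, 3))",
  "stream": false
}'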
Filename | Quant type | File Size | Description
---|---|---|---
OpenCoder-8B-Instruct-q4_K_M.gguf | Q4_K_M | 4.92GB | Good quality, recommended.
ollama run NeuralNet/opencoder
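You can also pass the prompt as an argument for a one-shot, non-interactive answer:

ollama run NeuralNet/opencoder "Write a binary search function in Python"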
Create a plain-text file named Modelfile (no extension needed):
FROM NeuralNet/opencoder
# sets the system prompt
SYSTEM "You are an AI assistant expert in code generation created by NeuralNet, a company specialized in AI solutions. Your answers are clear and concise."
# sets the temperature to 0.5 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.5
# sets the context window to 8192 tokens; this controls how many tokens the LLM can use as context to generate the next token
PARAMETER num_ctx 8192
# sets the maximum number of tokens to generate
PARAMETER num_predict 2048
# stop sequences that end generation
PARAMETER stop "### System:"
PARAMETER stop "### User:"
PARAMETER stop "### Assistant:"
Then, with Ollama already installed, run:
ollama create opencoder -f Modelfile
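The customized model is then available under the name you gave it:

ollama run opencoder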
Ensure you have the necessary CLI tool installed by running:
pip install -U "huggingface_hub[cli]"
To download a specific model file, use the following command:
huggingface-cli download NeuralNet-Hub/OpenCoder-8B-Instruct-GGUF --include "opencoder-8b-Q4_K_M.gguf" --local-dir ./
This command downloads the specified model file and places it in the current directory (./).
For models exceeding 50GB, which are typically split into multiple files for easier download and management:
huggingface-cli download NeuralNet-Hub/OpenCoder-8B-Instruct-GGUF --include "opencoder-8b-Q8_0.gguf/*" --local-dir opencoder-8b-Q8_0
This command downloads all files in the specified directory and places them into the chosen local folder (opencoder-8b-Q8_0). You can choose to download everything in place or specify a new location for the downloaded files.
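A downloaded GGUF can also be imported straight into Ollama. A minimal sketch, assuming the single-file download from above (adjust the filename to the quant you actually fetched):

# register the local GGUF with Ollama and start chatting
echo 'FROM ./opencoder-8b-Q4_K_M.gguf' > Modelfile.local
ollama create opencoder-local -f Modelfile.local
ollama run opencoder-local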
A comprehensive analysis of the different quant types, with performance charts, is provided by Artefact2. It can help you make an informed decision on which file best suits your system and performance needs.
Website: https://neuralnet.solutions
Email: info[at]neuralnet.solutions