8B



FROM quantized.bin
# Set prompt template with system, user and assistant roles
TEMPLATE """{{ .System }}<|end_of_turn|>LLAMA3 Correct User: {{ .Prompt }}<|end_of_turn|>LLAMA3 Correct Assistant:"""
# sets temperature to 0 for deterministic, reproducible output
PARAMETER temperature 0
# sets the context window to 16384 tokens; this controls how many tokens the LLM can use as context when generating the next token
PARAMETER num_ctx 16384
# sets a custom system message to specify the behavior of the chat assistant
SYSTEM You are the best assistant ever.
# stop sequences that end generation when emitted by the model
PARAMETER stop <|endoftext|>
PARAMETER stop <|end_of_turn|>
PARAMETER stop Human:
PARAMETER stop Assistant:
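A minimal sketch of building and running a model from this Modelfile with the Ollama CLI. The model name `my-assistant` is hypothetical, and this assumes the file is saved as `Modelfile` with `quantized.bin` in the same directory:

```shell
# Build a local model from the Modelfile above
# ("my-assistant" is a placeholder name of your choosing)
ollama create my-assistant -f Modelfile

# Chat with it; the TEMPLATE, SYSTEM, and PARAMETER settings
# above are applied automatically to every request
ollama run my-assistant "Hello, who are you?"
```

With `temperature 0`, repeated runs of the same prompt should produce essentially the same output, which is useful for testing the template.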