AGI-0 Labs' Reflection Model : A model created by agi0labs based on mattshumer/Reflection-Llama-3.1-70B, offering similar results in language understanding and generation tasks.

Tools 8B

835 Pulls Updated 11 days ago

8590a2e45057 · 4.7GB

model
llama
·
8.03B
·
Q4_0
params
{"stop":["<|start_header_id|>","<|end_header_id|>","<|eot_id|>"]}
template
{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}

{{ .System }}
{{- end }}
{{- if .Tools }}

You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original user question.
{{- end }}<|eot_id|>
{{- end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.

{{ $.Tools }}
{{- end }}

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}

{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}

{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>

{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}
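The template's tool-calling path can be approximated in plain Python to show what the final prompt looks like. This is a simplified sketch, not the Go template engine Ollama actually runs; `render_tool_prompt` and the `get_weather` tool are illustrative names.

```python
import json

def render_tool_prompt(system, tools, user_msg):
    # Simplified rendering of the template above for the common case of
    # one user message with tools attached (illustrative only).
    parts = []
    parts.append("<|start_header_id|>system<|end_header_id|>\n")
    parts.append(f"{system}\n")
    parts.append("You are a helpful assistant with tool calling "
                 "capabilities. When you receive a tool call response, "
                 "use the output to format an answer to the original "
                 "user question.<|eot_id|>")
    parts.append("<|start_header_id|>user<|end_header_id|>\n")
    parts.append("Given the following functions, please respond with a "
                 "JSON for a function call with its proper arguments that "
                 "best answers the given prompt. Respond in the format "
                 '{"name": function name, "parameters": dictionary of '
                 "argument name and its value}. Do not use variables.\n")
    parts.append(json.dumps(tools) + "\n")
    parts.append(f"{user_msg}<|eot_id|>")
    # Prompt ends with an open assistant header so the model completes it.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n")
    return "".join(parts)

tools = [{"name": "get_weather",
          "parameters": {"city": {"type": "string"}}}]
prompt = render_tool_prompt("You are concise.", tools, "Weather in Rome?")
print(prompt.count("<|eot_id|>"))  # 2: one after system, one after user

# The model then replies with a single JSON object, which the caller parses:
raw = '{"name": "get_weather", "parameters": {"city": "Rome"}}'
call = json.loads(raw)
print(call["name"])  # get_weather
```

The tool result is then fed back under the `ipython` role, and the model formats the final answer.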

Readme

AGI-0 Labs’ Reflection Model (Created by agi0labs)

This model was created more than 22 days ago (see the commit date) using a similar approach to mattshumer/Reflection-Llama-3.1-70B.

Due to limited resources, I couldn’t run benchmarks, so I set the project aside. However, after the release of Reflection-Llama-3.1-70B, I decided to publish it anyway, since it produces very similar results (see the example outputs below).

This research lab is run by one person, and the model was trained for free on a Colab T4 GPU. If you’d like to support the project with donations or GPUs, feel free to contact me in the community section.

Also, if you’re able to run benchmarks or other tests, please share the results.

I will release the dataset soon.

X: https://x.com/agi0labs

Consider sharing this model if you find it interesting.

Examples (the model was 8-bit quantized for these examples):
Test 1:
Prompt:
How many r in raspberry, what is the number of letters in the capital of italy, which number is bigger 9.9099 or 9.99

Response:
