Llamas Fine-Tuned for Function Calling

latest · 8B · 128 Pulls · Updated 4 months ago

a6ce7af11e58 · 4.9GB

model: arch llama · parameters 8.03B · quantization Q4_K_M · 4.9GB
params: {"stop": ["<|eot_id|>"]}

Readme
Llama3 Callama
This README explains how to call a fine-tuned Large Language Model (LLM) that supports tool calls, using a helper function in Python.
For detailed information, see https://github.com/unclecode/callama.
Prompt Format
To call the LLM, the user should pass the prompt in the following format:
"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are a helpful assistant with access to the following functions. Use them if required -
{
    "name": "send_email",
    "description": "Send an email for the given recipient and message",
    "parameters": {
        "type": "object",
        "properties": {
            "recipient": {
                "type": "string",
                "description": "The email address of the recipient"
            },
            "message": {
                "type": "string",
                "description": "The message to send"
            }
        },
        "required": [
            "recipient",
            "message"
        ]
    }
}
<|start_header_id|>user<|end_header_id|>
Hi, send an email to tom@kidocode.com and ask him to join our weekend party?
<|start_header_id|>assistant<|end_header_id|>
<functioncall> """
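A minimal sketch (not the project's official client) of sending this raw prompt to a locally running Ollama server through its /api/generate endpoint. The model tag "callama" is a placeholder for whatever tag you pulled, and the tool JSON in the system section is abbreviated; raw=True tells Ollama to pass the hand-built prompt through without applying its own template.

```python
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "You are a helpful assistant with access to the following functions. "
    "Use them if required -\n"
    '{"name": "send_email", "description": "Send an email for the given recipient and message"}\n'
    "<|start_header_id|>user<|end_header_id|>\n"
    "Hi, send an email to tom@kidocode.com and ask him to join our weekend party?\n"
    "<|start_header_id|>assistant<|end_header_id|>\n"
    "<functioncall> "
)

payload = {
    "model": "callama",                   # placeholder tag; use the tag you pulled
    "prompt": prompt,
    "raw": True,                          # bypass Ollama's built-in prompt template
    "stream": False,
    "options": {"stop": ["<|eot_id|>"]},  # matches the model's params above
}
# import requests
# response = requests.post("http://localhost:11434/api/generate", json=payload)
```

Stopping on `<|eot_id|>` keeps the server from generating past the end of the function call.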
Helper Function
To simplify the process, you can create a helper function like the following:
import json
from typing import Dict, List

from jinja2 import Template

llama3_template = \
    "{% if messages[0]['role'] == 'system' %}"\
    "<|start_header_id|>system<|end_header_id|>\n\n"\
    "{{ messages[0]['content'] }}\n"\
    "{{ tools }}\n"\
    "{% endif %}"\
    "{% for message in messages %}"\
    "{% if message['role'] == 'user' %}"\
    "<|start_header_id|>user<|end_header_id|>\n\n"\
    "{{ message['content'] }}\n"\
    "{% elif message['role'] == 'tool' %}"\
    "<|start_header_id|>assistant<|end_header_id|>\n\n"\
    "<functioncall> {{ message['content'] }}<|eot_id|>\n"\
    "{% elif message['role'] == 'tool_response' %}"\
    "<|start_header_id|>assistant<|end_header_id|>\n\n"\
    "{{ message['content'] }}\n"\
    "{% elif message['role'] == 'assistant' %}"\
    "<|start_header_id|>assistant<|end_header_id|>\n\n"\
    "{{ message['content'] }}<|eot_id|>\n"\
    "{% endif %}"\
    "{% endfor %}"\
    "{% if tool_call %}"\
    "<|start_header_id|>assistant<|end_header_id|>\n\n<functioncall> "\
    "{% endif %}"

def render(messages: List[Dict[str, str]], tools: List[Dict], tool_call: bool = False) -> str:
    # Ensure a system message exists so the template can emit the tool list.
    if messages[0]['role'] != 'system':
        messages.insert(0, {'role': 'system', 'content': ''})
    # Serialize each tool definition as pretty-printed JSON, one per line.
    tools_json = '\n'.join(json.dumps(tool, indent=4) for tool in tools)
    template = Template(llama3_template)
    return template.render(messages=messages, tools=tools_json, tool_call=tool_call)
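Besides the familiar system, user, and assistant roles, the template above has branches for two extra roles: 'tool' (a function call the model emitted) and 'tool_response' (the function's result fed back to the model), both rendered under assistant headers. A hypothetical multi-turn history using them (field values are illustrative, not from the project) might look like:

```python
import json

# Illustrative multi-turn tool-use history; the 'tool' and 'tool_response'
# contents are JSON strings, matching how the template interpolates them.
history = [
    {"role": "user", "content": "Hi, send an email to tom@kidocode.com and ask him to join our weekend party?"},
    {"role": "tool", "content": json.dumps({
        "name": "send_email",
        "arguments": {"recipient": "tom@kidocode.com", "message": "Join our weekend party!"},
    })},
    {"role": "tool_response", "content": json.dumps({"status": "sent"})},
    {"role": "assistant", "content": "I've sent the email to Tom."},
]
```

Rendering such a history with tool_call=False produces a transcript in which the call and its result both appear under assistant headers, ready for the model to generate the final reply.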
Usage
You can use the helper function like this:
messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required -"},
    {"role": "user", "content": "Hi, send an email to tom@kidocode.com and ask him to join our weekend party?"},
]
tools = [
    {
        "name": "send_email",
        "description": "Send an email for the given recipient and message",
        "parameters": {
            "type": "object",
            "properties": {
                "recipient": {
                    "type": "string",
                    "description": "The email address of the recipient"
                },
                "message": {
                    "type": "string",
                    "description": "The message to send"
                }
            },
            "required": [
                "recipient",
                "message"
            ]
        }
    }
]
rendered_string = render(messages, tools, tool_call=True)
The resulting string is the complete prompt to pass to the model.
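Once the model completes, everything after `<functioncall> ` should be a JSON object. The exact output shape depends on the fine-tune; assuming it follows the common `{"name": ..., "arguments": ...}` convention, a minimal parser might look like this (the completion string below is a hypothetical example, not captured model output):

```python
import json

def parse_function_call(completion: str) -> dict:
    # Strip the stop token (if the server returned it) and surrounding
    # whitespace, then decode the remaining JSON payload.
    cleaned = completion.replace("<|eot_id|>", "").strip()
    return json.loads(cleaned)

# Hypothetical completion text for the email example above.
completion = '{"name": "send_email", "arguments": {"recipient": "tom@kidocode.com", "message": "Join our weekend party!"}}<|eot_id|>'
call = parse_function_call(completion)
print(call["name"])                    # send_email
print(call["arguments"]["recipient"])  # tom@kidocode.com
```

The parsed dict can then be dispatched to the real function, and its result appended to the conversation as a 'tool_response' message.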