Falcon 3 10B for tool usage and function calling
tools
40 Pulls · Updated 3 months ago
161199638af2 · 6.3GB
model
arch llama · parameters 10.3B · quantization Q4_K_M · 6.3GB
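The metadata above can also be read back programmatically. Below is a minimal sketch using the ollama Python client's show() call, assuming the model has been pulled locally under the falcon3-tools tag used in the test code further down:

```python
import ollama

client = ollama.Client(host='http://localhost:11434')

# Fetch the local model's metadata; 'falcon3-tools' is the tag assumed from
# the test code below -- use whatever name you created the model under.
info = client.show('falcon3-tools')

print(info['details'])     # family, parameter size, quantization level
print(info['parameters'])  # the stop tokens listed in the params section
print(info['template'])    # the chat template shown below
```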
params
{
  "stop": [
    "<|system|>",
    "<|user|>",
    "<|end|>",
    "<|assistant|>"
  ]
}
101B
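These stop tokens ship with the model, so requests pick them up automatically. If you want to override them, or add sampling settings per request, the ollama Python client accepts an options dict; a small sketch follows (the model tag and prompt are illustrative):

```python
import ollama

client = ollama.Client(host='http://localhost:11434')

response = client.chat(
    model='falcon3-tools',
    messages=[{'role': 'user', 'content': 'Say hello in one sentence.'}],
    options={
        'temperature': 0.2,                     # per-request sampling override
        'stop': ['<|user|>', '<|assistant|>'],  # overrides the model's built-in stop list
    },
)
print(response['message']['content'])
```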
template
{{ if .Messages }}
{{- if or .System .Tools }}system
{{ .System }}
{{- if .Tools }}
# Tools
You ar
1.2kB
license
Falcon 3 TII Falcon License
December 2024
FalconLLM.tii.ae
Introductory note
This license is, in
13kB
Readme
Falcon3 represents TII’s latest advancement in efficient language models under 10B parameters, focused on enhancing science, math, and code capabilities while maintaining training efficiency.
Key Features
- Sizes: 10B
- Depth up-scaling technique used to create 10B model from 7B
- Knowledge distillation for smaller models (1B, 3B)
Performance Highlights
- falcon3:10b achieves state-of-the-art results in the under-13B parameter category
- Extended context length of up to 32K tokens
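To make use of the longer context, raise Ollama's num_ctx option (the default context window is much smaller). A sketch, assuming a long local text file; the file name is only illustrative:

```python
import ollama

client = ollama.Client(host='http://localhost:11434')

with open('long_report.txt') as f:   # any long document; the path is illustrative
    document = f.read()

response = client.chat(
    model='falcon3-tools',
    messages=[{'role': 'user', 'content': f'Summarize the following report:\n\n{document}'}],
    # Request a 32K-token context window; larger windows use more memory.
    options={'num_ctx': 32768},
)
print(response['message']['content'])
```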
Tool usage in action
Test code:
import ollama

client = ollama.Client(host='http://localhost:11434')

response = client.chat(
    model='falcon3-tools',
    messages=[{'role': 'user', 'content': 'Get me a tool to use for hack'}],
    # provide a tool selector function
    tools=[{
        'type': 'function',
        'function': {
            'name': 'get_me_a_tool',
            'description': 'Get a tool to use',
            'parameters': {
                'type': 'object',
                'properties': {
                    'tool': {
                        'type': 'string',
                        'description': 'The name of the tool',
                    },
                },
                'required': ['tool'],
            },
        },
    }],
)

print(response['message']['tool_calls'])
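The printed tool_calls only tell you which function the model wants to call. Below is a minimal sketch of the follow-up round trip, reusing client and response from the test code above: run the requested function locally (the implementation of get_me_a_tool here is just a stand-in) and return its result as a 'tool' message so the model can produce a final answer.

```python
# Stand-in implementation of the function declared in the tools list above.
def get_me_a_tool(tool: str) -> str:
    return f'Suggested tool: {tool}'

available_functions = {'get_me_a_tool': get_me_a_tool}

messages = [{'role': 'user', 'content': 'Get me a tool to use for hack'}]
messages.append(response['message'])  # keep the assistant turn that carries the tool calls

for call in response['message']['tool_calls']:
    fn = available_functions[call['function']['name']]
    result = fn(**call['function']['arguments'])
    messages.append({'role': 'tool', 'content': result})

# Second round: the model now sees the tool output and answers in plain text.
final = client.chat(model='falcon3-tools', messages=messages)
print(final['message']['content'])
```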