| Tag | Size | Context | Input | Updated |
|---|---|---|---|---|
| `thau:latest` | 2.2GB | 2K | Text | 3 weeks ago |
| `thau:contable` | 2.2GB | 2K | Text | 3 weeks ago |
| `thau:reasoning` | 2.2GB | 2K | Text | 3 weeks ago |
| `thau:agi-v2` | 2.2GB | 2K | Text | 3 weeks ago |
| `thau:agi-v3` | 2.2GB | 2K | Text | 3 weeks ago |
A lightweight, self-learning language model with tool calling capabilities.
Dedicated to Thomas and Aurora - watching you learn and grow inspired this project
THAU was born from a simple question: “Can an AI learn progressively, like a child does?”
As a developer and father, I (Luis Perez) was fascinated by how my children Thomas and Aurora learn - starting with basic concepts and gradually building more complex understanding. This inspired me to create a framework that mimics this cognitive progression in AI.
This entire project was developed in collaboration with Claude (Anthropic’s AI assistant). From architecture decisions to code implementation, debugging, and documentation - Claude has been my pair programming partner throughout this journey. It’s a testament to what human-AI collaboration can achieve.
THAU (Thinking, Helpful, Autonomous, Understanding) is a language model built on TinyLlama-1.1B, fine-tuned using a unique “cognitive age” progression system. It supports native tool calling and runs efficiently on consumer hardware.
```shell
# Install
ollama pull luepow/thau

# Run
ollama run luepow/thau

# Run with a prompt
ollama run luepow/thau "Hello, what can you do?"
```
THAU supports structured tool calling:
```
<tool_call>{"name": "tool_name", "arguments": {"param": "value"}}</tool_call>
```
| Tool | Description |
|---|---|
| `get_current_time` | Get current date and time |
| `web_search` | Search the web |
| `execute_python` | Run Python code |
| `generate_image` | Generate images from prompts |
```
User: What time is it?
THAU: <tool_call>{"name": "get_current_time", "arguments": {}}</tool_call>
```
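The caller is responsible for extracting these tool calls from the model's output and dispatching them. A minimal parsing sketch (the `<tool_call>` tag format is from above; the helper name and regex are illustrative, not part of THAU itself):

```python
import json
import re

# Matches the <tool_call>{...}</tool_call> format THAU emits.
TOOL_CALL_RE = re.compile(r"<tool_call>(\{.*?\})</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    """Extract every tool call from a model response as parsed JSON."""
    return [json.loads(m) for m in TOOL_CALL_RE.findall(text)]

response = 'THAU: <tool_call>{"name": "get_current_time", "arguments": {}}</tool_call>'
print(parse_tool_calls(response))  # [{'name': 'get_current_time', 'arguments': {}}]
```

After parsing, your application would run the named tool and feed the result back to the model as a follow-up message.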
```shell
curl http://localhost:11434/api/generate -d '{
  "model": "luepow/thau",
  "prompt": "Explain what machine learning is",
  "stream": false
}'
```
```shell
curl http://localhost:11434/api/chat -d '{
  "model": "luepow/thau",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}'
```
```python
import requests

response = requests.post('http://localhost:11434/api/generate', json={
    'model': 'luepow/thau',
    'prompt': 'Hello, how are you?',
    'stream': False
})
print(response.json()['response'])
```
| Parameter | Default | Description |
|---|---|---|
| `temperature` | 0.7 | Randomness (0-2) |
| `top_p` | 0.9 | Nucleus sampling |
| `top_k` | 40 | Top-k sampling |
| `repeat_penalty` | 1.1 | Repetition penalty |
| `num_ctx` | 2048 | Context window |
Parameters can be adjusted interactively with `/set parameter` inside a run session:

```shell
ollama run luepow/thau
>>> /set parameter temperature 0.5
>>> /set parameter num_ctx 4096
```
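The same parameters can be set per-request through the standard `options` field of Ollama's API. A minimal sketch of building such a request (sending it assumes an Ollama server running at `localhost:11434`):

```python
import json

# Generate request that overrides sampling parameters via "options".
payload = {
    "model": "luepow/thau",
    "prompt": "Explain what machine learning is",
    "stream": False,
    "options": {
        "temperature": 0.5,
        "top_p": 0.9,
        "num_ctx": 4096,
    },
}

body = json.dumps(payload)
print(body)

# To actually send it (requires a running server):
# import requests
# r = requests.post("http://localhost:11434/api/generate", data=body)
# print(r.json()["response"])
```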
THAU uses progressive “cognitive age” training:
| Age | Focus |
|---|---|
| 0-3 | Basic language, patterns |
| 4-6 | Grammar, vocabulary |
| 7-9 | Reasoning, logic |
| 10-12 | Programming, advanced topics |
| 13-15 | Specialization, tool use |
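The staged curriculum above can be pictured as a schedule that selects the training focus by cognitive age. The mapping below is illustrative only; the actual datasets and stage boundaries are not published here:

```python
# Illustrative curriculum schedule; stage boundaries mirror the table above.
STAGES = [
    (range(0, 4), "basic language, patterns"),
    (range(4, 7), "grammar, vocabulary"),
    (range(7, 10), "reasoning, logic"),
    (range(10, 13), "programming, advanced topics"),
    (range(13, 16), "specialization, tool use"),
]

def focus_for_age(age: int) -> str:
    """Return the training focus for a given cognitive age."""
    for ages, focus in STAGES:
        if age in ages:
            return focus
    raise ValueError(f"no stage defined for age {age}")

print(focus_for_age(8))  # reasoning, logic
```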
| Component | Value |
|---|---|
| Base Model | TinyLlama-1.1B-Chat |
| Parameters | ~1.1B |
| Hidden Size | 2048 |
| Layers | 22 |
| Vocabulary | 32,000 |
| Format | GGUF F16 |
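The ~2.2GB file size listed for each tag follows directly from the spec table: roughly 1.1B parameters stored at F16 precision (2 bytes per weight). A quick check:

```python
params = 1.1e9          # ~1.1B parameters
bytes_per_param = 2     # GGUF F16 stores each weight in 16 bits
size_gb = params * bytes_per_param / 1e9
print(f"{size_gb:.1f} GB")  # 2.2 GB
```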
Luis Perez - Software developer, father, and AI enthusiast.
If you find THAU interesting or useful, consider supporting its development. Your support helps cover compute costs and keeps this project alive!
Apache 2.0
THAU - Built with curiosity, love, and a lot of help from Claude
“The best way to learn is to build something”