A text-to-SQL LLM with an EX score of 63.33 on the BIRD leaderboard (https://bird-bench.github.io/)
Introduction
OneSQL is a text-to-SQL model based on Qwen2.5-Coder. Its original 32B version has an EX score of 63.33 on the BIRD leaderboard.
Performance
Below are our self-evaluated EX scores for each parameter/quantization configuration.
| Quantization | 7B | 32B |
|---|---|---|
| Q2_K | 29.79 | 47.78 |
| Q3_K_S | 36.31 | 50.26 |
| Q3_K_M | 39.24 | 51.50 |
| Q3_K_L | 40.14 | 51.24 |
| Q4_1 | 39.06 | 46.54 |
| Q4_K_S | 42.69 | 52.47 |
| Q4_K_M | 43.95 | 53.79 |
| Q5_0 | 43.84 | 50.23 |
| Q5_1 | 41.00 | 48.36 |
| Q5_K_S | 42.20 | 51.93 |
| Q5_K_M | 42.07 | 50.66 |
| Q6_K | 41.68 | 52.89 |
| Q8_0 | 41.09 | 50.33 |
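
Each configuration is published as a separate model tag. The sketch below pulls the 32B Q4_K_M build used in the quick start; tag names for the other sizes and quantizations are assumed to follow the same pattern and may differ.

```shell
# Pull the 32B Q4_K_M build referenced in the quick start below.
# Tag names for other parameter/quantization combinations are an assumption.
ollama pull onekq-ai/OneSQL-v0.1-Qwen:32B-Q4_K_M
```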
Quick start
To use this model, craft your prompt so that it starts with your database schema as CREATE TABLE statements, followed by your natural language question preceded by a SQL comment marker (--). Make sure your prompt ends with SELECT so that the model completes the query for you. There is no need to set other parameters such as temperature or a max token limit.
PROMPT="CREATE TABLE students (
id INTEGER PRIMARY KEY,
name TEXT,
age INTEGER,
grade TEXT
);
-- Find the three youngest students
SELECT "
ollama run onekq-ai/OneSQL-v0.1-Qwen:32B-Q4_K_M "$PROMPT"
The model responds with the rest of the SQL query, without the leading SELECT:

```sql
* FROM students ORDER BY age ASC LIMIT 3
```
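
If you prefer calling the model programmatically, the same prompt can be sent through Ollama's HTTP API. The sketch below assumes a local Ollama server at the default address and reuses the hypothetical students schema from above.

```shell
# Minimal sketch: query the model through the local Ollama REST API.
# Assumes the server is running at http://localhost:11434; adjust as needed.
curl -s http://localhost:11434/api/generate -d '{
  "model": "onekq-ai/OneSQL-v0.1-Qwen:32B-Q4_K_M",
  "prompt": "CREATE TABLE students (\n  id INTEGER PRIMARY KEY,\n  name TEXT,\n  age INTEGER,\n  grade TEXT\n);\n-- Find the three youngest students\nSELECT ",
  "stream": false
}'
```

The completed query is returned in the `response` field of the JSON reply; prepend SELECT before running it against your database.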
Caveats
- The performance drop relative to the original model is due to quantization itself and to the lack of beam-search support in the llama.cpp framework. Use at your own discretion.
- The Q4_0 quantization suffers from repetitive output tokens and is therefore not recommended.