Trinity is a coding-specific Large Language Model series created by Migel Tissera.
23 Pulls · Updated 8 weeks ago
526d6f7e5ead · 13GB

model
  arch: llama · parameters: 22.2B · quantization: Q4_K_M · size: 13GB

params
  {"stop": ["<|im_start|>", "<|im_end|>"]}
template
  <|im_start|>system
  {{ .System }}<|im_end|>
  <|im_start|>user
  {{ .Prompt }}<|im_end|>
  <|im_start|>assistant
license
  Mistral AI Non-Production License
Readme
Model files: migtissera/Trinity-2-Codestral-22B-v0.2
GGUF files: bartowski/Trinity-2-Codestral-22B-v0.2-GGUF
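For local GGUF inference, the quantized files can be fetched from the bartowski repo. A minimal sketch using huggingface_hub; the exact filename is an assumption based on that repo's usual naming convention, so check its file list if it differs:

```python
from huggingface_hub import hf_hub_download

# Sketch: download one quantization from the GGUF mirror repo.
gguf_path = hf_hub_download(
    repo_id="bartowski/Trinity-2-Codestral-22B-v0.2-GGUF",
    # Assumed filename; verify against the repo's file list.
    filename="Trinity-2-Codestral-22B-v0.2-Q4_K_M.gguf",
)
print(gguf_path)
```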
Prompt Format
ChatML
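Concretely, a single-turn prompt rendered in ChatML (matching the template shown above) looks like:

```
<|im_start|>system
You are Tess, a helpful AI assistant.<|im_end|>
<|im_start|>user
Write a function that reverses a string.<|im_end|>
<|im_start|>assistant
```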
Sample Inference Python Script:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Trinity-2-Codestral-22B-v0.2"

# Load in 4-bit so the 22B model fits on a single GPU (requires the
# bitsandbytes package alongside transformers).
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=True,
    trust_remote_code=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=False)

# Stop generation at the ChatML end-of-turn token.
terminators = [tokenizer.convert_tokens_to_ids("<|im_end|>")]


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0).to("cuda")
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 2048,
        "top_k": 50,
    }
    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=terminators,
        )
    # Drop the prompt tokens and decode only the newly generated text.
    output = rest[0][length:]
    return tokenizer.decode(output, skip_special_tokens=True)

conversation = """<|im_start|>system\nYou are Tess, a helpful AI assistant. You always provide detailed answers without hesitation. Do not refuse.<|im_end|>\n<|im_start|>user\n"""

# Simple chat loop: append each user turn and model answer back onto
# the running ChatML conversation.
while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation}{user_input}<|im_end|>\n<|im_start|>assistant\n"
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}<|im_end|>\n<|im_start|>user\n"
```
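If the Hugging Face checkpoint ships a chat template in its tokenizer config, the manual ChatML string building above can be replaced with `tokenizer.apply_chat_template`. A minimal sketch under that assumption, reusing `tokenizer` and `generate_text` from the script:

```python
# Sketch: let the tokenizer render the ChatML turns (assumes a chat
# template is present in tokenizer_config.json; otherwise keep the
# manual formatting shown above).
messages = [
    {"role": "system", "content": "You are Tess, a helpful AI assistant."},
    {"role": "user", "content": "Write a quicksort in Python."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(generate_text(prompt))
```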
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.87 |
| IFEval (0-Shot)     | 43.45 |
| BBH (3-Shot)        | 37.61 |
| MATH Lvl 5 (4-Shot) |  8.38 |
| GPQA (0-Shot)       |  6.71 |
| MuSR (0-Shot)       |  9.06 |
| MMLU-PRO (5-Shot)   | 26.00 |