Experimental fine-tune of Mistral 7B for SWIFT MT103 messages
tools
4 Pulls · Updated 2 months ago
420ac9172efa · 7.7GB
model · arch llama · parameters 7.25B · quantization Q8_0 · 7.7GB
params · 30B
{
  "stop": [
    "[INST]",
    "[/INST]"
  ]
}
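The stop sequences above match Mistral's [INST]/[/INST] instruction delimiters, so generation halts before the model begins echoing a new instruction turn. They ship with the model, but the same values can also be supplied per request through the options field of Ollama's REST API. Below is a minimal sketch, assuming Ollama is serving on its default port; the tag "mt103-mistral" is a placeholder for whatever name the model was pulled under.

# Minimal sketch: pass the same stop sequences per request via /api/generate.
# "mt103-mistral" is a placeholder model tag, not taken from this model card.
import json
import urllib.request

payload = {
    "model": "mt103-mistral",          # placeholder tag
    "prompt": "What does field 32A contain in an MT103?",
    "stream": False,
    "options": {
        "stop": ["[INST]", "[/INST]"]  # same stop sequences as the params layer
    },
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])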
template · 801B
{{- if .Messages }}
{{- range $index, $_ := .Messages }}
{{- if eq .Role "user" }}
{{- if and (eq (l…
license · 11kB
Apache License
Version 2.0, January 2004
Readme
This is an experimental LoRA-tuned Mistral 7B model trained to answer questions about SWIFT MT103 fields.
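A minimal usage sketch, assuming the model has been pulled locally and Ollama is serving on its default port; the tag "mt103-mistral" is again a placeholder for the model's actual name.

# Ask the model an MT103 question through Ollama's /api/chat endpoint.
import json
import urllib.request

payload = {
    "model": "mt103-mistral",  # placeholder; substitute the real tag
    "messages": [
        {"role": "user", "content": "Which MT103 field carries the ordering customer?"}
    ],
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["message"]["content"])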