Occiglot-7B-EU5-Instruct is the instruct version of occiglot-7b-eu5, a generative language model with 7B parameters supporting the five largest EU languages (English, Spanish, French, German, and Italian), trained by the Occiglot Research Collective. It was trained on 160M tokens of additional multilingual and code instructions. Note that the model has not been safety-aligned and might generate problematic outputs.
This is the first release of an ongoing open research project for multilingual language models. If you want to train a model for your own language or are working on evaluations, please contact us or join our Discord server. We are open to collaborations!
The model was trained using the ChatML instruction template. You can use the transformers chat-template feature for interaction. Since generation relies on some randomness, we set a seed for reproducibility:
>>> from transformers import AutoTokenizer, MistralForCausalLM, set_seed
>>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-eu5-instruct")
>>> model = MistralForCausalLM.from_pretrained('occiglot/occiglot-7b-eu5-instruct') # You may want to use bfloat16 and/or move to GPU here
>>> messages = [
>>> {"role": "system", 'content': 'You are a helpful assistant. Please give short and concise answers.'},
>>> {"role": "user", "content": "chi è il primo ministro italiano?"},
>>> ]
>>> tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
>>> set_seed(42)
>>> outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=200)  # send inputs to the model's device
>>> tokenizer.decode(outputs[0][len(tokenized_chat[0]):])
'Il primo ministro italiano è attualmente Giorgia Meloni, presidente di Fratelli d'Italia, un partito politico di estrema destra.'
(English: "The Italian prime minister is currently Giorgia Meloni, president of Fratelli d'Italia, a far-right political party.")
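The comment in the loading step above hints at bfloat16 and GPU placement. A minimal sketch of that variant, assuming a CUDA-capable GPU is available (torch_dtype is a standard transformers from_pretrained argument):

>>> import torch
>>> model = MistralForCausalLM.from_pretrained(
...     "occiglot/occiglot-7b-eu5-instruct",
...     torch_dtype=torch.bfloat16,  # roughly halves memory versus float32
... ).to("cuda")  # place all weights on the GPU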
The training data was split evenly among the five languages based on the total number of tokens; a sketch of what such a token-balanced split means follows the dataset list below. We would like to thank DiscoResearch and Björn Plüster for making their dataset available to us.
English and Code:
- Open-Hermes-2.5
Italian:
- Quora-IT-Baize
- Stackoverflow-IT-Baize
- Camoscio
- OASST-2 (Italian subset)
- Aya-Dataset (Italian subset)
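To illustrate the even, token-based split, here is a small sketch. The per-language corpus sizes are made-up placeholders, not Occiglot's actual numbers; the idea is simply that each language is down-sampled so all contribute the same token budget:

>>> token_counts = {"en": 1.2e9, "de": 0.8e9, "fr": 0.7e9, "es": 0.6e9, "it": 0.5e9}  # hypothetical token counts
>>> budget = min(token_counts.values())  # the smallest corpus caps the per-language budget
>>> sampling_frac = {lang: budget / n for lang, n in token_counts.items()}  # fraction of each corpus to keep
>>> {lang: round(f, 3) for lang, f in sampling_frac.items()}
{'en': 0.417, 'de': 0.625, 'fr': 0.714, 'es': 0.833, 'it': 1.0}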
The tokenizer is unchanged from Mistral-7B-v0.1.
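You can verify this yourself by comparing the vocabularies (assuming you have access to the mistralai/Mistral-7B-v0.1 repository on the Hugging Face Hub); if the tokenizer is truly unchanged, both comparisons should return True:

>>> from transformers import AutoTokenizer
>>> occiglot_tok = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-eu5-instruct")
>>> mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
>>> occiglot_tok.vocab_size == mistral_tok.vocab_size
True
>>> occiglot_tok.get_vocab() == mistral_tok.get_vocab()
True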
Preliminary evaluation results can be found below. Please note that the non-English results are based on partially machine-translated datasets and English prompts (Belebele and the Okapi framework) and should therefore be interpreted with caution; e.g., they may be biased towards English model performance. We are currently working on more suitable benchmarks for Spanish, French, German, and Italian.
Training of the pre-trained model was supported by a compute grant on the 42 supercomputer, a central component in the development of hessian AI, the AI Innovation Lab (funded by the Hessian Ministry of Higher Education, Research and the Arts (HMWK) and the Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)), and the AI Service Centers (funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK)). The curation of the training data is partially funded by the BMWK through the project OpenGPT-X (project no. 68GX21007D).