Lucie-7B-Instruct is a fine-tuned version of Lucie-7B, an open-source, multilingual causal language model created by OpenLLM-France.
Lucie-7B-Instruct is fine-tuned on synthetic instructions produced by ChatGPT and Gemma, along with a small set of customized prompts about OpenLLM and Lucie. It is optimized for the generation of French text. Note that it has not been trained for code generation or optimized for math. Such capacities can be improved through further fine-tuning and alignment with methods such as DPO, RLHF, etc.
While Lucie-7B-Instruct is trained on sequences of 4096 tokens, its base model, Lucie-7B has a context size of 32K tokens. Based on Needle-in-a-haystack evaluations, Lucie-7B-Instruct maintains the capacity of the base model to handle 32K-size context windows.
Lucie-7B-Instruct is trained on the following datasets:
* Alpaca-cleaned (English; 51604 samples)
* Alpaca-cleaned-fr (French; 51655 samples)
* Magpie-Gemma (English; 195167 samples)
* Wildchat (French subset; 26436 samples)
* Hard-coded prompts concerning OpenLLM and Lucie (based on allenai/tulu-3-hard-coded-10x)
  * French: openllm_french.jsonl (24x10 samples)
  * English: openllm_english.jsonl (24x10 samples)
Lucie-7B-Instruct was trained with the chat template from Llama 3.1, the sole difference being that <|begin_of_text|> is replaced with <s>. The resulting template:
```
<s><|start_header_id|>system<|end_header_id|>

{SYSTEM}<|eot_id|><|start_header_id|>user<|end_header_id|>

{INPUT}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{OUTPUT}<|eot_id|>
```
An example:
```
<s><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Give me three tips for staying in shape.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

1. Eat a balanced diet and be sure to include plenty of fruits and vegetables.
2. Exercise regularly to keep your body active and strong.
3. Get enough sleep and maintain a consistent sleep schedule.<|eot_id|>
```
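If you build prompts programmatically, there is no need to assemble this string by hand. A minimal sketch with the Hugging Face transformers library, assuming the tokenizer published with the model carries this chat template:

```python
from transformers import AutoTokenizer

# Assumes the tokenizer shipped with the model defines the template above
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-France/Lucie-7B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me three tips for staying in shape."},
]

# Render the conversation into a single prompt string; add_generation_prompt
# appends the assistant header so the model continues from there.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```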
The model architecture and hyperparameters are the same as for Lucie-7B during the annealing phase, with the following exceptions:
* context length: 4096*
* batch size: 1024
* max learning rate: 3e-5
* min learning rate: 3e-6
*As noted above, while Lucie-7B-Instruct is trained on sequences of 4096 tokens, it maintains the capacity of the base model, Lucie-7B, to handle context sizes of up to 32K tokens.
The following instructions describe how to run Lucie from a GGUF file downloaded from Hugging Face on your local Ollama instance. To use Ollama directly, you can simply run ollama run OpenLLM-France/Lucie-7B-Instruct. Otherwise:
* Download the GGUF weights from the model's Hugging Face repository.
* Copy the Modelfile, adapting if necessary the path to the GGUF file (the line starting with FROM); a sketch is given after this list.
* Create the model with ollama create -f Modelfile Lucie.
* Chat with it via ollama run Lucie. Within a session, /clear resets the context and /bye ends the session.
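A minimal Modelfile sketch, under two assumptions: the GGUF filename below is a placeholder you must adapt, and the TEMPLATE simply restates the chat template shown earlier using Ollama's template variables.

```
# Placeholder path; point FROM at your actual GGUF download.
FROM ./Lucie-7B-Instruct.gguf

# Restates the chat template above (Llama 3.1 format with <s> in place
# of <|begin_of_text|>) using Ollama's template variables.
TEMPLATE """<s><|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

# Stop generation at the end-of-turn token.
PARAMETER stop "<|eot_id|>"
```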
Useful for debugging:
* How to print input requests and output responses in Ollama server?
* Documentation on Modelfile
* Examples: Ollama model library
* Llama 3 example: https://ollama.com/library/llama3.1
* Add a GUI: https://docs.openwebui.com/
Use the following command to deploy the model with vLLM, replacing INSERT_YOUR_HF_TOKEN with your Hugging Face Hub token.
```bash
docker run --runtime nvidia --gpus=all \
    --env "HUGGING_FACE_HUB_TOKEN=INSERT_YOUR_HF_TOKEN" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model OpenLLM-France/Lucie-7B-Instruct
```
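Once the container is up, you can sanity-check the endpoint with a plain HTTP request before reaching for a client library; a sketch, relying on vLLM's standard OpenAI-compatible chat completions route:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "OpenLLM-France/Lucie-7B-Instruct",
        "messages": [{"role": "user", "content": "Hello Lucie"}]
      }'
```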
To test the deployed model, use the OpenAI Python client as follows:
```python
from openai import OpenAI

# Initialize the client
client = OpenAI(base_url='http://localhost:8000/v1', api_key='empty')

# Define the input content
content = "Hello Lucie"

# Generate a response
chat_response = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct",
    messages=[
        {"role": "user", "content": content}
    ],
)
print(chat_response.choices[0].message.content)
```
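The same endpoint also supports token streaming through the standard OpenAI client; a minimal sketch under the same assumptions (local vLLM server on port 8000):

```python
from openai import OpenAI

# Same local vLLM server as above; the API key is unused but required
client = OpenAI(base_url='http://localhost:8000/v1', api_key='empty')

# stream=True yields chunks as tokens are generated
stream = client.chat.completions.create(
    model="OpenLLM-France/Lucie-7B-Instruct",
    messages=[{"role": "user", "content": "Hello Lucie"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```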
When using the Lucie-7B-Instruct model, please cite the following paper:
✍ Olivier Gouvert, Julie Hunter, Jérôme Louradour, Evan Dufraisse, Yaya Sy, Pierre-Carl Langlais, Anastasia Stasenko, Laura Rivière, Christophe Cerisara, Jean-Pierre Lorré (2025). The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation. arXiv preprint.
```bibtex
@misc{openllm2025lucie,
  title={The Lucie-7B LLM and the Lucie Training Dataset:
         open resources for multilingual language generation},
  author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Evan Dufraisse and Yaya Sy and Pierre-Carl Langlais and Anastasia Stasenko and Laura Rivière and Christophe Cerisara and Jean-Pierre Lorré},
  year={2025},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
This work was performed using HPC resources from GENCI–IDRIS (Grant 2024-GC011015444). We gratefully acknowledge support from GENCI and IDRIS and from Pierre-François Lavallée (IDRIS) and Stephane Requena (GENCI) in particular.
Lucie-7B was created by members of LINAGORA and the OpenLLM-France community, including in alphabetical order: Olivier Gouvert (LINAGORA), Ismaïl Harrando (LINAGORA/SciencesPo), Julie Hunter (LINAGORA), Jean-Pierre Lorré (LINAGORA), Jérôme Louradour (LINAGORA), Michel-Marie Maudet (LINAGORA), and Laura Rivière (LINAGORA).
We thank Clément Bénesse (Opsci), Christophe Cerisara (LORIA), Émile Hazard (Opsci), Evan Dufraisse (CEA), Guokan Shang (MBZUAI), Joël Gombin (Opsci), Jordan Ricker (Opsci), and Olivier Ferret (CEA) for their helpful input.
Finally, we thank the entire OpenLLM-France community, whose members have helped in diverse ways.
contact@openllm-france.fr