A Fine-Tuned Medical Chatbot Trained on Llama 3.1


MediTalk - Medical Assistant Model

MediTalk is a medical assistant model designed to provide general health-related information grounded in research and expert knowledge. Built on Llama 3.1, the model responds to queries with empathy, clarity, and professionalism, and declines sensitive or inappropriate questions with a polite refusal. It has been fine-tuned on the lavita/ChatDoctor-HealthCareMagic-100k dataset from Hugging Face, making it suitable for medical-related tasks.

The model is available for use on Ollama and Hugging Face, and can be interacted with similarly to GPT models.

Features

  • General Medical Information: MediTalk provides clear and concise responses to common health-related queries.
  • Polite Negation for Inappropriate Questions: If a user asks an awkward or inappropriate question, MediTalk responds with a polite negation like “I’m afraid I can’t answer that. Please ask something else related to health.”
  • Fine-Tuned for Medical Content: The model is fine-tuned using the lavita/ChatDoctor-HealthCareMagic-100k dataset for more relevant medical responses.
  • Maximum Tokens per Response: Responses are capped at 512 tokens.

Setup and Installation

Prerequisites

  • Ollama: Ensure you have Ollama installed on your system.
  • Run the model: Pull and start a conversation with the following command:
ollama run Elixpo/LlamaMedicine
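Beyond the CLI, a running Ollama instance also exposes a local REST API, so the model can be queried programmatically. The sketch below uses only the Python standard library and Ollama's `/api/chat` endpoint on the default port; the model tag is assumed to match the pull command above.

```python
"""Minimal sketch: query the model through Ollama's local REST API.

Assumes Ollama is running on the default port (11434) and that
`ollama run Elixpo/LlamaMedicine` has already pulled the model.
"""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"


def build_request(question: str) -> dict:
    # Shape of a non-streaming chat request to Ollama's /api/chat endpoint.
    return {
        "model": "Elixpo/LlamaMedicine",
        "messages": [{"role": "user", "content": question}],
        "stream": False,
        "options": {"num_predict": 512},  # matches the 512-token response cap
    }


def ask(question: str) -> str:
    payload = json.dumps(build_request(question)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]
```

With the Ollama server up, `ask("What are common symptoms of iron deficiency?")` returns the assistant's reply as a string.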

Model and Training Parameters:

  • Parameters: 760M
  • Layers: 16
  • Size: 4.9GB
  • Precision: 4-bit
  • Train Precision: bf16
  • Chat Template: Llama 3.1
  • Base Model: unsloth/llama-3.2-1b-instruct-bnb-4bit
  • Epochs: 60
  • GPU: T4 x2
  • System Requirements: GPU with more than 4 GB VRAM and CUDA support
  • Learning Rate: 2e-4
  • warmup_steps: 5
  • Dataset Format: ShareGPT
  • Trainer: SFTTrainer
  • Primary Dataset (15): Elixpo/llamaMediTalk


Model Availability

The model is published on Ollama (Elixpo/LlamaMedicine) and on Hugging Face, and can be interacted with much like GPT-style chat models.

Customization

  • System Instructions: Modify the system instructions in the Modelfile to adjust the assistant’s behavior.
  • Fine-Tuning: The model has been fine-tuned using the lavita/ChatDoctor-HealthCareMagic-100k dataset but can be further fine-tuned with other datasets to specialize in specific medical fields.
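Changing the system instructions means editing the Modelfile and rebuilding a local variant with `ollama create`. The Modelfile below is a sketch: `FROM`, `PARAMETER`, and `SYSTEM` are standard Ollama Modelfile directives, but the system text itself is illustrative, not the model's shipped prompt.

```
# Hypothetical Modelfile: builds a local variant with a custom system prompt.
FROM Elixpo/LlamaMedicine

# Keep responses within the documented 512-token cap.
PARAMETER num_predict 512

SYSTEM """You are MediTalk, a medical assistant. Answer health questions
clearly and empathetically, and politely decline anything off-topic or
inappropriate. Remind users you are not a substitute for a doctor."""
```

Build and run the variant with `ollama create meditalk-custom -f Modelfile` followed by `ollama run meditalk-custom`.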

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Llama 3.1: Powered by Meta's Llama models.
  • Ollama: Used for model deployment and management.
  • Hugging Face: Fine-tuned using lavita/ChatDoctor-HealthCareMagic-100k dataset.
  • Medical Disclaimer: The model provides general knowledge only; it does not replace professional medical advice.