Llama 3 Uncensored Lumi Tess Gradient 70B - GGUF

This repository contains the Ollama-compatible version of the Llama 3 Uncensored Lumi Tess Gradient 70B model in GGUF format.

Model Information

  • Original Creator: ryzen88
  • Quantized by: backyardai
  • Model Type: Llama 3
  • Language(s): English

Description

This is a GGUF (GPT-Generated Unified Format) version of the Llama 3 Uncensored Lumi Tess Gradient 70B model. It is an uncensored, long-context model created with a breadcrumbs-ties merge of the Instruct-gradient, Lumimaid, and Tess models, and it works well with a wide range of sampler settings (see the usage sketch after the feature list below).

Key features:
- Uncensored responses
- Long context (262,144 tokens)
- Compatible with llama.cpp-based applications
- Quantized for efficient resource usage
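
As a rough illustration of the Ollama compatibility described above, the sketch below calls the model through the official `ollama` Python client. The model tag matches this repository's name, and the sampler values are placeholder assumptions rather than recommended settings; adjust both to your setup.

```python
# Minimal usage sketch with the official `ollama` Python client (pip install ollama).
# The model tag and the sampler values are assumptions; substitute your own.
import ollama

MODEL = "backyardai/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF"

# Download the model if it is not already present locally.
ollama.pull(MODEL)

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}],
    options={
        "temperature": 0.8,   # example values only; the model tolerates a wide range
        "top_p": 0.95,
        "repeat_penalty": 1.05,
    },
)

print(response["message"]["content"])
```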

Model Details

  • Architecture: Llama
  • Model Size: 70.6B parameters
  • Context Length: 262,144 tokens
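
Note that Ollama typically loads models with a much smaller context window than they were trained for, so the long context generally has to be requested explicitly. The sketch below does this per request with the `num_ctx` option; the value shown is an arbitrary assumption, since the full 262,144-token window requires far more memory than most machines have.

```python
# Sketch: raising the context window for a single request via the num_ctx option.
# 32768 is an arbitrary example; size it to your hardware.
import ollama

response = ollama.generate(
    model="backyardai/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF",
    prompt="Summarize the following document in one paragraph:\n\n<document text>",
    options={"num_ctx": 32768},
)

print(response["response"])
```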

Ethical Considerations

This model is uncensored and may generate content that some users consider offensive or inappropriate. Use it responsibly and in compliance with applicable laws and ethical guidelines.

Support

For issues related to the model itself, please open an issue in this repository. For Ollama-specific questions, refer to the Ollama documentation.

Acknowledgments

Thanks to ryzen88 for creating the original model, and to the teams behind the Instruct-gradient, Lumimaid, and Tess models.
