Reverb-7b is a 7-billion-parameter language model developed by Ozone AI, available in F16, q8_0, q6_K, and q4_K_S quantizations.
---
language:
  - en
license: apache-2.0
tags:
  - causal-lm
  - text-generation
  - 7b
  - ozone-ai
  - reverb
  - open-source
datasets: []
metrics:
  - type: MMLU Pro
    value: 0.4006
---
Reverb-7b: Ozone AI
Model Description
Reverb-7b is a 7 billion parameter language model developed by Ozone AI. It is a causal language model designed for text generation and various downstream tasks. This is the third model release by Ozone AI.
Join Our Discord
Intended Uses & Limitations
Reverb-7b is intended for research and conversational use in natural language processing. Potential use cases include:
- Text generation
- Question answering
- Summarization
- Code generation (performance may vary)
- Creative writing
Limitations:
- Like all language models, Reverb-7b can generate biased or harmful content. Users should implement appropriate safeguards to mitigate these risks.
- The model’s performance may vary depending on the specific task and dataset.
- Important Safety Note: We have observed that at lower quantization levels (e.g., below 4-bit), the model's safety guardrails may be less effective. The model may be more likely to generate inappropriate or harmful content, or to solicit personal information. Exercise extreme caution when using Reverb-7b at these lower quantization levels and implement strict input/output filtering; a minimal sketch follows this list.
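Since the quantized GGUF builds (q4_K_S and below) are the ones most affected, here is a minimal sketch of output filtering around a quantized checkpoint. It assumes the llama-cpp-python bindings and a hypothetical local filename for the q4_K_S build; the keyword blocklist is purely illustrative, and a production deployment would use a proper moderation model instead.

```python
# Minimal sketch: output filtering around a quantized Reverb-7b build.
# Assumptions: llama-cpp-python is installed and the q4_K_S GGUF file has
# been downloaded locally (the filename below is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="./reverb-7b.q4_K_S.gguf")  # hypothetical path

# Illustrative blocklist only; a real deployment would use a moderation model.
BLOCKED_TERMS = ["password", "credit card", "social security"]

def generate_filtered(prompt: str, max_tokens: int = 128) -> str:
    out = llm(prompt, max_tokens=max_tokens)
    text = out["choices"][0]["text"]
    # Withhold completions that touch the blocklist.
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return "[output withheld by safety filter]"
    return text

print(generate_filtered("Tell me about yourself."))
```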
How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ozone-ai/Reverb-7b"

# Download the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and generate up to 50 total tokens (prompt included).
prompt = "The quick brown fox jumps over the lazy dog."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```
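Because chatting is among the intended uses, prompts formatted with the tokenizer's chat template will generally behave better than raw text. The following is a minimal sketch assuming the repository ships a Qwen-style chat template (plausible given the Qwen base noted under Attribution); the example message and `max_new_tokens=128` are arbitrary choices.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ozone-ai/Reverb-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Format a conversation with the bundled chat template (assumed present).
messages = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Bound the number of newly generated tokens rather than the total length.
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```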
Evaluation
Benchmarks
The following table shows the performance of Reverb-7b on various benchmarks:
| Benchmark | Metric | Value |
|---|---|---|
| MMLU Pro | Average Accuracy | 0.4006 |
| MMLU Pro | Biology | 0.6904 |
| MMLU Pro | Business | 0.3143 |
| MMLU Pro | Chemistry | 0.2314 |
| MMLU Pro | Computer Science | 0.4000 |
| MMLU Pro | Economics | 0.5758 |
| MMLU Pro | Engineering | 0.3148 |
| MMLU Pro | Health | 0.5183 |
| MMLU Pro | History | 0.4934 |
| MMLU Pro | Law | 0.3315 |
| MMLU Pro | Math | 0.2983 |
| MMLU Pro | Other | 0.4372 |
| MMLU Pro | Philosophy | 0.4409 |
| MMLU Pro | Physics | 0.2910 |
| MMLU Pro | Psychology | 0.5990 |
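Note that the unweighted mean of the fourteen category scores is about 0.424, which is higher than the reported 0.4006; the overall MMLU Pro accuracy is presumably weighted by the number of questions per category, so the two figures are not expected to match. A quick check, assuming nothing beyond the table above:

```python
# Sanity check on the table above: the unweighted (macro) mean of the
# per-category scores differs from the reported overall 0.4006, which is
# presumably weighted by the number of questions in each category.
scores = {
    "Biology": 0.6904, "Business": 0.3143, "Chemistry": 0.2314,
    "Computer Science": 0.4000, "Economics": 0.5758, "Engineering": 0.3148,
    "Health": 0.5183, "History": 0.4934, "Law": 0.3315, "Math": 0.2983,
    "Other": 0.4372, "Philosophy": 0.4409, "Physics": 0.2910,
    "Psychology": 0.5990,
}
macro_mean = sum(scores.values()) / len(scores)
print(f"macro mean: {macro_mean:.4f}")  # ~0.4240 vs reported 0.4006
```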
Training Details
- Training infrastructure: 1× NVIDIA H100 PCIe
- Training procedure: 1 epoch
Contact
For questions or feedback, please contact us at contact@ozone-ai.com or via https://ozone-ai.com.
Attribution
Built with Qwen. Users of this model must agree to the Qwen license agreement.
Model Card Authors
- Vneq - CEO @ Ozone AI
- Tristan - CEO @ ShuttleAI, CTO @ Ozone AI