An unofficial implementation of Lawyer LLaMA, with Q4_0 quantization.

13B


Readme

When running on a GPU, Ollama Docker image version 0.1.32 is required; on a CPU, any image version works, but inference will be slower.
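Assuming the standard Ollama Docker setup, pinning the image version might look like the following sketch (the container name and volume name are placeholders; GPU use requires the NVIDIA container toolkit):

```
# GPU: pin the image to 0.1.32
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:0.1.32

# CPU only: any image version works, but generation is slower
docker run -d -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```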

template = qwen1.5
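The qwen1.5 template corresponds to a ChatML-style prompt format. If you are building the model locally, a Modelfile using that template might look like the following sketch (the GGUF filename is a placeholder, not the actual weight file shipped with this model):

```
FROM ./lawyer-llama-13b-q4_0.gguf

TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ .Response }}"""

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```

With such a Modelfile, `ollama create <model-name> -f Modelfile` builds the model and `ollama run <model-name>` starts an interactive session.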

Citation

@misc{huang2023lawyer,
      title={Lawyer LLaMA Technical Report},
      author={Quzhe Huang and Mingxu Tao and Chen Zhang and Zhenwei An and Cong Jiang and Zhibin Chen and Zirui Wu and Yansong Feng},
      year={2023},
      eprint={2305.15062},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{Lawyer-LLama,
      title={Lawyer Llama},
      author={Quzhe Huang and Mingxu Tao and Chen Zhang and Zhenwei An and Cong Jiang and Zhibin Chen and Zirui Wu and Yansong Feng},
      year={2023},
      publisher={GitHub},
      journal={GitHub repository},
      howpublished={\url{https://github.com/AndrewZhe/lawyer-llama}},
}