Import of crusoeai/Llama-3-8B-Instruct-1048k-GGUF with a 1048k-token context window!
8b
62 Pulls Updated 7 months ago
e5708d1f7ad0 · 3.2GB
model: arch llama · parameters 8.03B · quantization Q2_K · 3.2GB
Readme
https://huggingface.co/crusoeai/Llama-3-8B-Instruct-1048k-GGUF/tree/main
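A GGUF file from that repository can be loaded into Ollama through a Modelfile. The exact Modelfile behind this import isn't published here, so the filename and the num_ctx value below are assumptions; a minimal sketch:

# Modelfile (filename is an assumption; point FROM at the GGUF you downloaded)
FROM ./Llama-3-8B-Instruct-1048k.Q2_K.gguf
# The weights support up to 1048576 tokens of context, but KV-cache memory grows
# with num_ctx, so set it to what your RAM/VRAM can actually hold.
PARAMETER num_ctx 32768

Build it with: ollama create llama-3-8b-instruct-1048k -f Modelfile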
Llama 3, now extended to a 1048K / 1048576-token context!
https://www.reddit.com/r/LocalLLaMA/comments/1cg8uzp/llama38binstruct_now_extended_1048576_context/
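Note that Ollama's default context window is far smaller than the model's limit, so the long context has to be requested explicitly. A minimal usage sketch with the ollama Python client; the model tag is an assumption (use whatever name the import was created under), and num_ctx should be scaled to available memory:

import ollama

# Model tag is an assumption: use whatever name this import is published under locally.
MODEL = "llama-3-8b-instruct-1048k"

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Give a one-sentence summary of Moby-Dick."}],
    # Ask for a large context window at request time. The model accepts up to
    # 1048576 tokens, but KV-cache memory grows with num_ctx, so pick a value
    # your machine can actually hold.
    options={"num_ctx": 65536},
)
print(response["message"]["content"])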