nsheth / llama-3-lumimaid-8b-v0.1-iq-imatrix
3,017 Downloads · Updated 1 year ago
This release uses the Q4_K_M-imat (4.89 BPW) quant, which supports context sizes up to 12288 on less than 8 GB of VRAM.
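Since the page only notes that the quant supports up to a 12288-token context, raising the default context window is left to the user. A minimal sketch of one way to do that with an Ollama Modelfile, assuming the standard Ollama CLI (`ollama create` / `ollama run`); the derived model name `lumimaid-12k` is a hypothetical example:

```
# Modelfile — derive a variant with a larger context window
FROM nsheth/llama-3-lumimaid-8b-v0.1-iq-imatrix:latest
# 12288 is the maximum context this quant is stated to support on <8 GB VRAM
PARAMETER num_ctx 12288
```

Build and run the variant with `ollama create lumimaid-12k -f Modelfile` followed by `ollama run lumimaid-12k`. Note that a larger `num_ctx` increases VRAM use, so actual headroom depends on your GPU.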
1 model

Name: llama-3-lumimaid-8b-v0.1-iq-imatrix:latest
Digest: e3c3c83ca732 · Size: 5.5GB · Context: 8K · Input: Text · Updated 1 year ago