
GLM-4-0414 32B with 128k context via YaRN RoPE scaling; supports tool calling. Needs Ollama >= 0.6.6.


Readme

Quantized with YaRN RoPE scaling to 128k context (YaRN factor 4). This needs Ollama >= 0.6.6 to run. num_ctx in the Modelfile defaults to 64k just because I don't have gobs of VRAM.
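
If you do have the VRAM for the full window, you can override the default yourself. A minimal sketch, assuming a hypothetical local tag for this model (substitute the actual tag):

```
# Hypothetical Modelfile: raise num_ctx from the 64k default to the
# full 128k window the YaRN-scaled quantization supports.
# The FROM tag below is a placeholder; use this model's real tag.
FROM glm4-0414-128k
PARAMETER num_ctx 131072
```

Build and run it with:

```
ollama create glm4-full-ctx -f Modelfile
ollama run glm4-full-ctx
```

Alternatively, set it per request via the API options, e.g. "options": {"num_ctx": 131072} in an /api/generate or /api/chat call.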