denisavetisyan/gemma3-27b-q4_k_m-32k:latest
73 Downloads · Updated 2 months ago
Customized Gemma 3 27B (Q4_K_M) with a 32k context window, optimized for 24GB-VRAM GPUs (such as the NVIDIA GeForce RTX 4090, NVIDIA GeForce RTX 3090, or the AMD Radeon RX 7900 XTX on Linux).
Capabilities: vision
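To run this tag locally with the Ollama CLI (a minimal sketch; it assumes a recent Ollama install and enough VRAM for the 17GB weights plus the 32k-context KV cache):

ollama pull denisavetisyan/gemma3-27b-q4_k_m-32k:latest
ollama run denisavetisyan/gemma3-27b-q4_k_m-32k:latest

Running ollama show denisavetisyan/gemma3-27b-q4_k_m-32k:latest prints the architecture, parameter count, template, and params listed below, which is a quick way to confirm the 32k num_ctx is in effect.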
bb03b0e987fe · 17GB

model (17GB): arch gemma3 · parameters 27.4B · quantization Q4_K_M
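A rough sanity check on the 24GB VRAM target (approximate, assuming Q4_K_M averages about 4.8 bits per weight): 27.4B parameters × 4.8 bits ÷ 8 ≈ 16.4 GB of weights, consistent with the 17GB blob above, leaving roughly 7 GB of a 24 GB card for the KV cache at the 32k context plus runtime overhead.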
template (358B, shown truncated):
{{- range $i, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $i)) 1 }} {{- if or (eq .Rol
params (93B, shown truncated):
{ "num_ctx": 32000, "stop": [ "<end_of_turn>" ], "temperature": 1, "top_
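The params block corresponds to what a Modelfile for this customization would set. A minimal sketch, assuming the base image is Ollama's gemma3:27b tag (Q4_K_M by default) and filling in only the values visible above; the truncated top_ entry is left out rather than guessed:

# Hypothetical Modelfile reproducing the visible params.
# The base tag is an assumption; any Gemma 3 27B Q4_K_M image would do.
FROM gemma3:27b
# Raise the context window to 32k tokens.
PARAMETER num_ctx 32000
# Stop generation at Gemma's end-of-turn token.
PARAMETER stop "<end_of_turn>"
PARAMETER temperature 1

Building and tagging it locally would then be: ollama create gemma3-27b-q4_k_m-32k -f Modelfile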
Readme
No readme