An uncensored model based on GLM-4.6v-flash:9b (q5_k_m). For local use I recommend lowering the context in the Modelfile, as it is set to 128K. EDIT: a new locally optimised model with the same weights but a 4096 context is available at https://ollama.com/ShreyanGondaliya/s5-reduced

ollama run ShreyanGondaliya/s5

Models

1 model

s5:latest

7.1GB · 128K context window · Text · 2 months ago

Readme

An uncensored model based on GLM-4.6v-flash:9b (q5_k_m). For local use I recommend lowering the context in the Modelfile, as it is set to 128K. Please note that I have published a new model with the same weights but a lower context, for the same reasoning with local usage: https://ollama.com/ShreyanGondaliya/s5-reduced
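The Modelfile edit suggested above can be sketched as follows (the `num_ctx` value and the local model name `s5-local` are illustrative, not part of the published model):

```
# Hypothetical Modelfile: build a local variant with a smaller context window
FROM ShreyanGondaliya/s5

# Lower the context window from 128K to a size local hardware can handle
PARAMETER num_ctx 4096
```

Then create and run the local variant:

```
ollama create s5-local -f Modelfile
ollama run s5-local
```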