
llama3, but with the context size maxed out

ollama run m/llama3:8b-max
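
The exact build recipe behind this tag isn't published, but a Modelfile along the following lines would reproduce the idea: start from the base llama3:8b model and raise num_ctx to 8192, the maximum context window Llama 3 8B supports (Ollama's default is lower). This is a sketch under those assumptions, not the actual recipe; only the tag m/llama3:8b-max comes from the run command above.

    # Modelfile (sketch, not the published recipe)
    # Start from the official Llama 3 8B base model
    FROM llama3:8b
    # Raise the context window to the model's 8192-token maximum
    # (Ollama's default num_ctx is smaller)
    PARAMETER num_ctx 8192

A model built from such a Modelfile would be created and run with:

    ollama create m/llama3:8b-max -f Modelfile
    ollama run m/llama3:8b-max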

