Ollama
chatglm · Ollama
Search for models on Ollama.
  • open-orca-platypus2

    Merge of the Open Orca OpenChat model and the Garage-bAInd Platypus 2 model. Designed for chat and code generation.

    13b

    384.4K Pulls · 17 Tags · Updated 2 years ago

  • EntropyYue/chatglm3

    ChatGLM3-6B is the latest generation of open-source models in the ChatGLM series, retaining many of the excellent features of the previous two generations, such as smooth dialogue and a low deployment threshold.

    6b

    6,620 Pulls · 2 Tags · Updated 1 year ago

  • chatgph/medix-ph

    Medix-PH is a state-of-the-art language model specialized for the healthcare industry in the Philippines, enhancing medical communication, diagnostics, and patient care with advanced AI capabilities.

    422 Pulls · 1 Tag · Updated 1 year ago

  • chatgph/70b-instruct

    Chat-GPH (Chat-Pinoy Hypermodel) is a large language model (LLM) characterized by exceptional intellect and an unwavering passion for crafting innovative solutions within the realm of technology.

    127 Pulls · 1 Tag · Updated 1 year ago

  • chatgph/gph-main

    A Pinoy LLM hypermodel with high intellect and a passion for crafting groundbreaking solutions in the field of technology, with expertise encompassing a comprehensive understanding of software engineering principles.

    vision

    2,284 Pulls · 1 Tag · Updated 1 year ago

  • Bored/GigaChat3.1-10B-A1.8B-q4_K_M

    GigaChat 3.1 Lightning is the compact instruct model of the GigaChat 3.1 family. It is a Mixture-of-Experts (MoE) model with 10B total parameters and 1.8B active parameters, designed for fast multilingual assistant workloads, reasoning, code

    28 Pulls · 1 Tag · Updated yesterday

  • supachai/openthaigpt-1.0.0-chat

    Unofficial build of 🇹🇭 OpenThaiGPT, Thai-language chat models based on LLaMA, on Ollama. (https://huggingface.co/collections/openthaigpt/openthaigpt-100-661a7a5920f5d49d132cf709)

    250 Pulls · 2 Tags · Updated 1 year ago

  • schroneko/calm3-22b-chat

    CyberAgentLM3 is a decoder-only language model pre-trained on 2.0 trillion tokens from scratch. CyberAgentLM3-Chat is a fine-tuned model specialized for dialogue use cases.

    432 Pulls · 14 Tags · Updated 1 year ago

  • RecognaNLP/chatbode

    ChatBode is a language model adapted for the Portuguese language, developed from the InternLM2 model and refined through fine-tuning on the UltraAlpaca dataset.

    7b 20b

    385 Pulls · 25 Tags · Updated 1 year ago

  • Huzderu/txgemma-27B-chat-Q8_0_GGUF

    Q8 quant of Google's txgemma 27B Chat model. All credit to Bartowski for the quant; I simply uploaded it to Ollama.

    206 Pulls · 1 Tag · Updated 1 year ago

  • microai/calm2-7b-chat

    CyberAgentLM2-Chat is a fine-tuned model of CyberAgentLM2 for dialogue use cases.

    280 Pulls · 1 Tag · Updated 1 year ago

  • theli/sus-chat

    SUS-Chat-34B is a 34B bilingual Chinese-English dialogue model, jointly released by the Southern University of Science and Technology and IDEA-CCNL.

    93 Pulls · 1 Tag · Updated 2 years ago

  • kaizu/bn_chat

    Fine-tuned version of llama2-v0.1-instruct from BanglaLLM on Hugging Face. Quantized to 4-bit (q4_K_M) using llama.cpp.

    34 Pulls · 1 Tag · Updated 1 year ago

  • kaizu/bn_chat_2

    Fine-tuned version of llama2-v0.1-instruct from BanglaLLM on Hugging Face. Quantized to 4-bit (q4_K_M) using llama.cpp. Trained on 2× T4 GPUs.

    18 Pulls · 1 Tag · Updated 1 year ago

  • chand1012/rocket

    A 3B parameter GPT-like model fine-tuned on a mix of publicly available datasets using DPO.

    331 Pulls · 1 Tag · Updated 2 years ago

  • charlestang06/openbiollm

    OpenBioLLM 8B (GGUF): A State-of-the-Art Open Source Biomedical Large Language Model

    284 Pulls · 1 Tag · Updated 1 year ago

  • charlestang06/pmc_llama_13b_gguf

    GGUF quantization of a Llama 13B model instruction fine-tuned on medical Q&A.

    237 Pulls · 1 Tag · Updated 1 year ago

  • forzer/GigaChat3-10B-A1.8B

    GigaChat3-10B-A1.8B is a dialogue model of the GigaChat family. The model is based on a Mixture-of-Experts (MoE) architecture with 10B total and 1.8B active parameters. The architecture includes Multi-head Latent Attention and Multi-Token Prediction.

    363 Pulls · 1 Tag · Updated 2 months ago

  • Bored/gigachat3-10B-A1.8

    GigaChat3-10B-A1.8B is a dialogue model of the GigaChat family. The model is based on a Mixture-of-Experts (MoE) architecture with 10B total and 1.8B active parameters. The architecture includes Multi-head Latent Attention and Multi-Token Prediction.

    103 Pulls · 1 Tag · Updated 1 month ago

  • henryehogg/quantum-quinn

    Quantum Quinn is a small chatbot based on TinyLlama that serves as a learning environment for future models. This model is very basic and hallucinates a lot; this LLM is just a proof of concept.

    13 Pulls · 1 Tag · Updated 3 months ago

© 2026 Ollama