SimonPu/gemma3:latest

The Google Gemma 3 models are multimodal (processing text and images) and feature a 128K context window with support for over 140 languages.

- 147 pulls · updated 4 months ago
- Capabilities: vision
- Digest: 2b36c49aae03 · 7.8GB
- Architecture: gemma3 · 11.8B parameters · Q4_K_M quantization
- Default parameters: num_ctx 16384 · stop "<end_of_turn>" · temperature 1

Readme

Gemma 3 Model

This model requires Ollama 0.6 or later (download it from the Ollama website). Both the 12B and 27B models are quantized using Q4_K_M; the only difference between the variants here is the size of the context window used to generate the next token.
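Once Ollama is running, the model can be called through its local REST API. The sketch below builds a request body for the `/api/generate` endpoint, mirroring the default parameters listed on this page; the model name and option values are taken from this page, while the endpoint and field names follow the public Ollama API. It assumes a local server on the default port 11434.

```python
import json

def build_generate_request(prompt, num_ctx=16384, temperature=1.0):
    # Request body for Ollama's /api/generate endpoint.
    return {
        "model": "SimonPu/gemma3:latest",
        "prompt": prompt,
        "stream": False,
        "options": {
            "num_ctx": num_ctx,          # context window in tokens
            "temperature": temperature,
            "stop": ["<end_of_turn>"],   # Gemma's end-of-turn marker
        },
    }

body = build_generate_request("Summarize the Gemma 3 model family in one sentence.")
print(json.dumps(body, indent=2))
```

The serialized body can then be POSTed to `http://localhost:11434/api/generate` with any HTTP client; raising `num_ctx` trades memory for a longer usable context.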

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained and instruction-tuned variants. Gemma 3 has a large 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

Gemma Terms of Use: Terms

Model Page: Gemma 3

Usage and Limitations

These models have certain limitations that users should be aware of.

Intended Usage

Open vision-language models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive; its purpose is to provide contextual information about the use cases the model creators considered as part of model training and development.

  • Content Creation and Communication
    • Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts.
    • Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications.
    • Text Summarization: Generate concise summaries of a text corpus, research papers, or reports.
    • Image Data Extraction: These models can be used to extract, interpret, and summarize visual data for text communications.
  • Research and Education
    • Natural Language Processing (NLP) and VLM Research: These models can serve as a foundation for researchers to experiment with VLM and NLP techniques, develop algorithms, and contribute to the advancement of the field.
    • Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice.
    • Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics.
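For the image-understanding use cases above, Ollama's `/api/chat` endpoint accepts base64-encoded images attached to a message via its `images` field. A minimal sketch of building such a request (the model name comes from this page; a local server and real image bytes are assumed, with a placeholder standing in here):

```python
import base64
import json

def build_chat_request(question, image_bytes):
    # /api/chat message with a base64-encoded image attached.
    return {
        "model": "SimonPu/gemma3:latest",
        "stream": False,
        "messages": [
            {
                "role": "user",
                "content": question,
                "images": [base64.b64encode(image_bytes).decode("ascii")],
            }
        ],
    }

# Placeholder bytes stand in for a real PNG or JPEG file read from disk.
req = build_chat_request("What does this chart show?", b"\x89PNG...")
print(json.dumps(req)[:80])
```

In practice the image bytes would come from `open("chart.png", "rb").read()`, and the response's `message.content` field carries the model's text answer.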

Limitations

  • Training Data
    • The quality and diversity of the training data significantly influence the model’s capabilities. Biases or gaps in the training data can lead to limitations in the model’s responses.
    • The scope of the training dataset determines the subject areas the model can handle effectively.
  • Context and Task Complexity
    • Models are better at tasks that can be framed with clear prompts and instructions.
    • Open-ended or highly complex tasks might be challenging.
    • A model’s performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point).
  • Language Ambiguity and Nuance
    • Natural language is inherently complex. Models might struggle to grasp subtle nuances, sarcasm, or figurative language.
  • Factual Accuracy
    • Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
  • Common Sense
    • Models rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations.

Citation

@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}