Model Identifier: mohamedbastawih/gemma4-e4b-coding
Library Source: ollama.com/mohamedbastawih/gemma4-e4b-coding
Role: Local OpenCode Agent
Primary Objective: To serve as the intelligence engine for an autonomous coding agent capable of interacting with local development environments, managing file systems, and executing code.
Gemma4-e4b-coding is specifically tuned to bridge the gap between a standard LLM and a functional software engineer. It does not just suggest code; it generates actionable instructions (tool calls) to manipulate files and run commands locally.
The model is optimized for the following “Agent” behaviors:
The model is trained to recognize when a natural language request requires a system action. Instead of a conversational response, it outputs a structured payload that a local controller (the Agent Wrapper) can execute.
The model is designed to work with an agent that provides the following permissions:

- Read/Analyze: ingest existing codebases to understand context before proposing changes.
- Write/Modify: create new files or apply precise edits to existing source code.
- Project Mapping: list directories to navigate complex project architectures.
The model operates on an Observation → Thought → Action cycle:
1. Observe: Read the file or error message.
2. Think: Determine the logical fix or feature implementation.
3. Act: Call the write_file or execute_cmd tool.
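The Act step can be sketched as a minimal dispatcher that executes one model-emitted tool call and returns the observation for the next cycle. This is a hedged sketch, not the model's actual wrapper: the action names match the tool list in this card, but the handler details are illustrative.

```python
import json
import os
import subprocess

def run_tool(payload: str) -> str:
    """Execute one model-emitted tool call and return the observation text.

    Assumes the model emits a single JSON object per action, e.g.
    {"action": "read_file", "path": "server.js"}.
    """
    call = json.loads(payload)
    action = call["action"]
    if action == "read_file":
        with open(call["path"]) as f:
            return f.read()
    if action == "write_file":
        with open(call["path"], "w") as f:
            f.write(call["content"])
        return f"wrote {call['path']}"
    if action == "list_dir":
        return "\n".join(sorted(os.listdir(call["path"])))
    if action == "execute_cmd":
        # Dangerous by design: runs arbitrary shell commands. Sandbox or
        # gate this behind user confirmation in a real wrapper.
        result = subprocess.run(call["command"], shell=True,
                                capture_output=True, text=True)
        return result.stdout + result.stderr
    return f"unknown action: {action}"
```

A real wrapper would add path whitelisting and error handling; this sketch only shows the dispatch shape.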
To use this model as the core of your agent, pull it directly from the Ollama library:
ollama pull mohamedbastawih/gemma4-e4b-coding
Because LLMs cannot directly touch your hardware, Gemma4-e4b-coding acts as the Brain, while a local script (Python/Node.js) acts as the Hands.
Sequence of Operation:
1. User Input: “Update the API endpoint in server.js to handle 404 errors.”
2. Model Inference: Gemma4-e4b-coding triggers a tool call:
→ {"action": "read_file", "path": "server.js"}
3. Local Wrapper: Reads the file → feeds the content back to the model.
4. Model Inference: Processes the code → generates the fix:
→ {"action": "write_file", "path": "server.js", "content": "...updated code..."}
5. Local Wrapper: Commits the change to disk.
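The five-step sequence above can be sketched as a simple inference loop, assuming the model replies with either a JSON tool call or a plain-text final answer. The `model` and `run_tool` callables here are placeholders for your own wrapper, not a published API:

```python
import json

def agent_loop(model, user_request, run_tool, max_steps=8):
    """Drive the Observe -> Think -> Act cycle until the model stops
    emitting tool calls or the step budget runs out.

    model(messages) -> str        # your call into Ollama
    run_tool(call: dict) -> str   # your local tool dispatcher
    """
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        reply = model(messages)
        messages.append({"role": "assistant", "content": reply})
        try:
            call = json.loads(reply)
        except json.JSONDecodeError:
            call = None
        if not isinstance(call, dict) or "action" not in call:
            return reply  # plain text: treat it as the final answer
        observation = run_tool(call)
        messages.append({"role": "tool", "content": observation})
    return "step limit reached"
```

The step budget guards against the model looping forever on the same file; tune `max_steps` to your task size.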
The model is optimized to produce the following JSON-style calls:
| Tool | Parameters | Purpose |
|---|---|---|
| `read_file` | `path` | Retrieves file content for analysis. |
| `write_file` | `path`, `content` | Creates or updates a local file. |
| `list_dir` | `path` | Explores the project folder structure. |
| `execute_cmd` | `command` | Runs terminal commands (tests, installs, etc.). |
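Before executing a payload, a wrapper can sanity-check it against this tool schema. A minimal sketch; the required-parameter sets mirror the table above, but the helper itself is illustrative:

```python
# Required parameters per tool, taken from the tool table in this card.
REQUIRED = {
    "read_file": {"path"},
    "write_file": {"path", "content"},
    "list_dir": {"path"},
    "execute_cmd": {"command"},
}

def validate(call: dict) -> bool:
    """Return True if the call names a known tool and supplies
    every required parameter; reject anything else."""
    action = call.get("action")
    return action in REQUIRED and REQUIRED[action] <= call.keys()
```

Rejecting malformed calls early lets the wrapper feed a corrective error message back to the model instead of crashing mid-task.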
Depending on your local wrapper, you can run the model via:
ollama run mohamedbastawih/gemma4-e4b-coding
To get the best agentic performance, provide a system prompt that reinforces its role:
“You are the OpenCode Agent. You have access to a local filesystem. Do not just explain the code; use the provided tools to read, write, and verify your changes. Always read a file before editing it.”
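One way to supply that system prompt is through Ollama's REST `/api/chat` endpoint. The sketch below builds the request with Python's standard library; the endpoint and payload shape follow the Ollama API, but error handling and streaming are omitted for brevity:

```python
import json
import urllib.request

SYSTEM_PROMPT = (
    "You are the OpenCode Agent. You have access to a local filesystem. "
    "Do not just explain the code; use the provided tools to read, write, "
    "and verify your changes. Always read a file before editing it."
)

def build_chat_request(user_msg: str) -> dict:
    """Assemble the /api/chat payload with the agent system prompt."""
    return {
        "model": "mohamedbastawih/gemma4-e4b-coding",
        "stream": False,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_msg},
        ],
    }

def chat(user_msg: str, host: str = "http://localhost:11434") -> str:
    """Send one chat turn to a local Ollama server and return the reply."""
    req = urllib.request.Request(
        host + "/api/chat",
        data=json.dumps(build_chat_request(user_msg)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

With `stream` set to `False`, the server returns a single JSON object whose `message.content` field holds the model's reply.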
⚠️ Warning: This model is designed to generate commands that modify your local files. Review every write_file or execute_cmd action before your wrapper executes it; for safe, read-only exploration, restrict the agent to the read_file and list_dir tools.

| Feature | Standard Gemma | Gemma4-e4b-coding (Agent) |
|---|---|---|
| Output Style | Descriptive/Conversational | Actionable/Tool-Oriented |
| File Interaction | Copy/Paste Manual | Direct Read/Write (via Wrapper) |
| Project Context | User-provided snippets | Full Directory Analysis |
| Execution | Suggests commands | Triggers actual shell execution |
| Availability | General | `ollama pull mohamedbastawih/gemma4-e4b-coding` |