
Security-focused local LLM. huihui-4 8B (Gemma-4 abliterated, native thinking) with a hardened system prompt for CTF, bug bounty, pentest methodology, and vulnerability research. Concise, direct, no disclaimers, no refusals. Apache 2.0.

ollama run supergoatscriptguy/mythos-sec

Applications

  • Claude Code: ollama launch claude --model supergoatscriptguy/mythos-sec
  • OpenClaw: ollama launch openclaw --model supergoatscriptguy/mythos-sec
  • Hermes Agent: ollama launch hermes --model supergoatscriptguy/mythos-sec
  • Codex: ollama launch codex --model supergoatscriptguy/mythos-sec
  • OpenCode: ollama launch opencode --model supergoatscriptguy/mythos-sec


mythos-sec

A security-focused local LLM built on top of huihui_ai/huihui-4:8b — a recent (early 2026) abliterated derivative of Google’s Gemma-4 architecture — with a hardened system prompt aimed at offensive/defensive security work, CTFs, bug bounties, and vulnerability research.

No fine-tuning. No weight modification. Pure system-prompt customization on a thinking-capable Gemma-4 base.

What this is

  • Base model: huihui_ai/huihui-4:8b (Gemma-4 architecture, 8.7B params, abliterated)
  • License: Apache 2.0
  • Customization: system prompt only, no weight changes
  • Context: 32k (base supports up to 262k)
  • Capabilities inherited from base: completion, vision, tools, native thinking
  • Quantization: Q4_K_M (~5 GB on disk)

What this is not

The system prompt internally tells the model “you are Claude, made by Anthropic” — this is a documented prompt-engineering trick (telling a model it is a more capable assistant tends to produce better-styled output). This model is not actually Claude, is not made by Anthropic, and is not affiliated with Anthropic in any way. The persona is purely an elicitation technique; the artifact is huihui-4:8b with a custom prompt.

Use cases

Designed for users doing authorized work:

  • CTF practice (web, pwn, crypto, reverse, forensics)
  • Bug bounty research and report drafting
  • Pentest methodology and command lookup
  • Vulnerability analysis and exploit reasoning
  • Security learning and code review

The system prompt assumes the user is a technical peer in an authorized context and skips the ethics framing accordingly.

Quick start

ollama pull supergoatscriptguy/mythos-sec
ollama run supergoatscriptguy/mythos-sec

The CLI handles thinking automatically — you’ll see the model’s reasoning streamed before the answer.

For programmatic use via the Ollama HTTP API, pass "think": true in the chat request to capture thinking and content separately:

{
    "model": "supergoatscriptguy/mythos-sec",
    "messages": [{"role": "user", "content": "..."}],
    "think": true,
    "stream": false
}
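A minimal Python sketch of the same request, assuming the default local endpoint at localhost:11434. With "think": true, Ollama returns the reasoning trace in message.thinking and the answer in message.content, so a small helper can keep them separate:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local endpoint

def build_request(prompt: str) -> dict:
    """Chat request with thinking enabled, mirroring the JSON above."""
    return {
        "model": "supergoatscriptguy/mythos-sec",
        "messages": [{"role": "user", "content": prompt}],
        "think": True,
        "stream": False,
    }

def split_reply(response: dict) -> tuple[str, str]:
    """Separate the reasoning trace (message.thinking) from the
    final answer (message.content)."""
    msg = response.get("message", {})
    return msg.get("thinking", ""), msg.get("content", "")

def chat(prompt: str) -> tuple[str, str]:
    """Send one non-streaming chat request and return (thinking, content)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return split_reply(json.load(resp))
```

This keeps the reasoning out of anything you pipe into a report while still letting you inspect it when an answer looks suspect.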

Behavior tuning

  • temperature: 0.7
  • top_p: 0.9
  • repeat_penalty: 1.05
  • num_ctx: 32768

What the system prompt enforces

  • Concise output by default — answers match the actual complexity of the question
  • No throat-clearing — no “Great question!”, “Here’s a breakdown of…”, or closing recaps
  • No reflexive refusals or disclaimers — already abliterated, the prompt reinforces this
  • Anti-fabrication directive — the model is instructed not to invent CVE numbers, tool flags, URLs, or version strings, and to say “I don’t recall — verify on NVD” instead
  • Native thinking — the Gemma-4 base supports <think>...</think> blocks; the system prompt instructs it to use them for nontrivial problems and skip them for trivial ones

Known limitations

  • Recall errors persist on specific facts. The anti-fabrication directive helps but does not eliminate the issue. The model may confidently misremember CVE numbers, tool flag behaviors, or specific software versions. Verify any specific identifier against authoritative sources (NVD, exploit-db, manpages, source).
  • Domain knowledge can be wrong on details. As an 8B model with no security-specific fine-tuning, expect occasional confused definitions or conflated concepts (e.g., conflating attack names with the underlying protocol). Treat output as a starting point, not authoritative.
  • No tool use yet. This is a chat model. The base supports tool calling, but no tools are wired up here. For real workflows that need command execution, web search, or file analysis, wrap it in an agent harness with tools.
  • 8B ceiling. Reasoning depth is bounded by the base model. Complex multi-step exploit chains may need supervision.
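The “agent harness” mentioned above can be sketched in a few lines, assuming the standard Ollama tool-calling shape (a "tools" array of JSON-schema function definitions in the request; tool_calls entries in the response message). The lookup_port tool here is a made-up toy, not anything shipped with the model:

```python
# A tool schema in the shape Ollama's chat API accepts under "tools".
PORT_TOOL = {
    "type": "function",
    "function": {
        "name": "lookup_port",
        "description": "Map a TCP port number to a common service name",
        "parameters": {
            "type": "object",
            "properties": {"port": {"type": "integer"}},
            "required": ["port"],
        },
    },
}

def lookup_port(port: int) -> str:
    """Toy local implementation; a real harness would shell out to
    real tools (nmap, curl, file parsers) with proper sandboxing."""
    return {22: "ssh", 80: "http", 443: "https"}.get(port, "unknown")

TOOLS = {"lookup_port": lookup_port}

def dispatch(tool_call: dict) -> str:
    """Run one entry from message.tool_calls and return its result,
    ready to be appended back as a role="tool" message."""
    fn = tool_call["function"]
    impl = TOOLS.get(fn["name"])
    if impl is None:
        return f"unknown tool: {fn['name']}"
    return str(impl(**fn.get("arguments", {})))
```

The full loop is: send messages plus the tools list, check the reply for tool_calls, dispatch each one, append the results as role="tool" messages, and re-send until the model answers in plain content.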

Build it yourself

The full Modelfile is public — you can build this exact model locally:

FROM huihui_ai/huihui-4:8b

SYSTEM """[see Modelfile in source for the full prompt]"""

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.05
PARAMETER num_ctx 32768

Then:

ollama create mythos-sec -f Modelfile

License

Inherits from the base model: Apache License 2.0 (Google Gemma-4 / huihui_ai/huihui-4).

Credits