TCYZ/nano:1.2m

11 pulls · Updated 2 months ago

The model is currently in beta. Because of the limits that come with 1M parameters, degradation may occur on complex sentence structures. Development is ongoing.

Tags: 1m · 1.2m
810d4b136236 · 4.3MB
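
To try the model locally, one option is the Ollama Python client. A minimal sketch, assuming the model is pulled under the tag shown above; the prompt is illustrative (the model card is in Turkish, so a Turkish prompt is assumed sensible), and the context window is only 64 tokens per the metadata below:

```python
# Minimal sketch using the ollama Python client (pip install ollama).
# Prompt and usage are illustrative, not from the model card.
import ollama

ollama.pull("TCYZ/nano:1.2m")          # fetches the 4.3MB blob
resp = ollama.generate(
    model="TCYZ/nano:1.2m",
    prompt="Merhaba",                  # assumption: Turkish-trained model
    options={"num_ctx": 64},           # matches llama.context_length below
)
print(resp["response"])
```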
Metadata
• general.architecture: llama
• llama.attention.head_count: 4
• llama.attention.head_count_kv: 4
• llama.attention.layer_norm_rms_epsilon: 1e-06
• llama.block_count: 5
• llama.context_length: 64
• llama.embedding_length: 128
• llama.feed_forward_length: 256
• llama.rope.dimension_count: 32
• tokenizer.ggml.bos_token_id: 0
• tokenizer.ggml.eos_token_id: 2
• tokenizer.ggml.merges: [Ä ±, o r, Ġ b, a n, e r, ...]
• tokenizer.ggml.model: gpt2
• tokenizer.ggml.padding_token_id: 1
• tokenizer.ggml.tokens: [<s>, <pad>, </s>, <unk>, <mask>, ...]
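
These fields pin down the full architecture, so the parameter count can be checked by hand. A quick sketch of that arithmetic; the vocab size of 1000 is read off the token_embd.weight shape in the tensor table below:

```python
# Back-of-the-envelope parameter count from the metadata above.
# vocab=1000 is taken from the token_embd.weight shape [128, 1000].
d, ffn, layers, vocab = 128, 256, 5, 1000

embed = d * vocab                      # token_embd.weight
head  = d * vocab                      # output.weight (untied)
attn  = 4 * d * d                      # q, k, v, output projections
ffn_w = 3 * d * ffn                    # gate, up, down projections
norms = 2 * d                          # attn_norm + ffn_norm per block
block = attn + ffn_w + norms           # 164,096 per block

total = embed + head + layers * block + d  # + final output_norm
print(total)                           # 1,076,608 parameters
print(total * 4 / 1e6)                 # ~4.3 MB at F32 (4 bytes each)
```

That lands at roughly 1.08M parameters, consistent with the ~1M description and the 4.3MB F32 blob size.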
Tensor
• token_embd.weight: F32 [128, 1000]

blk.0
• blk.0.attn_k.weight: F32 [128, 128]
• blk.0.attn_norm.weight: F32 [128]
• blk.0.attn_output.weight: F32 [128, 128]
• blk.0.attn_q.weight: F32 [128, 128]
• blk.0.attn_v.weight: F32 [128, 128]
• blk.0.ffn_down.weight: F32 [256, 128]
• blk.0.ffn_gate.weight: F32 [128, 256]
• blk.0.ffn_norm.weight: F32 [128]
• blk.0.ffn_up.weight: F32 [128, 256]

blk.1
• blk.1.attn_k.weight: F32 [128, 128]
• blk.1.attn_norm.weight: F32 [128]
• blk.1.attn_output.weight: F32 [128, 128]
• blk.1.attn_q.weight: F32 [128, 128]
• blk.1.attn_v.weight: F32 [128, 128]
• blk.1.ffn_down.weight: F32 [256, 128]
• blk.1.ffn_gate.weight: F32 [128, 256]
• blk.1.ffn_norm.weight: F32 [128]
• blk.1.ffn_up.weight: F32 [128, 256]

blk.2
• blk.2.attn_k.weight: F32 [128, 128]
• blk.2.attn_norm.weight: F32 [128]
• blk.2.attn_output.weight: F32 [128, 128]
• blk.2.attn_q.weight: F32 [128, 128]
• blk.2.attn_v.weight: F32 [128, 128]
• blk.2.ffn_down.weight: F32 [256, 128]
• blk.2.ffn_gate.weight: F32 [128, 256]
• blk.2.ffn_norm.weight: F32 [128]
• blk.2.ffn_up.weight: F32 [128, 256]

blk.3
• blk.3.attn_k.weight: F32 [128, 128]
• blk.3.attn_norm.weight: F32 [128]
• blk.3.attn_output.weight: F32 [128, 128]
• blk.3.attn_q.weight: F32 [128, 128]
• blk.3.attn_v.weight: F32 [128, 128]
• blk.3.ffn_down.weight: F32 [256, 128]
• blk.3.ffn_gate.weight: F32 [128, 256]
• blk.3.ffn_norm.weight: F32 [128]
• blk.3.ffn_up.weight: F32 [128, 256]

blk.4
• blk.4.attn_k.weight: F32 [128, 128]
• blk.4.attn_norm.weight: F32 [128]
• blk.4.attn_output.weight: F32 [128, 128]
• blk.4.attn_q.weight: F32 [128, 128]
• blk.4.attn_v.weight: F32 [128, 128]
• blk.4.ffn_down.weight: F32 [256, 128]
• blk.4.ffn_gate.weight: F32 [128, 256]
• blk.4.ffn_norm.weight: F32 [128]
• blk.4.ffn_up.weight: F32 [128, 256]

• output.weight: F32 [128, 1000]
• output_norm.weight: F32 [128]
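
The listing above can be reproduced locally with the gguf Python package that ships with llama.cpp. A sketch, assuming the blob has been exported to a local .gguf file (the path is hypothetical):

```python
# Dump metadata keys and the tensor table from a local GGUF file.
# Path is hypothetical; requires `pip install gguf` (from llama.cpp).
from gguf import GGUFReader

reader = GGUFReader("nano-1.2m.gguf")

for name in reader.fields:             # metadata keys, e.g. llama.block_count
    print(name)

for t in reader.tensors:               # name, dtype, shape as listed above
    print(t.name, t.tensor_type.name, list(t.shape))
```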