TCYZ/hayati:5m

15 Downloads · Updated 2 months ago
Tags: 0.4m, 5m, 13m
hayati:5m

model · 8f4214988a1a · 21MB
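To pull and query this tag locally, something like the following should work; a minimal sketch, assuming the official `ollama` Python client and a running Ollama server. With 2 blocks, 48-dimensional embeddings, and a 64-token context, this is a toy-sized model, so the point is the mechanics, not useful output.

```python
# Minimal sketch, assuming the official `ollama` Python client
# (pip install ollama) and a local Ollama server.
import ollama

# Pull the tag shown on this page.
ollama.pull("TCYZ/hayati:5m")

# llama.context_length is only 64 tokens, so keep the prompt very short.
response = ollama.generate(model="TCYZ/hayati:5m", prompt="Hello")
print(response["response"])
```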
Metadata

general.architecture                      llama
llama.attention.head_count                3
llama.attention.head_count_kv             3
llama.attention.layer_norm_rms_epsilon    1e-05
llama.block_count                         2
llama.context_length                      64
llama.embedding_length                    48
llama.feed_forward_length                 128
tokenizer.ggml.model                      llama
tokenizer.ggml.scores                     [0, 0, 0, 0, 0, ...]
tokenizer.ggml.token_type                 [1, 1, 1, 1, 1, ...]
tokenizer.ggml.tokens                     [!, ", #, $, %, ...]
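These are standard GGUF key/value pairs (note head_count equals head_count_kv, i.e. plain multi-head attention with 16-dimensional heads, 48 / 3), so they can be read back programmatically. A minimal sketch, assuming the `gguf` Python package published from llama.cpp's gguf-py; the filename is hypothetical:

```python
# Minimal sketch, assuming the `gguf` package from llama.cpp's gguf-py
# (pip install gguf) and a locally exported copy of the model blob.
from gguf import GGUFReader

reader = GGUFReader("hayati-5m.gguf")

# Keys match the Metadata table above (general.*, llama.*, tokenizer.ggml.*).
for name in reader.fields:
    print(name)

# Tensors match the Tensor table below: name, dtype, shape.
for t in reader.tensors:
    print(t.name, t.tensor_type.name, list(t.shape))
```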
Tensor

Name                         Type    Shape
token_embd.weight            F32     [48, 50257]

blk.0
blk.0.attn_k.weight          F32     [48, 48]
blk.0.attn_norm.weight       F32     [48]
blk.0.attn_output.weight     F32     [48, 48]
blk.0.attn_q.weight          F32     [48, 48]
blk.0.attn_v.weight          F32     [48, 48]
blk.0.ffn_down.weight        F32     [128, 48]
blk.0.ffn_gate.weight        F32     [48, 128]
blk.0.ffn_norm.weight        F32     [48]
blk.0.ffn_up.weight          F32     [48, 128]

blk.1
blk.1.attn_k.weight          F32     [48, 48]
blk.1.attn_norm.weight       F32     [48]
blk.1.attn_output.weight     F32     [48, 48]
blk.1.attn_q.weight          F32     [48, 48]
blk.1.attn_v.weight          F32     [48, 48]
blk.1.ffn_down.weight        F32     [128, 48]
blk.1.ffn_gate.weight        F32     [48, 128]
blk.1.ffn_norm.weight        F32     [48]
blk.1.ffn_up.weight          F32     [48, 128]

output.weight                F32     [48, 50257]
output_norm.weight           F32     [48]
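The shapes above are enough to check the 5m tag by hand: two transformer blocks of width 48 with a 128-wide FFN, plus separate (untied) embedding and output matrices over a 50257-token vocabulary. Plain arithmetic, with no assumptions beyond the table:

```python
# Parameter count derived from the tensor shapes listed above.
vocab, d_model, d_ff, n_blocks = 50257, 48, 128, 2

embed = vocab * d_model        # token_embd.weight
head = vocab * d_model         # output.weight (not tied to the embedding)
attn = 4 * d_model * d_model   # attn_q, attn_k, attn_v, attn_output
ffn = 3 * d_model * d_ff       # ffn_gate, ffn_up, ffn_down
norms = 2 * d_model            # attn_norm + ffn_norm
block = attn + ffn + norms

total = embed + head + n_blocks * block + d_model  # last term: output_norm.weight
print(f"{total:,} parameters")              # 4,880,208, matching the 5m tag
print(f"~{total * 4 / 1e6:.1f} MB at F32")  # ~19.5 MB; the 21MB blob also holds
                                            # GGUF headers and the tokenizer arrays
```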