TCYZ/cokertme:28m
71 Downloads · Updated 2 months ago
Filling a strategic gap in Türkiye's artificial-intelligence push, the Çökertme series was developed, in contrast to massive models, under the motto of "intelligence that runs on every device". Offered as part of the TCYZ project, this family can run on even the smallest hardware.
Tags: 1m · 1.6m · 6.7m · 28m · 57m
cokertme:28m
model · 1b3cffc7bbd9 · 117MB
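To try it locally, here is a minimal sketch using the official ollama Python client (pip install ollama), assuming the model can be pulled from this registry as tcyz/cokertme:28m; the exact pull name and the prompt are illustrative assumptions:

```python
# A minimal sketch, assuming `ollama pull tcyz/cokertme:28m` has already
# fetched the 28m tag from this registry.
import ollama

response = ollama.generate(
    model="tcyz/cokertme:28m",                        # name/tag from this page
    prompt="Merhaba! Kendini kısaca tanıtır mısın?",  # illustrative prompt
)
print(response["response"])
```

Note that the context window is only 256 tokens (see the metadata below), so prompts have to stay short.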
Metadata
general.architecture                      llama
llama.attention.head_count                4
llama.attention.head_count_kv             4
llama.attention.layer_norm_rms_epsilon    1e-05
llama.block_count                         4
llama.context_length                      256
llama.embedding_length                    256
llama.feed_forward_length                 682
tokenizer.ggml.model                      llama
tokenizer.ggml.scores                     [0, 0, 0, 0, 0, ...]
tokenizer.ggml.token_type                 [1, 1, 1, 1, 1, ...]
tokenizer.ggml.tokens                     [!, ", #, $, %, ...]
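The geometry above fixes a few quantities worth knowing before deployment. A pure-arithmetic sketch, assuming the standard llama-architecture relation head_dim = embedding_length / head_count and a worst-case FP32 KV cache (both are assumptions; neither is stored explicitly in the metadata):

```python
# Values copied from the metadata table above.
embedding_length = 256
head_count = 4
head_count_kv = 4
block_count = 4
context_length = 256

# Assumed llama-architecture relation; not stored explicitly in the file.
head_dim = embedding_length // head_count  # 64

# Worst-case KV cache at full context: K and V per layer,
# each [context_length, head_count_kv * head_dim], assuming FP32 (4 bytes).
kv_bytes = 2 * block_count * context_length * head_count_kv * head_dim * 4
print(f"head_dim={head_dim}, full-context KV cache={kv_bytes / 2**20:.1f} MiB")
# -> head_dim=64, full-context KV cache=2.0 MiB
```

Even at full context the runtime state stays in the low megabytes, which is consistent with the "runs on every device" positioning.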
Tensor

Name                        Type    Shape
token_embd.weight           F32     [256, 50257]
blk.0.attn_k.weight         F32     [256, 256]
blk.0.attn_norm.weight      F32     [256]
blk.0.attn_output.weight    F32     [256, 256]
blk.0.attn_q.weight         F32     [256, 256]
blk.0.attn_v.weight         F32     [256, 256]
blk.0.ffn_down.weight       F32     [682, 256]
blk.0.ffn_gate.weight       F32     [256, 682]
blk.0.ffn_norm.weight       F32     [256]
blk.0.ffn_up.weight         F32     [256, 682]
blk.1.attn_k.weight         F32     [256, 256]
blk.1.attn_norm.weight      F32     [256]
blk.1.attn_output.weight    F32     [256, 256]
blk.1.attn_q.weight         F32     [256, 256]
blk.1.attn_v.weight         F32     [256, 256]
blk.1.ffn_down.weight       F32     [682, 256]
blk.1.ffn_gate.weight       F32     [256, 682]
blk.1.ffn_norm.weight       F32     [256]
blk.1.ffn_up.weight         F32     [256, 682]
blk.2.attn_k.weight         F32     [256, 256]
blk.2.attn_norm.weight      F32     [256]
blk.2.attn_output.weight    F32     [256, 256]
blk.2.attn_q.weight         F32     [256, 256]
blk.2.attn_v.weight         F32     [256, 256]
blk.2.ffn_down.weight       F32     [682, 256]
blk.2.ffn_gate.weight       F32     [256, 682]
blk.2.ffn_norm.weight       F32     [256]
blk.2.ffn_up.weight         F32     [256, 682]
blk.3.attn_k.weight         F32     [256, 256]
blk.3.attn_norm.weight      F32     [256]
blk.3.attn_output.weight    F32     [256, 256]
blk.3.attn_q.weight         F32     [256, 256]
blk.3.attn_v.weight         F32     [256, 256]
blk.3.ffn_down.weight       F32     [682, 256]
blk.3.ffn_gate.weight       F32     [256, 682]
blk.3.ffn_norm.weight       F32     [256]
blk.3.ffn_up.weight         F32     [256, 682]
output.weight               F32     [256, 50257]
output_norm.weight          F32     [256]
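As a sanity check on the 28m tag, summing the shapes in the table above reproduces the parameter count; every size in this sketch is copied from the table, nothing else is assumed:

```python
from math import prod

# Tensors outside the repeated blocks: token_embd, output, output_norm.
top_level = [(256, 50257), (256, 50257), (256,)]

# The nine tensors each of the 4 blocks repeats.
per_block = [
    (256, 256), (256, 256), (256, 256), (256, 256),  # attn q/k/v/output
    (256,), (256,),                                  # attn_norm, ffn_norm
    (256, 682), (256, 682), (682, 256),              # ffn gate/up/down
]

total = sum(prod(s) for s in top_level) + 4 * sum(prod(s) for s in per_block)
print(f"{total:,} parameters")             # 28,877,568 -> the '28m' tag
print(f"{total * 4 / 1e6:.0f} MB as F32")  # ~116 MB, close to the 117MB blob
```

Note the separate output.weight alongside token_embd.weight: the embeddings are untied, and those two 256 × 50257 matrices account for roughly 89% of the parameters in a model this small.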