TCYZ/eng:1.8m
14 Downloads · Updated 1 month ago
Our first English model.
Tags: 1m, 1.8m, 3m
eng:1.8m
model · 52100e85de5a · 7.3MB
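This page follows the layout of an Ollama-style registry, so the tag above should be enough to fetch and query the model. A minimal sketch using the official ollama Python client, under the assumption that this registry is Ollama-compatible and that the TCYZ/eng:1.8m tag resolves there:

    # Sketch, assuming an Ollama-compatible registry and a local Ollama
    # server; requires `pip install ollama`.
    import ollama

    ollama.pull("TCYZ/eng:1.8m")  # fetches the 7.3MB model blob above

    # With only 256 tokens of context and ~1.8M parameters, expect toy output.
    response = ollama.generate(model="TCYZ/eng:1.8m", prompt="Hello")
    print(response["response"])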
Metadata

general.architecture                      llama
llama.attention.head_count                1
llama.attention.head_count_kv             1
llama.attention.layer_norm_rms_epsilon    1e-06
llama.block_count                         2
llama.context_length                      256
llama.embedding_length                    256
llama.feed_forward_length                 512
llama.rope.dimension_count                256
tokenizer.ggml.bos_token_id               0
tokenizer.ggml.eos_token_id               2
tokenizer.ggml.merges                     [Ġ i, Ġ a, e r, o n, s t, ...]
tokenizer.ggml.model                      gpt2
tokenizer.ggml.padding_token_id           1
tokenizer.ggml.tokens                     [<s>, <pad>, </s>, <unk>, <mask>, ...]
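These keys live in the GGUF header, so they can be inspected offline once the blob is downloaded. A sketch using the gguf Python package that ships with llama.cpp, assuming the downloaded blob is a standard GGUF file (the local path is hypothetical):

    # Sketch for listing the metadata keys and tensor records above.
    # Assumes a standard GGUF blob and `pip install gguf` (the reader
    # package maintained in the llama.cpp repository).
    from gguf import GGUFReader

    reader = GGUFReader("eng-1.8m.gguf")  # hypothetical local path

    # Header key/value pairs, e.g. llama.block_count, llama.context_length.
    # Decoding each value from a ReaderField takes extra work, so this
    # just lists the key names.
    for name in reader.fields:
        print(name)

    # Tensor records mirror the table below: name, dtype, shape.
    for t in reader.tensors:
        print(t.name, t.tensor_type.name, list(t.shape))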
Tensor

Name                        Type    Shape
token_embd.weight           F32     [256, 1001]

blk.0
blk.0.attn_k.weight         F32     [256, 256]
blk.0.attn_norm.weight      F32     [256]
blk.0.attn_output.weight    F32     [256, 256]
blk.0.attn_q.weight         F32     [256, 256]
blk.0.attn_v.weight         F32     [256, 256]
blk.0.ffn_down.weight       F32     [512, 256]
blk.0.ffn_gate.weight       F32     [256, 512]
blk.0.ffn_norm.weight       F32     [256]
blk.0.ffn_up.weight         F32     [256, 512]

blk.1
blk.1.attn_k.weight         F32     [256, 256]
blk.1.attn_norm.weight      F32     [256]
blk.1.attn_output.weight    F32     [256, 256]
blk.1.attn_q.weight         F32     [256, 256]
blk.1.attn_v.weight         F32     [256, 256]
blk.1.ffn_down.weight       F32     [512, 256]
blk.1.ffn_gate.weight       F32     [256, 512]
blk.1.ffn_norm.weight       F32     [256]
blk.1.ffn_up.weight         F32     [256, 512]

output.weight               F32     [256, 1001]
output_norm.weight          F32     [256]
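The tensor shapes above are enough to check both the 1.8m tag and the 7.3MB blob size by hand. A small self-contained computation, with shapes copied from the table and no assumption beyond 4 bytes per F32 element:

    # Parameter count from the tensor shapes listed above. The two
    # transformer blocks have identical shapes, so count one block and
    # multiply by llama.block_count = 2.
    embd, vocab, ffn = 256, 1001, 512

    per_block = (
        4 * embd * embd    # attn_q, attn_k, attn_v, attn_output
        + 3 * embd * ffn   # ffn_gate, ffn_up, ffn_down
        + 2 * embd         # attn_norm, ffn_norm
    )
    total = (
        embd * vocab       # token_embd.weight
        + 2 * per_block    # blk.0 and blk.1
        + embd * vocab     # output.weight (not tied to the embedding)
        + embd             # output_norm.weight
    )

    print(total)            # 1824512 -> matches the "1.8m" tag
    print(total * 4 / 1e6)  # ~7.3 MB at 4 bytes per F32 weight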