Dataset: chuckchen/tokenizer-vocab
Likes: 1
License: creativeml-openrail-m
tokenizer-vocab (branch: main)
1 contributor, 2 commits
Latest commit 7ab99c9 by chuckchen, over 1 year ago: "Add vocabulary dumped from various tokenizers together with processing scripts"
All files were last committed over 1 year ago. Files marked 7ab99c9 were added in the commit "Add vocabulary dumped from various tokenizers together with processing scripts".

File                                     Size        Commit
.gitattributes                           2.27 kB     initial commit
README.md                                39 Bytes    initial commit
dump_tiktoken.py                         856 Bytes   7ab99c9
encoder_vocab_bloom_7b1.json             7.57 MB     7ab99c9
encoder_vocab_chinese_alpaca_7b.json     1.05 MB     7ab99c9
encoder_vocab_chinese_alpaca_7b_f.json   238 kB      7ab99c9
encoder_vocab_chinese_llama_7b.json      1.05 MB     7ab99c9
encoder_vocab_chinese_llama_7b_f.json    238 kB      7ab99c9
encoder_vocab_cl100k_base.json           2.13 MB     7ab99c9
encoder_vocab_cl100k_base_f.json         16 kB       7ab99c9
encoder_vocab_ernie-1.0-base-zh.json     332 kB      7ab99c9
encoder_vocab_ernie-3.0-base-zh.json     735 kB      7ab99c9
encoder_vocab_gpt-j-6B.json              1.1 MB      7ab99c9
encoder_vocab_llama_7b_hf.json           680 kB      7ab99c9
encoder_vocab_llama_7b_hf_f.json         12.6 kB     7ab99c9
encoder_vocab_p50k_base.json             1.06 MB     7ab99c9
encoder_vocab_p50k_base_f.json           802 Bytes   7ab99c9
encoder_vocab_r50k_base.json             1.06 MB     7ab99c9
encoder_vocab_r50k_base_f.json           802 Bytes   7ab99c9
extract.py                               478 Bytes   7ab99c9
filter_zh.py                             955 Bytes   7ab99c9
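Judging by the file names, the `_f` variants appear to be filtered subsets of the full vocabulary dumps, and `filter_zh.py` suggests filtering for Chinese tokens. The following is a minimal, hypothetical sketch of that kind of filtering step; the vocabulary format (a JSON object mapping token text to token id) and the CJK-only criterion are assumptions, not the repository's actual implementation.

```python
import json

def is_cjk(ch: str) -> bool:
    """True if the character lies in the main CJK Unified Ideographs block."""
    return "\u4e00" <= ch <= "\u9fff"

def filter_chinese(vocab: dict) -> dict:
    """Keep only vocabulary entries whose token text contains a CJK character."""
    return {tok: idx for tok, idx in vocab.items() if any(is_cjk(c) for c in tok)}

if __name__ == "__main__":
    # Toy vocabulary standing in for one of the encoder_vocab_*.json dumps.
    vocab = {"hello": 0, "世界": 1, "wor": 2, "中": 3}
    filtered = filter_chinese(vocab)
    print(json.dumps(filtered, ensure_ascii=False))
```

A script like this would read one of the full `encoder_vocab_*.json` files and write the corresponding `*_f.json` subset; the much smaller sizes of the `_f` files (e.g. 16 kB vs. 2.13 MB for cl100k_base) are consistent with that kind of reduction.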