# MiniMind Pretrained Model

A Chinese language model pretrained on a Chinese text corpus.
## Model Details
- Architecture: Transformer
- Parameters: 26.878M
- Dimensions: 512
- Layers: 8
- Attention Heads: 8
- Vocabulary Size: 32000
- Max Sequence Length: 1024
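
For orientation, these hyperparameters describe a small decoder-only Transformer. The sketch below expresses them with `transformers.LlamaConfig` purely as an illustration; MiniMind defines its own model and config classes, so the field names here are assumptions, and whether they reproduce the stated 26.878M parameters depends on details (feed-forward width, embedding tying) that this card does not specify.

```python
from transformers import LlamaConfig

# Illustrative only: a LLaMA-style config mirroring the dimensions listed above.
# The actual MiniMind config class and remaining hyperparameters may differ.
config = LlamaConfig(
    hidden_size=512,              # model dimension
    num_hidden_layers=8,          # Transformer layers
    num_attention_heads=8,        # attention heads per layer
    vocab_size=32000,             # tokenizer vocabulary size
    max_position_embeddings=1024, # max sequence length
)
```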
## Training Data
- Pretrained on Chinese text corpus
- Dataset size: 4.33GB
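
As a rough illustration of how such a corpus is typically consumed during pretraining, the sketch below tokenizes raw text and packs it into fixed-length 1024-token samples. The file name and format are hypothetical; this card does not describe MiniMind's actual preprocessing pipeline.

```python
from transformers import AutoTokenizer

# Hypothetical preprocessing sketch: pack a raw-text corpus into
# 1024-token training samples. The real pipeline may differ.
tokenizer = AutoTokenizer.from_pretrained("samz/minimind-pretrain")
max_len = 1024

def pack_corpus(path):
    samples, buffer = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            buffer.extend(tokenizer.encode(line.strip()))
            while len(buffer) >= max_len:
                samples.append(buffer[:max_len])
                buffer = buffer[max_len:]
    return samples

# train_samples = pack_corpus("pretrain_corpus.txt")  # hypothetical file name
```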
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
# (add trust_remote_code=True if the repository ships custom model code).
model = AutoModelForCausalLM.from_pretrained("samz/minimind-pretrain")
tokenizer = AutoTokenizer.from_pretrained("samz/minimind-pretrain")

# Encode a Chinese prompt and let the model continue it
text = "今天天气真不错"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)

result = tokenizer.decode(outputs[0])
print(result)
```
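
Since this is a pretrain-only base checkpoint, it performs free-form text continuation rather than instruction following. Sampling parameters can be tuned for more varied continuations; the values below are illustrative defaults, not settings recommended by the model author.

```python
# Illustrative sampling settings; adjust to taste.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```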