See axolotl config

axolotl version: `0.6.0`

```yaml
base_model: /workspace/axolotl/in
# optionally might have model_type or tokenizer_type
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

# Automatically upload checkpoint and final model to HF
hub_model_id: AiAF/UFOs-Finetune-V1.1

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: json
    data_files: plain_qa_list.jsonl
    ds_type: json
    type: chat_template
    chat_template: chatml
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    roles:
      user:
        - human
      assistant:
        - gpt
      system:
        - system

dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/UFOs-Finetune-V1.1/out

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
max_steps: 100000

wandb_project: "UFO_LLM_Finetune"
wandb_entity:
wandb_watch: "all"
wandb_name: "UFO_LLM_Finetune-V1.1"
wandb_log_model: "false"

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 10
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
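The `datasets` entry above expects ShareGPT-style records: each line of `plain_qa_list.jsonl` is a JSON object with a `conversations` list whose messages use `from`/`value` keys and the `human`/`gpt`/`system` roles mapped in the config, rendered through the ChatML template at training time. A minimal sketch of one such record (the Q&A text itself is a placeholder, not taken from the training set):

```python
# Writes one illustrative record in the shape the dataset config expects:
# field_messages=conversations, role key "from", content key "value".
import json

record = {
    "conversations": [
        {"from": "system", "value": "You are a helpful assistant."},
        {"from": "human", "value": "Example question goes here."},
        {"from": "gpt", "value": "Example answer goes here."},
    ]
}

with open("plain_qa_list.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```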
# UFOs-Finetune-V1.1

This model was fine-tuned from a local base model (`/workspace/axolotl/in`) on the json dataset. It achieves the following results on the evaluation set:

- Loss: 1.7367
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes, `adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
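The reported total_train_batch_size follows from the values above: 2 (train_batch_size) × 4 (gradient_accumulation_steps) × 1 GPU = 8; with sample packing at sequence_len 8192, each optimizer step therefore sees about 8 × 8192 = 65,536 tokens. A quick sketch of that arithmetic (world_size = 1 is an assumption; the card does not state the GPU count):

```python
# Effective batch size per optimizer step, from the hyperparameters above.
micro_batch_size = 2             # train_batch_size above
gradient_accumulation_steps = 4
world_size = 1                   # assumed single GPU; not stated in the card
sequence_len = 8192              # from the axolotl config, with sample packing

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * world_size
tokens_per_step = total_train_batch_size * sequence_len

print(total_train_batch_size)  # 8, matching the reported value
print(tokens_per_step)         # 65536 tokens per packed optimizer step
```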
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5018        | 0.1739 | 1    | 1.5418          |
| 1.4881        | 0.3478 | 2    | 1.5242          |
| 1.4127        | 0.6957 | 4    | 1.4386          |
| 1.3943        | 1.0    | 6    | 1.3957          |
| 1.3169        | 1.3478 | 8    | 1.3707          |
| 1.2603        | 1.6957 | 10   | 1.3561          |
| 1.2147        | 2.0    | 12   | 1.3535          |
| 1.0719        | 2.3478 | 14   | 1.3740          |
| 0.9741        | 2.6957 | 16   | 1.3890          |
| 1.024         | 3.0    | 18   | 1.4040          |
| 0.823         | 3.3478 | 20   | 1.4536          |
| 0.7372        | 3.6957 | 22   | 1.5242          |
| 0.7555        | 4.0    | 24   | 1.5201          |
| 0.622         | 4.3478 | 26   | 1.5416          |
| 0.5762        | 4.6957 | 28   | 1.5996          |
| 0.5535        | 5.0    | 30   | 1.6379          |
| 0.4547        | 5.3478 | 32   | 1.6690          |
| 0.4487        | 5.6957 | 34   | 1.6886          |
| 0.4435        | 6.0    | 36   | 1.6949          |
| 0.3969        | 6.3478 | 38   | 1.7070          |
| 0.3988        | 6.6957 | 40   | 1.7213          |
| 0.3917        | 7.0    | 42   | 1.7302          |
| 0.3746        | 7.3478 | 44   | 1.7348          |
| 0.3451        | 7.6957 | 46   | 1.7361          |
| 0.3513        | 8.0    | 48   | 1.7368          |
| 0.3572        | 8.3478 | 50   | 1.7367          |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
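A minimal inference sketch against the published checkpoint, assuming the ChatML chat template was saved with the tokenizer (the prompt text is a placeholder; `max_new_tokens=128` mirrors `eval_max_new_tokens` from the config):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AiAF/UFOs-Finetune-V1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Example question goes here."}]
# apply_chat_template renders the ChatML prompt and tokenizes it
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```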