
Model Card for nidra-v2

dream interpreter - balanced model

Model Details

Model Description

nidra-v2 has been fine-tuned on a dataset of 1,600 input-output pairs to interpret dreams, producing an overall summary of the dream's meaning.

  • Developed by: M1K3wn
  • Model type: Seq2Seq
  • Language(s) (NLP): English
  • License: [More Information Needed]
  • Finetuned from model: google/flan-t5-base

Model Sources

  • Repository: TBC

Uses

The model is intended to generate balanced, stable dream interpretations.

Direct Use

Generates an 80-120 word interpretation of a dream when prompted with: "Interpret this dream: < input dream >"

Out-of-Scope Use

The model is unlikely to perform well on tasks other than the dream interpretation it was trained for.

Bias, Risks, and Limitations

This model is experimental, and its output should not be treated as advice by anyone, anywhere, ever.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

How to Get Started with the Model

"Interpret this dream: < input dream >"

Training Details

Training Data

All training data was synthetically generated: 1,600 input-output pairs. Example pair:

"input": "Interpret this dream: The front yard, once a place of laughter and play, now felt vast and barren, as though it had lost its sense of life and joy."

"target": "The barren yard represents a sense of emotional emptiness or disconnection from something that once brought you happiness. The transformation of the yard from a space of joy to one of emptiness reflects how familiar aspects of your life can feel lifeless or distant as you change. This dream invites you to examine what might have caused this shift in your emotional landscape and how you can reconnect with the vitality that once filled this space."

Training Procedure

Largely exploratory: LoRA parameters and training hyperparameters were refined iteratively across runs.

Training Hyperparameters

  • train_ratio: 0.9
  • batch_size: 2
  • num_epochs: 6 / epochs: 5
  • gradient_accumulation_steps: 6 / gradient_accumulation: 8
  • learning_rate: 5e-4
  • weight_decay: 0.012
  • warmup_steps: 40
  • lr_scheduler_type: "cosine"
  • max_grad_norm: 0.5

LoRA:

  • lora_r: 8
  • lora_alpha: 16
  • dropout: 0.1
  • attention_layers: "q", "k", "v", "o"

  • Training regime: fp16: False (mixed precision disabled), bf16: False (bfloat16 disabled), no_cuda: True (CUDA not used), use_mps_device: True (train with the PyTorch MPS backend)
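A sketch of how the hyperparameters, LoRA settings, and training regime listed above might map onto peft.LoraConfig and transformers.Seq2SeqTrainingArguments; where the card lists two values (epochs, gradient accumulation), one is chosen for illustration, and the output directory is a placeholder:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q", "k", "v", "o"],  # T5 attention projections
)

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = get_peft_model(base_model, lora_config)

training_args = Seq2SeqTrainingArguments(
    output_dir="./nidra-v2",          # placeholder path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=6,
    num_train_epochs=6,
    learning_rate=5e-4,
    weight_decay=0.012,
    warmup_steps=40,
    lr_scheduler_type="cosine",
    max_grad_norm=0.5,
    fp16=False,                       # mixed precision disabled
    bf16=False,                       # bfloat16 disabled
    use_mps_device=True,              # train on the PyTorch MPS backend
    # The card also sets no_cuda: True to ensure CUDA is not used.
)
```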

Framework versions

  • PEFT 0.14.0