Purpose of this finetuning

Finetune the base model GPT2-IMDB using this BERT sentiment classifier as a reward function.

  • The goal is to train the GPT2 model to continue a movie-review prompt and generate text with negative sentiment.
  • A separate training run produces positive movie reviews. The eventual goal is to interpolate the weight spaces of the 'positively finetuned' and 'negatively finetuned' models, as in the rewarded-soups paper, and test whether this yields (qualitatively) neutral reviews.
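The reward signal sketched above can be derived from the sentiment classifier's output logits. A minimal sketch, assuming the classifier returns raw logits for a positive and a negative class (the function names here are hypothetical, not from the actual training code):

```python
import math

def sentiment_reward(neg_logit: float, pos_logit: float) -> float:
    """Scalar PPO reward: the raw logit the classifier assigns to the
    NEGATIVE class. Higher reward = text reads as more negative."""
    return neg_logit

def sentiment_reward_logprob(neg_logit: float, pos_logit: float) -> float:
    """Alternative: log-probability of the negative class, which bounds
    the reward at 0 from above. Computed with a stable log-softmax."""
    m = max(neg_logit, pos_logit)
    log_z = m + math.log(math.exp(neg_logit - m) + math.exp(pos_logit - m))
    return neg_logit - log_z  # log p(negative)
```

Using the raw logit (as in common TRL sentiment examples) gives an unbounded signal; the log-probability variant saturates once the classifier is already confident.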

Model Params

Here are the training parameters:

  • base_model ='lvwerra/gpt2-imdb'
  • dataset = stanfordnlp/imdb
  • batch_size = 16
  • learning_rate = 1.41e-5
  • output_max_length = 16
  • output_min_length = 4
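Under these settings, each PPO epoch roughly follows the loop below. This is a minimal sketch, not the actual training script: `generate`, `classify`, and `ppo_step` are hypothetical stand-ins for the GPT2 policy's generation call, the BERT sentiment classifier, and the TRL PPO update.

```python
import random

def ppo_epoch(prompts, generate, classify, ppo_step,
              batch_size=16, min_len=4, max_len=16):
    """One pass over the prompts, mirroring the settings above.
    generate(prompt, n)  -> continuation of ~n tokens (stand-in)
    classify(text)       -> dict with a 'negative' score (stand-in)
    ppo_step(p, r, w)    -> PPO update, returns stats (stand-in)"""
    stats = []
    for i in range(0, len(prompts), batch_size):
        batch = prompts[i:i + batch_size]
        # Sample a target continuation length per example (4-16 tokens),
        # matching output_min_length / output_max_length.
        lengths = [random.randint(min_len, max_len) for _ in batch]
        responses = [generate(p, n) for p, n in zip(batch, lengths)]
        # Reward = classifier's score for the NEGATIVE class on the
        # full prompt + continuation.
        rewards = [classify(p + r)["negative"]
                   for p, r in zip(batch, responses)]
        stats.append(ppo_step(batch, responses, rewards))
    return stats
```

Sampling the continuation length per example (rather than fixing it) is the usual way these min/max length parameters are used in TRL-style sentiment finetuning.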

Training time was not recorded precisely, but it took under a couple of hours on a single A6000 GPU.
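The weight-space interpolation described in the Purpose section can be sketched as a plain linear mix of the two finetuned models' state dicts (rewarded-soups style). Scalars stand in for tensors here for clarity; with real GPT2 checkpoints each value would be a weight tensor:

```python
def interpolate_weights(sd_neg, sd_pos, lam=0.5):
    """Linear interpolation between two state dicts that share the same
    base architecture: lam * negative-model + (1 - lam) * positive-model.
    lam=0.5 gives an even blend, the candidate for 'neutral' reviews."""
    assert sd_neg.keys() == sd_pos.keys(), "models must share parameters"
    return {k: lam * sd_neg[k] + (1.0 - lam) * sd_pos[k] for k in sd_neg}
```

Because both models were finetuned from the same `lvwerra/gpt2-imdb` base, their parameters are aligned key-for-key, which is what makes this naive averaging plausible.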

Results

[training results plot]

