---
base_model: anilbhatt1/phi2-oasst-guanaco-bf16-custom
inference: false
license: mit
model_creator: anilbhatt1
model_name: phi2-oasst-guanaco-bf16-custom
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# anilbhatt1/phi2-oasst-guanaco-bf16-custom-GGUF

Quantized GGUF model files for phi2-oasst-guanaco-bf16-custom from anilbhatt1.
| Name | Quant method | Size |
| ---- | ---- | ---- |
| phi2-oasst-guanaco-bf16-custom.fp16.gguf | fp16 | 5.56 GB |
| phi2-oasst-guanaco-bf16-custom.q2_k.gguf | q2_k | 1.17 GB |
| phi2-oasst-guanaco-bf16-custom.q3_k_m.gguf | q3_k_m | 1.48 GB |
| phi2-oasst-guanaco-bf16-custom.q4_k_m.gguf | q4_k_m | 1.79 GB |
| phi2-oasst-guanaco-bf16-custom.q5_k_m.gguf | q5_k_m | 2.07 GB |
| phi2-oasst-guanaco-bf16-custom.q6_k.gguf | q6_k | 2.29 GB |
| phi2-oasst-guanaco-bf16-custom.q8_0.gguf | q8_0 | 2.96 GB |
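A minimal usage sketch for the files above, assuming the `llama-cpp-python` bindings (`pip install llama-cpp-python`) and a locally downloaded quant such as the q4_k_m file. The `### Human / ### Assistant` prompt template is an assumption based on the openassistant-guanaco dataset format, not something stated in this card; verify it against the original model before relying on it.

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the assumed Guanaco-style chat template."""
    return f"### Human: {user_message}\n### Assistant:"

if __name__ == "__main__":
    # Import deferred so the template helper works without the optional dependency.
    from llama_cpp import Llama  # pip install llama-cpp-python

    # Path to a downloaded quant from the table above (hypothetical local path).
    llm = Llama(
        model_path="phi2-oasst-guanaco-bf16-custom.q4_k_m.gguf",
        n_ctx=2048,
    )
    out = llm(
        build_prompt("What is GGUF?"),
        max_tokens=128,
        stop=["### Human:"],  # stop before the model starts a new turn
    )
    print(out["choices"][0]["text"])
```

Smaller quants (q2_k, q3_k_m) trade answer quality for memory; q8_0 is closest to the fp16 baseline.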
## Original Model Card:

### Finetuned microsoft-phi2 model

- microsoft-phi2 model finetuned on the "timdettmers/openassistant-guanaco" dataset using the QLoRA technique
- Runs on a Colab T4 GPU