# paul-stansifer/qw-us-gemma2-9b-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from paul-stansifer/qw-us-gemma2-9b via ggml.ai's GGUF-my-lora space.
Refer to the original adapter repository for more details.
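To use the adapter you first need the converted GGUF file locally. A minimal sketch using huggingface-cli, assuming the adapter file name shown in the commands below (check the repo's file list if it differs):

```bash
# download the converted adapter GGUF from the Hub
# (file name assumed from the conversion output)
huggingface-cli download paul-stansifer/qw-us-gemma2-9b-Q8_0-GGUF \
    qw-us-gemma2-9b-q8_0.gguf --local-dir .
```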
## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora qw-us-gemma2-9b-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora qw-us-gemma2-9b-q8_0.gguf (...other args)
```
To learn more about using LoRA adapters with the llama.cpp server, refer to the llama.cpp server documentation.
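For example, once llama-server is running with the adapter loaded, you can query its OpenAI-compatible chat endpoint. A minimal sketch, assuming the server's default host and port (127.0.0.1:8080):

```bash
# start the server with the adapter applied
llama-server -m base_model.gguf --lora qw-us-gemma2-9b-q8_0.gguf

# in another shell, send a chat completion request to the
# OpenAI-compatible endpoint (default port 8080 assumed)
curl http://127.0.0.1:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```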
## Model tree for paul-stansifer/qw-us-gemma2-oldstyle-9b-adapter

- Base model: google/gemma-2-9b
- Quantized: unsloth/gemma-2-9b-bnb-4bit
- Finetuned: paul-stansifer/qw-us-gemma2-9b