starling-lm-7b-alpha-gguf
starling-lm-7b-alpha-gguf is a GGUF Q4_K_M (int4) quantized version of berkeley-nest's Starling-LM-7B-alpha, a leading chat finetune of Mistral 7B, packaged for very fast, very small-footprint inference.
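The snippet below is a minimal loading sketch, not part of the original card: it assumes llama-cpp-python is installed and uses a glob pattern to pick up the Q4_K_M file in this repo, so check the repo's file listing for the actual GGUF filename before running.

```python
# Minimal sketch: load the int4 (Q4_K_M) GGUF with llama-cpp-python.
# The filename glob below is an assumption -- verify against the repo's files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="llmware/starling-lm-7b-alpha-gguf",
    filename="*Q4_K_M.gguf",   # glob resolves to the quantized file in the repo
    n_ctx=4096,                # context window; adjust to your memory budget
)

# General purpose chat usage
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what GGUF quantization does."}]
)
print(response["choices"][0]["message"]["content"])
```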
Model Description
- Developed by: berkeley-nest
- Quantized by: llmware
- Model type: mistral-7b
- Parameters: 7 billion
- Model Parent: berkeley-nest/Starling-LM-7B-alpha
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: General purpose chat
- RAG Benchmark Accuracy Score: NA
- Quantization: int4
Model Card Contact