teknium-open-hermes-2.5-mistral-gguf

teknium-open-hermes-2.5-mistral-gguf is a GGUF Q4_K_M (int4) quantized version of teknium's popular OpenHermes 2.5 finetune of Mistral, providing a very fast, very small inference implementation.

teknium-open-hermes-2.5-mistral is a leading chat-finetuned version of Mistral 7B.
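
As a minimal sketch of local usage (not part of the original card), the example below pulls the GGUF file from the Hub and runs a chat completion with llama-cpp-python. The exact GGUF filename is an assumption; check the repository's file listing for the actual Q4_K_M file.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized GGUF file from the Hugging Face Hub.
# The filename below is an assumption -- verify it against the repo's files.
gguf_path = hf_hub_download(
    repo_id="llmware/openhermes-2.5-mistral-7b-gguf",
    filename="openhermes-2.5-mistral-7b.Q4_K_M.gguf",
)

# Load the model on CPU; n_ctx and n_threads are illustrative settings.
llm = Llama(model_path=gguf_path, n_ctx=2048, n_threads=8)

# OpenHermes 2.5 is a chat model, so the chat completion API is a natural fit.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain GGUF quantization in one sentence."},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Q4_K_M is a 4-bit k-quant scheme that trades a small amount of quality for a much smaller memory footprint, which is what makes CPU-only inference practical here.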

Model Description

  • Developed by: teknium
  • Quantized by: llmware
  • Model type: mistral-7b
  • Parameters: 7 billion
  • Model Parent: teknium/OpenHermes-2.5-Mistral-7B
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: General purpose chat
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

llmware on github

llmware on hf

llmware website
