- Error running on llama-cpp-python · #7 opened 6 months ago by celsowm
- Loading GGUF model for inference · 1 reply · #6 opened 8 months ago by Rasi1610
- Llama.cpp server support · 3 replies · #5 opened 9 months ago by vigneshR
- Latest llama.cpp (b3051) complains of missing pre-tokenizer file on these quants · #4 opened 9 months ago by Inego
- Does not work /: · 10 replies · #3 opened 9 months ago by erikpro007
- Can you provide the template? · 6 replies · #2 opened 9 months ago by yanghan111
- Can you provide F16.gguf? · 5 replies · #1 opened 9 months ago by praymich