Spaces: Running on A10G
No support for making GGUF of HuggingFaceTB/SmolVLM-500M-Instruct (1 reply) · #148 opened 5 days ago by TimexPeachtree
Unable to convert Senqiao/LISA_Plus_7b (1 reply) · #147 opened 11 days ago by PlayAI
Unable to convert ostris/Flex.1-alpha · #146 opened 23 days ago by fullsoftwares
Crashes on watt-ai/watt-tool-70B · #145 opened 26 days ago by ejschwartz
Update app.py (1 reply) · #144 opened about 1 month ago by gghfez
Unable to convert Phi-3 Vision · #143 opened about 2 months ago by venkatsriram
Accessing own private repos (2 replies) · #141 opened 2 months ago by themex1380
Why can't I login? (5 replies) · #139 opened 2 months ago by safe049
If generating model card READMEs, consider adding support for these extra authorship parameters (2 replies) · #137 opened 3 months ago by mofosyne
Add F16 and BF16 quantization (1 reply) · #129 opened 3 months ago by andito
Update README for card generation (4 replies) · #128 opened 4 months ago by ariG23498
[bug] Asymmetric T5 models fail to quantize · #126 opened 5 months ago by pszemraj
[Bug] Extra files with related names were uploaded to the resulting repository · #125 opened 5 months ago by Felladrin
Issue converting a PEFT LoRA fine-tuned model to GGUF (2 replies) · #124 opened 5 months ago by AdnanRiaz107
Issue converting nvidia/NV-Embed-v2 to GGUF · #123 opened 5 months ago by redshiva
Issue converting FLUX.1-dev model to GGUF format (3 replies) · #122 opened 5 months ago by cbrescia
Add Llama 3.1 license · #121 opened 5 months ago by jxtngx
Add an option to put all quantization variants in the same repo · #120 opened 5 months ago by A2va
Phi-3.5-MoE-instruct (6 replies) · #117 opened 6 months ago by goodasdgood
Fails to quantize T5 (XL and XXL) models (1 reply) · #116 opened 6 months ago by girishponkiya
ARM-optimized quants (1 reply) · #113 opened 6 months ago by SaisExperiments
DeepseekForCausalLM is not supported (1 reply) · #112 opened 6 months ago by nanowell
Please update the conversion script: llama.cpp has added support for the Nemotron and Minitron architectures (3 replies) · #111 opened 6 months ago by NikolayKozloff
Allow the created repo name to omit the quantization type · #110 opened 6 months ago by A2va
I think I broke the Space quantizing a 4-bit model with Q4L · #106 opened 7 months ago by hellork
Authorship metadata support was added to the converter script; you may want to add the ability to set metadata overrides (3 replies) · #104 opened 7 months ago by mofosyne
Please support this method: (7 replies) · #96 opened 8 months ago by ZeroWw
Support Q2 imatrix quants · #95 opened 8 months ago by Dampfinchen
Maybe impose a max model size? (3 replies) · #33 opened 10 months ago by pcuenq