This is a 1.0 bpw (bits per weight) quantized version of Qwen/QwQ-32B-Preview made with exllamav2.
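For reference, below is a minimal sketch of loading and running this quant with the exllamav2 Python API. The local model path, download step, and sampling settings are assumptions; adjust them to your setup.

```python
# Minimal sketch: load and run the 1.0 bpw exl2 quant with exllamav2.
# Assumed: the repo has been downloaded locally first, e.g. with
#   huggingface-cli download DrNicefellow/Qwen-QwQ-32B-Preview-1.0bpw-exl2 \
#       --local-dir ./Qwen-QwQ-32B-Preview-1.0bpw-exl2
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "./Qwen-QwQ-32B-Preview-1.0bpw-exl2"  # assumed local path

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # assumed sampling settings, not tuned for this model
settings.top_p = 0.9

print(generator.generate_simple("What is the capital of France?", settings, 128))
```

ExLlamaV2BaseGenerator is the simple single-prompt interface; newer exllamav2 releases also ship a dynamic generator for batched and streaming use.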

License

This model is available under the Apache 2.0 License.

Discord Server

Join our Discord server here.

Feeling Generous? 😊

Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: https://ko-fi.com/drnicefellow. Please add a note saying which one you want me to drink!
