# Llama-3.1_OpenScholar-8B with AWQ Quantization

This is Llama-3.1_OpenScholar-8B with AWQ quantization applied using the code below, which is based on this example code.

```python
import torch

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Input and output path
path = "OpenScholar/Llama-3.1_OpenScholar-8B"
output = "Llama-3.1_OpenScholar-8B-AWQ"

# Quantization config: 4-bit weights (w_bit), quantization groups of 128 weights
# (q_group_size), zero-point (asymmetric) quantization and the GEMM kernel version
config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM"
}

# Load model
model = AutoAWQForCausalLM.from_pretrained(
    model_path=path,
    low_cpu_mem_usage=True,
    use_cache=False,
    safetensors=False,
    device_map="cuda",
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(path)

# Quantize
model.quantize(tokenizer, quant_config=config)

# Save quantized model
model.save_quantized(output)

# Save tokenizer
# Note: Transformers >= 4.45.0 doubles size of tokenizer.json
# See https://github.com/huggingface/transformers/issues/34744
tokenizer.save_pretrained(output)

print(f'Model is quantized and saved to "{output}"')
```
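
Once quantized, the model can be loaded for inference through the standard Transformers AWQ integration. The snippet below is a minimal sketch that assumes `autoawq` is installed and a CUDA device is available; the prompt is only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quantized model (local output directory or Hub id)
path = "NeuML/Llama-3.1_OpenScholar-8B-AWQ"
model = AutoModelForCausalLM.from_pretrained(path, device_map="cuda")
tokenizer = AutoTokenizer.from_pretrained(path)

# Illustrative prompt
prompt = "What is retrieval-augmented generation?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate and decode a response
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```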