Model Card for AWQ-OPT-6.7B

Model Details

Model Description

This model is based on Facebook's OPT-6.7B and quantized with the Activation-aware Weight Quantization (AWQ) method. The quantization layers are fused into the original model to improve inference speed and reduce memory usage. Calibration used a random sample of 50 entries from the GSM8K dataset, with a sequence length of 512 and a step size of 0.02.
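
For context, the core of AWQ is a per-channel scale search over a small calibration set. Below is a minimal, illustrative sketch of that search (not the author's actual code), assuming the 0.02 step above refers to the grid over the scaling exponent; all function and variable names are hypothetical.

import torch

def fake_quantize_int8(w):
    # Symmetric per-output-channel INT8 round trip (illustrative only).
    qmax = 127
    s = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    return (w / s).round().clamp(-qmax, qmax) * s

def awq_scale_search(weight, calib_inputs, step=0.02):
    # weight: [out_features, in_features]; calib_inputs: [n_tokens, in_features]
    act_mean = calib_inputs.abs().mean(dim=0).clamp(min=1e-5)  # per-channel activation magnitude
    ref_out = calib_inputs @ weight.t()                        # full-precision reference output
    best_err, best_scale = float("inf"), None
    alpha = 0.0
    while alpha <= 1.0 + 1e-9:
        scale = act_mean ** alpha                              # activation-aware per-channel scale
        w_q = fake_quantize_int8(weight * scale)               # quantize the scaled weights
        err = ((calib_inputs / scale) @ w_q.t() - ref_out).pow(2).mean().item()
        if err < best_err:
            best_err, best_scale = err, scale
        alpha += step                                          # e.g. 0.02 -> 51 grid points
    return best_scale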

Developed by

jiangchengchengNLP

Funded by [optional]

Not separately funded; this work builds on the AWQ method and Meta's OPT model.

Shared by [optional]

No one

Model type

Transformer-based language model

Language(s) (NLP)

Primarily English, inherited from the OPT-6.7B base model.

License

Apache License 2.0

Finetuned from model [optional]

OPT-6.7B

Model Sources [optional]

Repository: https://huggingface.co/jiangchengchengNLP/opt-6.7B-AWQ-8INT

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jiangchengchengNLP/opt-6.7B-AWQ-8INT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Move the model to the GPU so it matches the CUDA input tensors below.
model = AutoModelForCausalLM.from_pretrained(model_id).cuda()

input_text = "Hello, I'm am conscious and"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.cuda()

generated_ids = model.generate(input_ids)

tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]

Calibration Data

The quantization scales were calibrated (not fine-tuned) using a random sample of 50 entries from the GSM8K dataset.
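
As a rough illustration of how such a calibration set might be drawn with the Hugging Face datasets library (assuming the dataset is GSM8K with the "main" config and that question texts are used as calibration prompts; this is not the author's exact pipeline):

from datasets import load_dataset

calib = load_dataset("gsm8k", "main", split="train").shuffle(seed=0).select(range(50))
calib_texts = [example["question"] for example in calib]  # 50 random prompts, later tokenized to length 512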

Hardware Type

Cloud-based GPUs (e.g., NVIDIA V100 or A100)

Software

PyTorch, Transformers library from Hugging Face

Format: Safetensors
Model size: 6.87B params
Tensor type: BF16 · I8

Model tree for jiangchengchengNLP/opt-6.7B-AWQ-8INT

Base model: facebook/opt-6.7b