---
license: apache-2.0
datasets:
- openai/gsm8k
base_model:
- facebook/opt-6.7b
pipeline_tag: text-generation
---

# Model Card for AWQ-OPT-6.7B

## Model Details

### Model Description

This model is a quantized version of Facebook's OPT-6.7B, produced with the Activation-aware Weight Quantization (AWQ) method. The quantized layers are merged back into the original model, reducing memory usage and improving inference performance. Calibration used a random sample of 50 entries from the GSM8K dataset, with a sequence length of 512 and a scale-search step size of 0.02.

### Developed by

jiangchengchengNLP

### Funded by [optional]

No external funding; this work builds on the AWQ method and Meta's OPT model.

### Shared by [optional]

No one

### Model type

Decoder-only Transformer causal language model

### Language(s) (NLP)

Primarily English, inherited from the OPT-6.7B base model.

### License

Apache License 2.0

### Finetuned from model [optional]

facebook/opt-6.7b

### Model Sources [optional]

- Repository: https://huggingface.co/facebook/opt-6.7b

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jiangchengchengNLP/opt-6.7B-AWQ-8INT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).cuda()

prompt = "Hello, I'm am conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

generated_ids = model.generate(input_ids)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
# ["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]
```

### Calibration Data

The model was calibrated (not fine-tuned) using a random sample of 50 entries from the GSM8K dataset; an illustrative sketch of the procedure is given at the end of this card.

### Hardware Type

Cloud-based GPUs (e.g., NVIDIA V100 or A100)

### Software

PyTorch, Transformers library from Hugging Face
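
### Calibration Sketch [illustrative]

The calibration code itself is not published with this card. Purely as an illustration of the AWQ-style procedure described above (activation-aware scale search with a grid step of 0.02, followed by INT8 weight quantization), here is a minimal PyTorch sketch; the function names are hypothetical and this is not the author's implementation.

```python
import torch

def pseudo_quantize_int8(w: torch.Tensor) -> torch.Tensor:
    """Symmetric per-output-channel INT8 quantize-dequantize (illustrative)."""
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-5) / 127
    return (w / scale).round().clamp(-128, 127) * scale

def awq_search_scales(w: torch.Tensor, x: torch.Tensor, step: float = 0.02) -> torch.Tensor:
    """Grid-search the AWQ scaling exponent alpha over [0, 1] with the given step.

    w: weight matrix of shape (out_features, in_features)
    x: calibration activations of shape (n_tokens, in_features)
    Returns the per-input-channel scales minimizing the output error between
    the full-precision layer and its quantized counterpart.
    """
    act_mean = x.abs().mean(dim=0)     # per-channel activation saliency
    fp_out = x @ w.t()                 # full-precision reference output
    best_err, best_scales = float("inf"), torch.ones_like(act_mean)
    for alpha in torch.arange(0.0, 1.0 + 1e-9, step):
        scales = act_mean.pow(alpha).clamp(min=1e-4)
        scales = scales / (scales.max() * scales.min()).sqrt()  # normalize around 1
        # Scale salient channels up before quantization, fold the scales back after.
        w_q = pseudo_quantize_int8(w * scales) / scales
        err = (x @ w_q.t() - fp_out).pow(2).mean().item()
        if err < best_err:
            best_err, best_scales = err, scales.clone()
    return best_scales
```

In a full pipeline, this search would run once per linear layer over activations collected from the 50 calibration samples, after which the scaled, quantized layers would be merged back into the model as described above.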