Model Details
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the screevoai/abbvie dataset.
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.4318 | 7 | 1100 | 1.4409 |
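For a rough sense of scale, and assuming the reported losses are mean cross-entropy in nats (a standard assumption, not stated in the card), the validation loss corresponds to a perplexity of about exp(1.4409) ≈ 4.2:
>>> import math
>>> # Perplexity implied by the reported validation loss, assuming mean cross-entropy in nats
>>> math.exp(1.4409)  # ≈ 4.22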
Libraries to Install
- pip install transformers datasets safetensors huggingface-hub accelerate
- pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
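Optionally, verify the installation before continuing. The snippet below is a minimal sanity check (the import names are the standard ones for these packages):
>>> import torch, transformers, datasets, huggingface_hub
>>> # Print library versions to confirm the packages imported correctly
>>> print(transformers.__version__, datasets.__version__, torch.__version__)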
Authentication is required before running the script.
Run the following command in the terminal / Jupyter notebook:
Terminal: huggingface-cli login
Jupyter notebook:
>>> from huggingface_hub import notebook_login
>>> notebook_login()
NOTE: Copy and paste the token from your Hugging Face account settings > Access Tokens > create a new token or copy an existing one.
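As an alternative to the CLI or notebook flow, the token can also be passed programmatically with huggingface_hub's login helper (a sketch; the token string below is a placeholder, use your own):
>>> from huggingface_hub import login
>>> # Authenticate directly with an access token (placeholder shown)
>>> login(token="hf_xxxxxxxxxxxxxxxx")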
Script
>>> from datasets import load_dataset
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> # Load model and Tokenizer
>>> model = AutoModelForCausalLM.from_pretrained("screevoai/abbvie-llama2-7b", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("screevoai/abbvie-llama2-7b")
>>> tokenizer.padding_side = "right"
>>> tokenizer.pad_token = tokenizer.eos_token
>>> # Load the dataset
>>> ds = load_dataset("screevoai/abbvie", split="test", use_auth_token=True)
>>> sample_prompt = ds["Prompt"][0] # change the row number for testing different prompts
>>> # Generate answer to the prompt using the model
>>> encoded_input = tokenizer(sample_prompt, return_tensors="pt", add_special_tokens=True)
>>> model_inputs = encoded_input.to(model.device)
>>> generated_ids = model.generate(**model_inputs, max_new_tokens=500, do_sample=True, pad_token_id=tokenizer.eos_token_id)
>>> decoded_output = tokenizer.batch_decode(generated_ids)
>>> print(decoded_output[0].replace(sample_prompt, ""))
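To try several prompts from the test split, the same steps can be wrapped in a small helper such as the hypothetical generate_answer below (a sketch under the same assumptions as the script above):
>>> def generate_answer(prompt, max_new_tokens=500):
...     # Tokenize the prompt and move the tensors to the model's device
...     inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
...     ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True, pad_token_id=tokenizer.eos_token_id)
...     text = tokenizer.batch_decode(ids)[0]
...     # Strip the prompt so only the generated continuation is returned
...     return text.replace(prompt, "")
>>> print(generate_answer(ds["Prompt"][1]))  # any other row from the test split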
Model tree for screevoai/abbvie-llama2-7b
Base model: meta-llama/Llama-2-7b-hf