---
license: mit
library_name: peft
---
## Model info
- Base model: Llama-3-8B
- Training method: Instruction Fine-tuning + LoRA
- Task: Sentiment Analysis
## Packages
``` python
!pip install transformers==4.40.1 peft==0.5.0
!pip install sentencepiece
!pip install accelerate
!pip install torch
!pip install datasets
!pip install bitsandbytes
```
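`bitsandbytes` is only needed if you want to load the 8B base model quantized to fit a smaller GPU. This is an optional variant, not what the inference snippet below does; the `BitsAndBytesConfig` values here are illustrative assumptions:
``` python
import torch
from transformers import BitsAndBytesConfig, LlamaForCausalLM

# Optional: load the 8B base model in 4-bit to reduce GPU memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)
```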
## Inference: Try the model in Google Colab
``` python
from transformers import LlamaForCausalLM, LlamaTokenizerFast
from peft import PeftModel  # 0.5.0
import torch

# Load the base model and attach the FinGPT LoRA adapter
base_model = "meta-llama/Meta-Llama-3-8B"
peft_model = "FinGPT/fingpt-mt_llama3-8b_lora"
tokenizer = LlamaTokenizerFast.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # left-pad for batched decoder-only generation
model = LlamaForCausalLM.from_pretrained(base_model, trust_remote_code=True, device_map="cuda:0")
model = PeftModel.from_pretrained(model, peft_model)
model = model.eval()

# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Make prompts
prompt = [
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: FINANCING OF ASPOCOMP 'S GROWTH Aspocomp is aggressively pursuing its growth strategy by increasingly focusing on technologically more demanding HDI printed circuit boards PCBs .
Answer: ''',
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: According to Gran , the company has no plans to move all production to Russia , although that is where the company is growing .
Answer: '''
]

# Tokenize the batch, generate, and keep only the text after "Answer: "
tokens = tokenizer(prompt, return_tensors='pt', padding=True, truncation=True, max_length=512).to(device)
res = model.generate(**tokens, max_length=512)
res_sentences = [tokenizer.decode(i) for i in res]
out_text = [o.split("Answer: ")[1] for o in res_sentences]

# Show results
for sentiment in out_text:
    print(sentiment)
```
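Each decoded sequence is split on `Answer: `, so `out_text` contains only the generated label for each prompt, which the instruction constrains to one of `negative`, `neutral`, or `positive`.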
## Training Script: [Our Code](https://github.com/AI4Finance-Foundation/FinGPT/blob/master/FinGPT_%20Training%20with%20LoRA%20and%20Meta-Llama-3-8B.ipynb)
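The notebook above walks through the full instruction-tuning pipeline. For orientation, here is a minimal sketch of the LoRA setup it corresponds to; the hyperparameters (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are illustrative assumptions, so refer to the notebook for the exact configuration.
``` python
# Minimal LoRA fine-tuning setup (a sketch; hyperparameters are assumptions,
# not the exact values used to train this adapter).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

# Instruction-tuning data used for this model
dataset = load_dataset("FinGPT/fingpt-sentiment-train")

model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach low-rank adapters to the attention projections; only these
# adapter weights are updated during training.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a small fraction of the 8B parameters
```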
## Training Data:
* https://huggingface.co/datasets/FinGPT/fingpt-sentiment-train
## Framework versions
* PEFT 0.5.0