---
license: unknown
datasets:
- raicrits/YouTube_RAI_dataset
language:
- it
pipeline_tag: text2text-generation
tags:
- LLM
- Italian
- LoRa
- Classification
- LLama3
- Topics
library_name: peft
---
# Model Card raicrits/Llama3_ChangeOfTopic
<!-- Provide a quick summary of what the model is/does. -->
LoRA adapters for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), obtained through a finetuning process aimed at making the model capable of detecting a change of topic in a given text.
### Model Description
The model resulting from applying the adapters in this repository to the base model [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) is optimized to perform the
specific task of detecting a change of topic in a given text. Given a text, the model answers "1" if it detects a change of topic and "0" otherwise.
The training was done using the chapters of the YouTube videos contained in the train split of the dataset [raicrits/YouTube_RAI_dataset](https://huggingface.co/datasets/raicrits/YouTube_RAI_dataset).
Because of the finetuning process, it is important to respect the prompt template shown below in order to get good results.
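The template (identical to the one in the usage example further down) wraps the input in a fixed system/user message pair; `<text>` stands for the text to analyze:
```python
messages = [
    {"role": "system", "content": "You are an AI assistant able to detect change of topics in given texts."},
    {"role": "user", "content": """Analyze the following text written in italian and in case you detect a change of topic answer just with "1", otherwise, if the topic remains the same within all the given text answer just "0". do not add further text.
Text: <text>"""},
]
```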
- **Developed by:** Stefano Scotta ([email protected])
- **Model type:** LLM finetuned on the specific task of detecting a change of topic in a given text
- **Language(s) (NLP):** Italian
- **License:** unknown
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
## Uses
The model can be used to check whether a change of topic occurs in a given text.
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Bias, Risks, and Limitations
As with any other LLM, the model may generate content that does not correspond to reality, as well as wrong, biased, offensive, or inappropriate answers.
## How to Get Started with the Model
Use the code below to get started with the model.
**Usage:**
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
lora_id = "raicrits/Llama3_ChangeOfTopic"

# Load the base model in 8-bit and apply the LoRA adapters
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
base_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, lora_id)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# Replace <text> with the (Italian) text to analyze
messages = [
    {"role": "system", "content": "You are an AI assistant able to detect change of topics in given texts."},
    {"role": "user", "content": f"""Analyze the following text written in italian and in case you detect a change of topic answer just with "1", otherwise, if the topic remains the same within all the given text answer just "0". do not add further text.
Text: {'<text>'}"""},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=1,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.2,
    )

# The model answers with a single token: "1" (change of topic) or "0"
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
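For repeated use, the generation call can be wrapped in a small helper. The sketch below reuses the `model`, `tokenizer`, and `terminators` objects from the snippet above; `detect_topic_change` is a hypothetical convenience function, not part of this repository, and greedy decoding replaces sampling to make the "0"/"1" label deterministic:
```python
def detect_topic_change(text: str) -> bool:
    """Hypothetical wrapper: returns True if the model answers "1"."""
    messages = [
        {"role": "system", "content": "You are an AI assistant able to detect change of topics in given texts."},
        {"role": "user", "content": f"""Analyze the following text written in italian and in case you detect a change of topic answer just with "1", otherwise, if the topic remains the same within all the given text answer just "0". do not add further text.
Text: {text}"""},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            max_new_tokens=1,
            eos_token_id=terminators,
            do_sample=False,  # greedy decoding for a deterministic label
        )
    answer = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return answer.strip() == "1"

# Expected: True (the text shifts from sports to weather)
print(detect_topic_change(
    "Il primo tempo della partita è finito in parità. "
    "Passiamo ora alle previsioni del tempo per domani."
))
```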
## Training Details
### Training Data
Chapters of the YouTube videos contained in the train split of the dataset [raicrits/YouTube_RAI_dataset](https://huggingface.co/datasets/raicrits/YouTube_RAI_dataset).
### Training Procedure
The fine-tuning was done using the [LoRA](https://arxiv.org/abs/2106.09685) approach.
**Training settings:**
- train epochs=1
- learning_rate=2e-05
- mixed precision training: int8
**LoRA configuration:**
- r= 8
- lora_alpha=16
- target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]
- lora_dropout=0.1
- bias="none"
- task_type=CAUSAL_LM
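A minimal sketch of how these hyperparameters map onto `peft`/`transformers` objects. This is a reconstruction from the values listed above, not the original training script; dataset preparation and `Trainer` wiring are omitted, and arguments not listed in this card (such as `output_dir`) are assumptions:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 8-bit base model, matching the int8 setting above
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
# Commonly needed for int8 training (an assumption, not stated in this card)
base_model = prepare_model_for_kbit_training(base_model)

# LoRA configuration as reported above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Training settings as reported above
training_args = TrainingArguments(
    output_dir="llama3-change-of-topic",  # hypothetical path
    num_train_epochs=1,
    learning_rate=2e-5,
)
```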
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 1x NVIDIA A100 (40 GB)
- **Hours used:** 45
- **Cloud Provider:** Private Infrastructure
- **Carbon Emitted:** 4.86 kg CO2 eq.
## Model Card Authors
Stefano Scotta ([email protected])
## Model Card Contact
[email protected]