---
license: mit
language:
- en
metrics:
- accuracy
---

# Model Card: POLLCHECK/Pollcheck-llama3-news-classifier

## Model Details

**Model Name:** POLLCHECK/Pollcheck-llama3-news-classifier

**Model Description:** This is a fine-tuned Llama 3 model for news classification, labelling articles as "biased" or "unbiased". In this task, 'biased' covers disinformation, propaganda, loaded language, negative associations, generalization, harm, hatred, and satire, whereas 'unbiased' denotes real news without the spread of misinformation, disinformation, and propaganda. The model can be used to identify potential bias in text, which is useful for media analysis, content moderation, and research on bias in written communication.

**Base Model:** meta-llama/Meta-Llama-3-8B-Instruct

**Fine-tuned Dataset:** The model was fine-tuned on a custom dataset annotated for bias detection, in particular news articles related to politics. Details of the dataset and the fine-tuning process are available upon request.

**Labels:**
- 0 or `biased` (fake news)
- 1 or `unbiased` (real news)

## Intended Use

This model is intended for identifying bias in text. Users can input a piece of text and receive a prediction indicating whether the text is biased or unbiased.

### Class-wise Performance Metrics

| Class    | Precision | Recall | F1   |
|----------|-----------|--------|------|
| Biased   | 0.93      | 0.20   | 0.33 |
| Unbiased | 0.58      | 0.99   | 0.73 |
| Overall  | 0.75      | 0.59   | 0.53 |
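
Metrics of this kind can be computed from a labelled evaluation set with scikit-learn's `classification_report`. The snippet below is a minimal sketch in which `y_true` and `y_pred` are hypothetical placeholders, not data from this model card.

```python
from sklearn.metrics import classification_report

# Hypothetical gold labels and model predictions (placeholders for illustration only)
y_true = ["biased", "unbiased", "unbiased", "biased", "unbiased"]
y_pred = ["unbiased", "unbiased", "unbiased", "biased", "unbiased"]

# Prints per-class precision, recall, and F1, plus macro and weighted averages
print(classification_report(y_true, y_pred, labels=["biased", "unbiased"], digits=2))
```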

## How to Use

To use this model for inference, follow the steps below:

### Inference Code

```python
import torch
from trl import setup_chat_format
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer
model_name = "POLLCHECK/Pollcheck-llama3-news-classifier"  # change this to the path of your fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Put the model and tokenizer into chat format
model, tokenizer = setup_chat_format(model, tokenizer)

# Classification instruction and the text to classify
instruction = """You are a news classifier AI assistant. You are given the headline and the body of a news article. \
Your task is to read the headline and article and classify the article as biased or unbiased, and to provide a confidence score for your label. \
In this task, 'biased' represents disinformation, propaganda, loaded language, negative associations, generalization, harm, hatred, and satire, \
whereas 'unbiased' represents real news without the spread of misinformation, disinformation, and propaganda."""
headline = "<Headline of the news article>"
article = "<Article text body>"

messages = [
    {"role": "user", "content": f"""{instruction}
Headline: {headline}
Article: {article}
Return your answer in the following format.
1. Label: [biased/unbiased]
2. Confidence: """}
]

# Build the chat prompt and run generation
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(model.device)
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]

outputs = model.generate(**inputs, max_new_tokens=30, eos_token_id=terminators,
                         do_sample=True, temperature=0.7, top_p=0.9)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
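
Because the label comes back as free text in the generated continuation rather than as a structured output, downstream code usually needs a small parsing step. The snippet below is an illustrative sketch (not part of the released code) that continues from the variables defined above; it decodes only the newly generated tokens so that the prompt text, which itself mentions "biased" and "unbiased", is not picked up.

```python
import re

# Decode only the tokens generated after the prompt
generated = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:],
                             skip_special_tokens=True)

# Extract the first "biased"/"unbiased" keyword from the model's answer
match = re.search(r"\b(biased|unbiased)\b", generated.lower())
label = match.group(1) if match else "unknown"
print(label)  # e.g. "biased" or "unbiased"
```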

<!-- ### Example Output

For the provided sample texts, the model might output:

```
Text: Religious Extremists Threaten Our Way of Life.
Predicted label: biased (Biased Probability: 0.95, Unbiased Probability: 0.05)

Text: Public Health Officials are working.
Predicted label: unbiased (Biased Probability: 0.10, Unbiased Probability: 0.90)

Text: The new healthcare policy aims to provide affordable healthcare to all citizens, with a focus on preventive care.
Predicted label: unbiased (Biased Probability: 0.20, Unbiased Probability: 0.80)

Text: Environmental activists argue that the government's refusal to sign the climate agreement is a clear indication of its disregard for the environment.
Predicted label: biased (Biased Probability: 0.70, Unbiased Probability: 0.30)
``` -->

<!-- Check inference with these paths:

- Sample Data: [News_Bias_Samples.csv](https://huggingface.co/POLLCHECK/BERT-classifier/blob/main/News_Bias_Samples.csv)
- Inference Script: [inference-bert.py](https://huggingface.co/POLLCHECK/BERT-classifier/blob/main/inference-bert.py) -->

## Limitations and Bias

- **Dataset Bias:** The model's performance is highly dependent on the quality and diversity of the fine-tuning dataset. Biases present in the dataset will affect the model's predictions.
- **Context:** The model may not perform well on texts that fall outside the distribution of the training data or that require a nuanced understanding of context.

## Ethical Considerations

- **Fairness:** Ensure that the model is used in a fair and unbiased manner. Regularly evaluate the model's performance and address any biases that may arise.
- **Transparency:** Be transparent about the model's limitations and the potential for false positives and false negatives.
- **Accountability:** Users are responsible for the decisions made based on the model's predictions and should consider multiple sources of information when making important decisions.

## Contact Information

For questions, comments, or suggestions, please contact Shaina Raza at [email protected].