The dataset consists of the following columns:

| Column | Description |
|--------|-------------|
| `Label` | Like count range category |
| `Count` | Number of tweets in the like count range category |

## How to Process the Data

To process the dataset, you can use the following Python code. It reads the CSV file, filters out non-English tweets, cleans the text, and then tokenizes and lemmatizes it.

### Required Libraries

Make sure you have the following libraries installed:

```bash
pip install pandas nltk langdetect
```
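
The NLTK components used in the code below (the Punkt tokenizer, the English stopword list, and WordNet for the lemmatizer) rely on data packages that ship separately from the library itself. A minimal one-time setup sketch (exact resource names can vary slightly between NLTK versions; recent releases may also want `punkt_tab`):

```python
import nltk

# One-time download of the NLTK data used by the processing code:
# the Punkt tokenizer model, the stopword list, and WordNet.
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
```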

### Data Processing Code

Here's the code to process the tweets:

```python
import pandas as pd
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from langdetect import detect, LangDetectException


class TweetProcessor:
    def __init__(self, file_path):
        """
        Initialize the processor with the path to the CSV file.
        """
        self.df = pd.read_csv(file_path)
        # Convert 'text' column to string type
        self.df['text'] = self.df['text'].astype(str)

    def clean_tweet(self, tweet):
        """
        Clean a tweet by removing links, special characters, and extra spaces.
        """
        # Remove links (matches both http:// and https:// URLs)
        tweet = re.sub(r'http\S+', '', tweet, flags=re.MULTILINE)
        # Replace non-word characters with spaces
        tweet = re.sub(r'\W', ' ', tweet)
        # Collapse runs of whitespace into a single space
        tweet = re.sub(r'\s+', ' ', tweet)
        # Remove leading and trailing spaces
        return tweet.strip()

    def tokenize_and_lemmatize(self, tweet):
        """
        Tokenize and lemmatize a tweet: lowercase it, drop punctuation,
        numbers, and stopwords, then lemmatize the remaining tokens.
        """
        # Tokenize the text
        tokens = word_tokenize(tweet)
        # Keep alphabetic tokens only, lowercased (this also drops numbers)
        tokens = [word.lower() for word in tokens if word.isalpha()]
        # Remove stopwords
        stop_words = set(stopwords.words('english'))
        tokens = [word for word in tokens if word not in stop_words]
        # Lemmatize the tokens
        lemmatizer = WordNetLemmatizer()
        tokens = [lemmatizer.lemmatize(word) for word in tokens]
        # Join tokens back into a single string
        return ' '.join(tokens)

    def process_tweets(self):
        """
        Filter the DataFrame to English tweets, then apply cleaning and
        tokenization/lemmatization to the 'text' column.
        """
        def is_english(text):
            try:
                return detect(text) == 'en'
            except LangDetectException:
                return False

        # Keep only tweets detected as English; copy to avoid
        # pandas' SettingWithCopyWarning on the assignments below
        self.df = self.df[self.df['text'].apply(is_english)].copy()

        # Apply cleaning function
        self.df['cleaned_text'] = self.df['text'].apply(self.clean_tweet)
        # Apply tokenization and lemmatization function
        self.df['tokenized_and_lemmatized'] = self.df['cleaned_text'].apply(self.tokenize_and_lemmatize)
```
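
A minimal usage sketch (`tweets.csv` is a placeholder path, not the dataset's actual file name):

```python
# 'tweets.csv' is a placeholder; point it at the dataset's CSV file.
processor = TweetProcessor('tweets.csv')
processor.process_tweets()

# The processed DataFrame now carries 'cleaned_text' and
# 'tokenized_and_lemmatized' columns alongside the originals.
print(processor.df[['text', 'cleaned_text', 'tokenized_and_lemmatized']].head())
```

Note that `langdetect` is not deterministic by default; setting `DetectorFactory.seed = 0` (imported from `langdetect`) before processing makes the English filter reproducible across runs.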
## Usage

This dataset can be used for various research purposes, including sentiment analysis, trend analysis, and event impact studies related to the Israel-Palestine conflict.

For questions or feedback, please contact:

- **Name:** Mehyar Mlaweh
- **Email:** [email protected]