Threatthriver committed
Commit: d352196 • Parent(s): e8735f8
Update README.md

README.md CHANGED
Hindi Web Content Dataset
=========================

### Overview

This dataset contains a collection of Hindi text data scraped from various websites. The data was collected using a domain-restricted scraper that extracts paragraphs of text from specified domains. The dataset includes content from news articles, literature, and other web pages. The scraped text has been stored in JSON format and is intended for use in natural language processing (NLP) tasks, such as language modeling, text generation, and sentiment analysis.

### License

* **MIT**

### Task Categories

* **Question-Answering**
* **Text-Classification**
* **Text-Generation**

### Dataset Details

* **Size**: At least 30 MB of text data
* **Language**: Hindi
* **Format**: JSON
* **Source Domains**:
  * [NDTV Hindi](https://ndtv.in)
  * [Jansatta](https://www.jansatta.com)
  * [Hindwi](https://www.hindwi.org)

### Structure of the Dataset

The dataset is stored in a JSON file named `scraped_data.json`. Each entry in the JSON file corresponds to a web page and contains the following fields:

* **url**: The URL of the web page.
* **title**: The title of the web page.
* **paragraphs**: A list of paragraphs extracted from the web page.

#### Example Entry
```json
{
  "url": "https://example.com/article",
  "title": "Example Article Title",
  "paragraphs": [
    "This is the first paragraph of the article.",
    "This is the second paragraph of the article.",
    // More paragraphs...
  ]
}
```
### How to Use

#### Load the Dataset

You can load the JSON file using standard Python libraries such as `json`, or use data processing libraries like `pandas`.
```python
import json

# Load the scraped pages from the JSON file.
with open('scraped_data.json', 'r', encoding='utf-8') as file:
    data = json.load(file)

# Collect every paragraph from every page into a single flat list.
all_paragraphs = []
for entry in data:
    paragraphs = entry['paragraphs']
    for paragraph in paragraphs:
        all_paragraphs.append(paragraph)
# Now all_paragraphs contains all the paragraphs concatenated
```
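If you prefer a tabular view, the same file can also be read with `pandas`. This is only a minimal sketch, assuming `scraped_data.json` is a top-level JSON array of entries shaped like the example above:

```python
import pandas as pd

# Read the JSON array of page records into a DataFrame (one row per page).
df = pd.read_json('scraped_data.json')

# Explode the list of paragraphs so that each row holds a single paragraph.
paragraphs_df = df.explode('paragraphs')[['url', 'title', 'paragraphs']]

print(paragraphs_df.head())
```

If the file is instead a single top-level object rather than an array, load it with `json` first and pass the list of entries to `pd.DataFrame`.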
#### Loading the Dataset with Hugging Face

You can also load this dataset directly using the Hugging Face datasets library. The dataset is available under the identifier `Threatthriver/Hindi-story-news`.

Example code:
```python
from datasets import load_dataset

ds = load_dataset("Threatthriver/Hindi-story-news")
# This loads the dataset as a datasets.DatasetDict (one datasets.Dataset per split),
# which you can then use for various NLP tasks.
```
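As a rough illustration of what comes back, you can inspect the loaded object before using it. This sketch avoids assuming a particular split name, and the record fields are assumed to mirror the JSON schema above:

```python
from datasets import load_dataset

ds = load_dataset("Threatthriver/Hindi-story-news")

print(ds)                          # lists the available splits and their features
split_name = list(ds.keys())[0]    # pick the first available split rather than assuming its name
example = ds[split_name][0]        # first record; fields should mirror the JSON schema above
print(example)
```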
### Terms of Use

The data scraped from the mentioned websites is subject to their respective terms of use and copyright policies. Users of this dataset must ensure that their use complies with these terms and respects the intellectual property rights of the content owners.
### Acknowledgements

We acknowledge the efforts of the content creators and website owners for providing valuable information in the Hindi language. Their contributions are invaluable for advancing NLP research and applications in regional languages.

### Contact

For any questions or issues regarding this dataset, please feel free to reach out to us.