Update README.md
This dataset has Wikipedia articles pertaining to the Indian context.

### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
We filtered Indian-context data from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset's English articles using keyword matching.
We then trained a classifier to distinguish Indian from non-Indian content and further narrow down the filtered English articles.
Finally, we translated these articles into 6 Indian languages (Hindi, Bengali, Gujarati, Tamil, Kannada, Urdu) using AI4Bharat's [IndicTrans2](https://huggingface.co/ai4bharat/indictrans2-en-indic-1B). The dataset has been cleaned and can be used for pre-training multilingual LLMs.
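
The translation step can be reproduced roughly as sketched below, following the usage pattern on the IndicTrans2 model card and assuming the separately installed `IndicTransToolkit` package for pre/post-processing. The example sentence, beam settings, and target language are illustrative, not the exact pipeline used to build this dataset.

```python
# Minimal sketch of the translation step (English -> Hindi), following the
# IndicTrans2 model card; not the exact pipeline used to build this dataset.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from IndicTransToolkit.processor import IndicProcessor  # assumes `pip install IndicTransToolkit`

MODEL = "ai4bharat/indictrans2-en-indic-1B"
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL, trust_remote_code=True)
ip = IndicProcessor(inference=True)

sentences = ["The Ganges is one of the longest rivers in India."]  # illustrative input
# Tag sentences with FLORES-200-style source/target codes (here: Hindi, Devanagari).
batch = ip.preprocess_batch(sentences, src_lang="eng_Latn", tgt_lang="hin_Deva")
inputs = tokenizer(batch, padding="longest", truncation=True, return_tensors="pt")
with torch.no_grad():
    tokens = model.generate(**inputs, max_length=256, num_beams=5)
decoded = tokenizer.batch_decode(tokens, skip_special_tokens=True)
print(ip.postprocess_batch(decoded, lang="hin_Deva"))
```
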
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Each row corresponds to a Wikipedia article, with the article's description in the source language (English) and its translations in 6 Indian languages.
The title is in English, and the description in each language is stored in a column named in the format "language_code"_"script".
Each language's description column is a list of sentences (or multi-sentence chunks) that can be concatenated to recover the cleaned article description.
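
Concretely, a row can be read back as sketched below with the 🤗 `datasets` library. The repository id and the `train` split are placeholders, and the `hin_Deva` column name is an assumption following the "language_code"_"script" convention above (Hindi in Devanagari script).

```python
# Minimal sketch: load the dataset and rebuild one article's Hindi description.
# "user/dataset-name" is a placeholder for this repository's id, and "hin_Deva"
# assumes the "language_code"_"script" column convention described above.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")  # hypothetical repo id and split

row = ds[0]
print(row["title"])  # the title is in English
# Each language column is a list of sentence chunks; join them for the full text.
hindi_text = " ".join(row["hin_Deva"])
print(hindi_text[:200])
```
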
## Dataset Creation