Update README.md
README.md
CHANGED
@@ -27,101 +27,13 @@ configs:
    path: data/test-*
---

Cognitive accessibility - that is, the condition that texts, signs, technology, and pictograms must meet to be easily understood by everyone - is therefore a fundamental right.
One approach to making texts cognitively accessible is to adapt them for easy reading; however, this process requires following specific rules and experience in making this type of adaptation.

For the Plena Inclusión La Rioja data, we use the news published at https://www.plenainclusionlarioja.org/actualidad/noticias/.
This page lists all of the news on the site, so we collect the links to those news items by searching the HTML code.
For that, we use the BeautifulSoup library in Python, looking for the "btn btn-secondary" class, whose links contain the URLs of the news items.
Some of the texts are only available in their original version, so we discard them.
To do so, we again use the BeautifulSoup library and search for the "lecturafacil_texto" class in the HTML code, since that is the class used to mark the easy-to-read parts.
Then, we save that part of the text in a txt file.
Moreover, the original version of the text can be found in the "articleBody" div, so we save it in another txt file.
These two files are the ones that we have uploaded to this dataset.
We can see the code below.

```
import os
from bs4 import BeautifulSoup
from urllib.request import urlopen

# Collect the links to every news item listed on the main news page.
urls = []
url = "https://www.plenainclusionlarioja.org/actualidad/noticias/"
page = urlopen(url).read()
html = page.decode("utf-8")
soup = BeautifulSoup(html, "html.parser")
mydivs = soup.find_all("a", {"class": "btn btn-secondary"})
for a in mydivs:
    nombre = str(a).split("/actualidad/noticias/")[1].split('"')[0]
    urls.append(url + nombre)

# Create the output folders so the file writes below do not fail.
os.makedirs("./lecturaFacil", exist_ok=True)
os.makedirs("./lecturaCompleja", exist_ok=True)

for enlace in urls:
    page = urlopen(enlace).read()
    html = page.decode("utf-8")
    soup = BeautifulSoup(html, "html.parser")
    # "lecturafacil_texto" marks the easy-to-read version of the article.
    mydivs = soup.find_all("div", {"class": "lecturafacil_texto"})
    if mydivs:
        # "articleBody" holds the original (complex) version.
        mydivsComplejo = soup.find_all("div", {"itemprop": "articleBody"})
        with open("./lecturaFacil/lf-" + enlace.split("/")[-1] + ".txt", "w") as f:
            f.write(str(mydivs))
        with open("./lecturaCompleja/lc-" + enlace.split("/")[-1] + ".txt", "w") as f:
            f.write(str(mydivsComplejo))
```
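Note that `f.write(str(mydivs))` stores the raw HTML of the matched elements, tags included. A minimal sketch of how the plain text could be recovered from one of the saved files afterwards (the file name is hypothetical):

```
from bs4 import BeautifulSoup

# "lf-ejemplo.txt" is a placeholder name; the saved files contain raw HTML.
with open("./lecturaFacil/lf-ejemplo.txt") as f:
    fragmento = f.read()

# get_text() drops the tags and keeps only the visible text.
texto_plano = BeautifulSoup(fragmento, "html.parser").get_text(separator=" ", strip=True)
print(texto_plano)
```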
For the Plena Inclusión España data, we use the news as well.
In this case, they can be found at https://www.plenainclusion.org/noticias.
We collect the links to all the news items from there using BeautifulSoup, but in this case we have to page through the 194 result pages and take the links from the "elementor-post__read-more" class.
Then, we look for the "articleBody" section, which has the easy-to-read text, and the "post-lectura-dificil" section, which has the original text, and we save them in two txt files, as we can see in the following code.

```
# Gather the links to every news item across the 194 result pages.
enlaces = []
for pagina in range(194):
    url = "https://www.plenainclusion.org/noticias/?sf_paged=" + str(pagina)
    page = urlopen(url).read()
    html = page.decode("utf-8")
    soup = BeautifulSoup(html, "html.parser")
    mydivs = soup.find_all("a", {"class": "elementor-post__read-more"})
    for a in mydivs:
        enlaces.append(str(a).split('href="')[1].split('"')[0])


def quitar_figuras(secciones):
    # Drop the <figure>...</figure> blocks (images) from the article body
    # and return the remaining HTML.
    partes = str(secciones).split("<figure")
    texto = partes[0]
    for parte in partes[1:]:
        if "</figure>" in parte:
            trozos = parte.split("</figure>")
            # Nested figures produce two closing tags, so skip both.
            texto = texto + (trozos[2] if len(trozos) > 2 else trozos[1])
    return texto


os.makedirs("./plenaInclusionEspaña/lecturaFacil", exist_ok=True)
os.makedirs("./plenaInclusionEspaña/lecturaCompleja", exist_ok=True)
os.makedirs("./plenaInclusionEspaña/soloFacil", exist_ok=True)

for en in enlaces:
    page = urlopen(en).read()
    html = page.decode("utf-8")
    soup = BeautifulSoup(html, "html.parser")
    # The "enlace-lectura-dificil" paragraph only appears when the news item
    # also has an original (difficult) version.
    mydivs = soup.find_all("p", {"class": "enlace-lectura-dificil"})
    nombre = en.split("/")[-2]
    # "articleBody" holds the easy-to-read text on this site.
    lf = soup.find_all("section", {"itemprop": "articleBody"})
    texto = quitar_figuras(lf)
    if mydivs:
        # "post-lectura-dificil" holds the original (difficult) text.
        lc = soup.find_all("section", {"class": "post-lectura-dificil"})
        with open("./plenaInclusionEspaña/lecturaFacil/lf-" + nombre + ".txt", "w") as f:
            f.write(texto)
        with open("./plenaInclusionEspaña/lecturaCompleja/lc-" + nombre + ".txt", "w") as f:
            f.write(str(lc[0]))
    else:
        # News without an original version only get the easy-to-read file.
        with open("./plenaInclusionEspaña/soloFacil/" + nombre + ".txt", "w") as f:
            f.write(texto)
```
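Once both folders are populated, the easy-to-read and original versions can be paired through the news slug shared by their file names. A minimal sketch of how such pairs could be assembled, using the paths from the code above (the record layout is an assumption, not necessarily the format of the uploaded files):

```
import os

# Pair each easy-to-read file with its original-version counterpart
# via the shared news slug in the file names.
pares = []
carpeta_lf = "./plenaInclusionEspaña/lecturaFacil"
carpeta_lc = "./plenaInclusionEspaña/lecturaCompleja"
for fichero in os.listdir(carpeta_lf):
    nombre = fichero[len("lf-"):]  # drop the "lf-" prefix
    ruta_lc = os.path.join(carpeta_lc, "lc-" + nombre)
    if os.path.exists(ruta_lc):
        with open(os.path.join(carpeta_lf, fichero)) as f:
            facil = f.read()
        with open(ruta_lc) as f:
            original = f.read()
        pares.append({"original": original, "facil": facil})
```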
# Dataset Card for easy-to-read translation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
## Dataset Description
- **Homepage:** [https://www.plenainclusion.org/](https://www.plenainclusion.org/), [https://www.plenainclusionlarioja.org/](https://www.plenainclusionlarioja.org/)
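
Given the `data/test-*` files declared in the YAML header above, the dataset should be loadable with the `datasets` library. A minimal sketch, with a hypothetical repository id:

```
from datasets import load_dataset

# "user/easy-to-read-translation" is a placeholder repository id.
ds = load_dataset("user/easy-to-read-translation", split="test")
print(ds[0])
```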