---
dataset_info:
  features:
  - name: english
    dtype: string
  - name: kurdish
    dtype: string
  splits:
  - name: train
    num_bytes: 49594900
    num_examples: 148844
  download_size: 25408908
  dataset_size: 49594900
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- translation
language:
- ku
- en
pretty_name: Kurdish - English Sentences
size_categories:
- 100K<n<1M
---

## Summary

This dataset is a subset extracted from [Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100) and reshaped into two columns, `english` and `kurdish`. Note: some translation pairs are low quality. A good follow-up project would be to classify the pairs and select only the high-quality ones; a rough starting point is sketched at the end of this card.

## Usage

```python
from datasets import load_dataset

ds = load_dataset("nazimali/kurdish-english-opus-100", split="train")
ds
```

```python
Dataset({
    features: ['english', 'kurdish'],
    num_rows: 148844
})
```
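As a starting point for the quality-filtering project mentioned in the summary, the sketch below drops empty pairs and pairs with an extreme character-length ratio, a crude proxy for misaligned translations. The helper name `looks_reasonable` and the thresholds (0.3 and 3.0) are illustrative assumptions, not part of the dataset.

```python
from datasets import load_dataset

ds = load_dataset("nazimali/kurdish-english-opus-100", split="train")

def looks_reasonable(example):
    """Keep pairs where both sides are non-empty and the length
    ratio is not wildly skewed (rough proxy for alignment quality)."""
    en = example["english"].strip()
    ku = example["kurdish"].strip()
    if not en or not ku:
        return False
    ratio = len(en) / len(ku)
    return 0.3 <= ratio <= 3.0  # illustrative thresholds, tune as needed

filtered = ds.filter(looks_reasonable)
print(f"Kept {filtered.num_rows} of {ds.num_rows} pairs")
```

A length-ratio heuristic only catches gross misalignments; classifying semantic quality would need something stronger, such as a cross-lingual sentence-similarity model.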