Abinaya Mahendiran committed 752d5c1 (parent: 0388313): Updated README
README.md CHANGED
@@ -25,7 +25,7 @@ Pretrained model on Tamil language using a causal language modeling (CLM) object
 The GPT-2 model is trained on [oscar dataset - ta](https://huggingface.co/datasets/oscar) and [IndicNLP dataset - ta](https://indicnlp.ai4bharat.org/corpora/)
 
 ## Intended uses & limitations:
-You can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=
+You can use the raw model for text generation, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
 
 ## How to pretrain the model:
 To perform training, do the following steps,
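The hunk context above notes the model is pretrained with a causal language modeling (CLM) objective, i.e. predicting each token from the tokens before it. A minimal, framework-free sketch of that objective, with toy bigram counts standing in for the network (the corpus and all names here are illustrative, not taken from this repository):

```python
import math
from collections import Counter, defaultdict

def clm_loss(tokens, probs):
    """Average next-token cross-entropy: the quantity CLM training minimizes."""
    nll = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        # Small floor avoids log(0) for unseen continuations.
        nll -= math.log(probs[prev].get(cur, 1e-9))
    return nll / (len(tokens) - 1)

# Toy "model": bigram maximum-likelihood estimates from a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    counts[prev][cur] += 1
probs = {p: {w: n / sum(c.values()) for w, n in c.items()}
         for p, c in counts.items()}

loss = clm_loss(corpus, probs)
```

Real pretraining minimizes the same averaged next-token cross-entropy, only with a transformer producing the conditional distributions instead of bigram counts.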