Use this:
```python
from transformers import pipeline, GPT2Tokenizer

test_input = "ØÛ•زم"  # Kurdish seed text
tokenizer = GPT2Tokenizer.from_pretrained("bayandorian/gpt2-kurdish")
generator = pipeline("text-generation", model="bayandorian/gpt2-kurdish", device="cuda")
# do_sample=True is needed so temperature/top_k/top_p actually affect generation
sample = generator(test_input, do_sample=True, temperature=1.2, top_k=50, top_p=0.9,
                   repetition_penalty=1.2, pad_token_id=tokenizer.eos_token_id)
print(sample)
```
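The pipeline returns a list of dicts, each with a `generated_text` field. If you prefer to call the model directly instead of going through `pipeline`, a minimal sketch (assuming the checkpoint loads as a standard GPT-2 causal LM, which the `GPT2Tokenizer` usage above suggests) could look like this:

```python
# Minimal sketch: generate with the model directly rather than via pipeline.
# Assumes bayandorian/gpt2-kurdish is a standard GPT-2 causal LM checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("bayandorian/gpt2-kurdish")
model = GPT2LMHeadModel.from_pretrained("bayandorian/gpt2-kurdish").eval()

inputs = tokenizer("ØÛ•زم", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        do_sample=True,          # enable sampling so the parameters below apply
        temperature=1.2,
        top_k=50,
        top_p=0.9,
        repetition_penalty=1.2,
        max_new_tokens=50,       # illustrative length limit, adjust as needed
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```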