Text Generation
Transformers
PyTorch
Safetensors
gpt2
stable-diffusion
prompt-generator
distilgpt2
text-generation-inference
Inference Endpoints
FredZhang7 committed
Commit b0df11a · 1 Parent(s): c4330c7

Fixed typo

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -37,7 +37,7 @@ tokenizer.max_len = 512
  # load the fine-tuned model
  import torch
  model = GPT2LMHeadModel.from_pretrained('distilgpt2')
- model.load_state_dict(torch.load('distil-sd-gpt2.pt.pt'))
+ model.load_state_dict(torch.load('distil-sd-gpt2.pt'))
 
  # generate text using fine-tuned model
  from transformers import pipeline
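
For context, below is a minimal sketch of how the corrected README snippet would run end to end after this commit. The tokenizer setup and the corrected `torch.load('distil-sd-gpt2.pt')` line come from the hunk shown above; the `pipeline` call and its arguments (prompt, `max_length`, `num_return_sequences`) are illustrative assumptions, not taken verbatim from this repository's README.

```python
# Sketch of the corrected snippet, assuming the context in the hunk header;
# the generation arguments below are illustrative assumptions.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel, pipeline

tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.max_len = 512  # from the hunk header context

# load the fine-tuned model
model = GPT2LMHeadModel.from_pretrained('distilgpt2')
# corrected filename: 'distil-sd-gpt2.pt' (previously 'distil-sd-gpt2.pt.pt')
model.load_state_dict(torch.load('distil-sd-gpt2.pt'))

# generate text using the fine-tuned model
nlp = pipeline('text-generation', model=model, tokenizer=tokenizer)
outputs = nlp('a cat sitting', max_length=80, num_return_sequences=1)
print(outputs[0]['generated_text'])
```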