AhmedSSoliman committed on
Commit 673a0b0 · 1 Parent(s): 6336d7b

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -13,20 +13,20 @@ This model is to improve the solving of the code generation problem and implemen
 CoNaLa Dataset for Code Generation is available at
 https://huggingface.co/datasets/AhmedSSoliman/CoNaLa
 
-This model is available on the Hugging Face Hub at https://huggingface.co/AhmedSSoliman/MarianCG-NL-to-Code
+This model is available on the Hugging Face Hub at https://huggingface.co/AhmedSSoliman/MarianCG-CoNaLa
 ```python
 # Model and Tokenizer
 from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
 # model_name = "AhmedSSoliman/MarianCG-NL-to-Code"
-model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-NL-to-Code")
-tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-NL-to-Code")
+model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
+tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
 # Input (Natural Language) and Output (Python Code)
 NL_input = "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]`"
 output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt"))
 output_code = tokenizer.decode(output[0], skip_special_tokens=True)
 ```
 
-A Gradio demo of this model is available on Hugging Face Spaces at https://huggingface.co/spaces/AhmedSSoliman/MarianCG-NL-to-Code
+A Gradio demo of this model is available on Hugging Face Spaces at https://huggingface.co/spaces/AhmedSSoliman/MarianCG-CoNaLa
 
 
 ---
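
For quick verification of the updated snippet, here is a minimal end-to-end sketch using the new `AhmedSSoliman/MarianCG-CoNaLa` model ID from this commit; the `model_name` and `inputs` variables and the final `print` are additions here for illustration only.

```python
# Minimal end-to-end sketch of the snippet updated in this commit.
# The model ID comes from the new README text; print() is added here
# only to display the decoded Python code.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "AhmedSSoliman/MarianCG-CoNaLa"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Natural-language description of the desired Python code
NL_input = "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]`"

# Tokenize the description, generate, and decode the predicted code
inputs = tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt")
output = model.generate(**inputs)
output_code = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_code)
```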