lgfunderburk committed on
Commit 2b3797c · 1 Parent(s): 90f5a69

correct grammar

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -22,7 +22,7 @@ The data was split into training and testing: model trained on 90% of the data,
 
 DistilBERT has a maximum input length of 512, so with this in mind the following was performed:
 
-1. I used the`distilbert-base-uncased` pretrained model to initialize an `AutoTokenizer`.
+1. I used the `distilbert-base-uncased` pretrained model to initialize an `AutoTokenizer`.
 2. Setting a maximum length of 256, each entry in the training, testing and validation data was truncated if it exceeded the limit and padded if it didn't reach the limit.
 
 ### Training hyperparameters
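
For reference, the tokenization step described in the changed lines could look like the minimal sketch below. Only the `distilbert-base-uncased` checkpoint, the `AutoTokenizer` class, and the 256-token truncate/pad behaviour come from the README; the sample texts and variable names are assumptions.

```python
from transformers import AutoTokenizer

# Initialize the tokenizer from the pretrained checkpoint named in the README.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Placeholder batch; the actual training/testing/validation texts are assumptions.
texts = ["A short example entry.", "Another entry that might exceed the limit..."]

# Truncate entries over 256 tokens and pad shorter ones up to that length,
# mirroring the preprocessing the README describes.
encodings = tokenizer(
    texts,
    max_length=256,
    truncation=True,
    padding="max_length",
)

print(len(encodings["input_ids"][0]))  # 256 for every entry after padding/truncation
```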