runtime error
Exit code: 1. Reason:

vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 68.9MB/s]
tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 144MB/s]
Map: 100%|██████████| 5/5 [00:00<00:00, 171.98 examples/s]
model.safetensors: 100%|█████████▉| 268M/268M [00:00<00:00, 297MB/s]

Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight', 'pre_classifier.bias', 'pre_classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/usr/local/lib/python3.10/site-packages/transformers/training_args.py:1611: FutureWarning: `evaluation_strategy` is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use `eval_strategy` instead
  warnings.warn(
Traceback (most recent call last):
  File "/home/user/app/app.py", line 39, in <module>
    training_args = TrainingArguments(
  File "<string>", line 135, in __init__
  File "/usr/local/lib/python3.10/site-packages/transformers/training_args.py", line 1808, in __post_init__
    self.device
  File "/usr/local/lib/python3.10/site-packages/transformers/training_args.py", line 2344, in device
    return self._setup_devices
  File "/usr/local/lib/python3.10/site-packages/transformers/utils/generic.py", line 62, in __get__
    cached = self.fget(obj)
  File "/usr/local/lib/python3.10/site-packages/transformers/training_args.py", line 2214, in _setup_devices
    raise ImportError(
ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.26.0`: Please run `pip install transformers[torch]` or `pip install 'accelerate>=0.26.0'`
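The immediate fix is the one the error message itself gives: install a recent `accelerate` into the environment, via `pip install 'accelerate>=0.26.0'` or `pip install transformers[torch]` (on a Hugging Face Space, adding that line to `requirements.txt` and restarting has the same effect). The crash happens because `TrainingArguments.__post_init__` sets up devices and, before doing so, checks that `accelerate` meets a minimum version. The comparison it performs can be sketched roughly as below; the helper names are illustrative, not transformers' actual internal API:

```python
# Minimal sketch of the version gate behind the ImportError above.
# ACCELERATE_MIN_VERSION comes from the error message; the helper names
# (parse_version, accelerate_is_sufficient) are hypothetical, chosen for
# illustration -- transformers performs an equivalent check internally
# before Trainer/TrainingArguments will configure devices.

ACCELERATE_MIN_VERSION = "0.26.0"

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '0.26.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def accelerate_is_sufficient(installed: str,
                             minimum: str = ACCELERATE_MIN_VERSION) -> bool:
    """True if the installed accelerate version meets the Trainer requirement."""
    return parse_version(installed) >= parse_version(minimum)
```

Separately, the FutureWarning in the log is independent of the crash: renaming the `evaluation_strategy` keyword argument to `eval_strategy` in `app.py` silences it and keeps the code working on transformers 4.46+.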