runtime error
Exit code: 1. Reason: The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.

0it [00:00, ?it/s]

config.json: 100%|██████████| 843/843 [00:00<00:00, 6.03MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 11, in <module>
    prompt_enhancer = pipeline(
  File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 805, in pipeline
    config = AutoConfig.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 989, in from_pretrained
    return config_class.from_dict(config_dict, **unused_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 772, in from_dict
    config = cls(**config_dict)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 161, in __init__
    self._rope_scaling_validation()
  File "/usr/local/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 182, in _rope_scaling_validation
    raise ValueError(
ValueError: `rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 32.0, 'high_freq_factor': 4.0, 'low_freq_factor': 1.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
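The traceback points at a version mismatch rather than a broken app: the downloaded config.json uses the newer Llama 3.1-style `rope_scaling` schema (`rope_type: "llama3"` plus high/low frequency factors), while the installed transformers release still validates `rope_scaling` as exactly the two fields `type` and `factor`. Below is a minimal sketch of that mismatch, assuming the Space pins a transformers release older than the one that added llama3 rope-scaling support (around v4.43) and using a hypothetical model id purely for illustration:

    # Minimal sketch, assuming an older transformers release (pre-llama3
    # rope-scaling support, added around v4.43 for Llama 3.1 checkpoints).
    from transformers import AutoConfig

    # Hypothetical model id for illustration only; app.py loads its own
    # prompt-enhancer checkpoint at line 11.
    MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"

    # On an old release this raises the ValueError from the log, because
    # _rope_scaling_validation() only accepts {"type": ..., "factor": ...}
    # while the checkpoint's config.json ships
    # {"rope_type": "llama3", "factor": 32.0, "high_freq_factor": 4.0, ...}.
    config = AutoConfig.from_pretrained(MODEL_ID)

If that reading is right, pinning a newer transformers in the Space's requirements (e.g. transformers>=4.43.0) should let the pipeline(...) call in app.py load the config without tripping the validation.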