runtime error

Exit code: 1. Reason:

Downloading shards: 100%|██████████| 5/5 [00:47<00:00, 9.45s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 37, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3826, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1556, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1667, in _check_and_enable_flash_attn_2
    raise ImportError(f"{preface} Flash Attention 2 is not available. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
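The shards downloaded fine; the crash happens afterwards, when `transformers` tries to enable Flash Attention 2 while the `flash-attn` package is not installed in the container. One fix is to install it per the linked docs (e.g. add `flash-attn` to `requirements.txt`); another is to request Flash Attention 2 only when it is actually importable. Below is a minimal sketch of the latter, assuming the failing call at `app.py` line 37 passes `attn_implementation="flash_attention_2"` unconditionally; the model id is a placeholder, since the real one does not appear in the log:

```python
# Sketch: fall back to PyTorch's built-in SDPA attention when flash-attn
# is absent, instead of hard-requiring Flash Attention 2.
import importlib.util

import torch
from transformers import AutoModelForCausalLM

MODEL_ID = "your-org/your-model"  # placeholder; the actual model id is not in the log

# Use Flash Attention 2 only if the flash_attn package can be imported.
attn_implementation = (
    "flash_attention_2"
    if importlib.util.find_spec("flash_attn") is not None
    else "sdpa"
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # Flash Attention 2 requires fp16/bf16 anyway
    attn_implementation=attn_implementation,
)
```

The `sdpa` backend ships with PyTorch 2.x and needs no extra dependency, which makes it a safe fallback on Spaces hardware where building `flash-attn` can be slow or fail outright.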
