runtime error
vigogne-2-7b-instruct.ggmlv3.q4_1.bin: 100%|██████████| 4.24G/4.24G [00:36<00:00, 115MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    llm = llama(model_path=hf_hub_download(repo_id="TheBloke/Vigogne-2-7B-Instruct-GGML", filename="vigogne-2-7b-instruct.ggmlv3.q4_1.bin"), n_ctx=2048)  # download model from HF; n_ctx=2048 for high context length
TypeError: 'module' object is not callable
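The `TypeError: 'module' object is not callable` means that on line 7 the name `llama` is bound to a *module*, not a class or function, so calling it with `(...)` fails. A minimal sketch reproducing the failure mode, with the likely fix noted in comments (assuming the llama-cpp-python library is the intended backend; the exact import in app.py is not shown in the log):

```python
import types

# Minimal reproduction: a module object, like `llama` after `import llama`,
# is not callable, so calling it raises exactly the error seen in the log.
llama = types.ModuleType("llama")
try:
    llm = llama(model_path="model.bin", n_ctx=2048)
except TypeError as err:
    print(err)  # prints: 'module' object is not callable

# The likely fix (assuming llama-cpp-python) is to import the Llama *class*
# rather than a module, and call that instead:
#   from llama_cpp import Llama
#   llm = Llama(model_path=hf_hub_download(...), n_ctx=2048)
```

Note the capital `L`: `llama_cpp` exposes the model wrapper as the class `Llama`, while lowercase `llama` typically refers to a submodule, which triggers precisely this error when called.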
Container logs: