  File "/tmp/.cache/uv/environments-v2/f7387fa1610e269a/lib/python3.13/site-packages/transformers/pipelines/base.py", line 332, in infer_framework_load_model
    raise ValueError(
        f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
    )
ValueError: Could not load model skt/A.X-4.0-VL-Light with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:

while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/f7387fa1610e269a/lib/python3.13/site-packages/transformers/pipelines/base.py", line 292, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/tmp/.cache/uv/environments-v2/f7387fa1610e269a/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 603, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.skt.A.X-4.0-VL-Light.98fd0c5d90cb38ff8b91493d5b3e86334d55a533.configuration_ax4vl.AX4VLConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/f7387fa1610e269a/lib/python3.13/site-packages/transformers/pipelines/base.py", line 310, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "/tmp/.cache/uv/environments-v2/f7387fa1610e269a/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 603, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.skt.A.X-4.0-VL-Light.98fd0c5d90cb38ff8b91493d5b3e86334d55a533.configuration_ax4vl.AX4VLConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, InternVLConfig, JanusConfig, Kosmos2Config, Llama4Config, LlavaConfig, LlavaNextConfig, LlavaNextVideoConfig, LlavaOnevisionConfig, Mistral3Config, MllamaConfig, PaliGemmaConfig, PerceptionLMConfig, Pix2StructConfig, PixtralVisionConfig, Qwen2_5_VLConfig, Qwen2VLConfig, ShieldGemma2Config, SmolVLMConfig, UdopConfig, VipLlavaConfig, VisionEncoderDecoderConfig.

Everything was good in skt_A.X-4.0-VL-Light_1.txt
No suitable GPU found for stepfun-ai/NextStep-1-Large-Edit | 72.42 GB VRAM requirement
No suitable GPU found for stepfun-ai/NextStep-1-Large-Edit | 72.42 GB VRAM requirement
No suitable GPU found for stepfun-ai/NextStep-1-Large | 72.44 GB VRAM requirement
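The VRAM figures above come from the test harness, whose exact formula is not shown in this log. As a hedged sketch only, a weights-dominated estimate for an N-billion-parameter model (dtype width and overhead factor are assumptions here, not the harness's actual method) looks like:

```python
def estimate_vram_gb(params_billion, bytes_per_param=2.0, overhead=1.2):
    """Rough VRAM estimate in GiB: parameter count x dtype width (2 bytes
    for bf16/fp16), padded by an assumed 20% overhead for activations and
    KV cache. Illustrative only -- the harness above may compute its
    'GB VRAM requirement' differently."""
    weights_bytes = params_billion * 1e9 * bytes_per_param
    return weights_bytes * overhead / 2**30

# A ~7B model in bf16 lands in the mid-teens of GiB by this estimate.
print(round(estimate_vram_gb(7), 1))
```

By this kind of estimate, a ~72 GB requirement implies a model too large for any single consumer GPU, which is consistent with the harness skipping these runs.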
Traceback (most recent call last):
  File "/tmp/stepfun-ai_Step-Audio-2-mini_0YdM1zC.py", line 12, in <module>
    model = AutoModelForCausalLM.from_pretrained("stepfun-ai/Step-Audio-2-mini", trust_remote_code=True, torch_dtype="auto")
  File "/tmp/.cache/uv/environments-v2/fe0e55187662fc3a/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 586, in from_pretrained
    model_class = get_class_from_dynamic_module(
        class_ref, pretrained_model_name_or_path, code_revision=code_revision, **hub_kwargs, **kwargs
    )
  File "/tmp/.cache/uv/environments-v2/fe0e55187662fc3a/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 569, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
        repo_id,
        ...<8 lines>...
        repo_type=repo_type,
    )
  File "/tmp/.cache/uv/environments-v2/fe0e55187662fc3a/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 392, in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)
  File "/tmp/.cache/uv/environments-v2/fe0e55187662fc3a/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 224, in check_imports
    raise ImportError(
    ...<2 lines>...
    )
ImportError: This modeling file requires the following packages that were not found in your environment: librosa, torchaudio. Run `pip install librosa torchaudio`
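The ImportError above already names the missing packages. A minimal preflight check (a hypothetical helper, not part of transformers) can surface such gaps before a `trust_remote_code` load is even attempted:

```python
from importlib.metadata import PackageNotFoundError, version

def missing_packages(required):
    """Return the subset of `required` distributions not installed in
    this environment, using only the standard library."""
    missing = []
    for pkg in required:
        try:
            version(pkg)
        except PackageNotFoundError:
            missing.append(pkg)
    return missing

# For stepfun-ai/Step-Audio-2-mini the error names librosa and torchaudio;
# install whatever comes back, e.g. `pip install librosa torchaudio`.
print(missing_packages(["librosa", "torchaudio"]))
```

Running this in the failing environment and installing the reported packages should clear the `check_imports` failure shown above.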
No suitable GPU found for stepfun-ai/step3 | 777.21 GB VRAM requirement
No suitable GPU found for stepfun-ai/step3 | 777.21 GB VRAM requirement
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/2a6a6f0875dd3018/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1271, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
                   ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/2a6a6f0875dd3018/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 966, in __getitem__
    raise KeyError(key)
KeyError: 'hunyuan_v1_dense'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/tencent_Hunyuan-0.5B-Instruct_039Ya9m.py", line 19, in <module>
    pipe = pipeline("text-generation", model="tencent/Hunyuan-0.5B-Instruct")
  File "/tmp/.cache/uv/environments-v2/2a6a6f0875dd3018/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 909, in pipeline
    config = AutoConfig.from_pretrained(
        model, _from_pipeline=task, code_revision=code_revision, **hub_kwargs, **model_kwargs
    )
  File "/tmp/.cache/uv/environments-v2/2a6a6f0875dd3018/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1273, in from_pretrained
    raise ValueError(
    ...<8 lines>...
    )
ValueError: The checkpoint you are trying to load has model type `hunyuan_v1_dense` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
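The fix the message suggests is a Transformers upgrade. A small stdlib-only version comparison can gate the `pipeline()` call up front; note that the minimum version that adds `hunyuan_v1_dense` support is an assumption here and should be checked against the transformers release notes:

```python
def at_least(installed, required):
    """Compare dotted version strings numerically, ignoring non-numeric
    segments (so '4.56.0.dev0' compares as 4.56.0). A sketch, not a full
    PEP 440 comparator."""
    parse = lambda v: [int(p) for p in v.split(".") if p.isdigit()]
    return parse(installed) >= parse(required)

# Hypothetical gate: assume (unverified) hunyuan_v1_dense needs >= 4.56.0.
# In practice, compare transformers.__version__ against that floor and
# upgrade before constructing the pipeline.
print(at_least("4.41.2", "4.56.0"))  # too old -> upgrade transformers first
```

If no release supports the architecture yet, the message's fallback applies: install from source with `pip install git+https://github.com/huggingface/transformers.git`.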
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/89a254f735f44374/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1271, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
                   ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/89a254f735f44374/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 966, in __getitem__
    raise KeyError(key)
KeyError: 'hunyuan_v1_dense'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/tencent_Hunyuan-1.8B-Instruct_0B3jGno.py", line 19, in <module>
    pipe = pipeline("text-generation", model="tencent/Hunyuan-1.8B-Instruct")
  File "/tmp/.cache/uv/environments-v2/89a254f735f44374/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 909, in pipeline
    config = AutoConfig.from_pretrained(
        model, _from_pipeline=task, code_revision=code_revision, **hub_kwargs, **model_kwargs
    )
  File "/tmp/.cache/uv/environments-v2/89a254f735f44374/lib/python3.13/site-packages/transformers/models/auto/configuration_auto.py", line 1273, in from_pretrained
    raise ValueError(