TobDeBer/arco-Q4_K_M-GGUF

This model was converted to big-endian Q4_K_M GGUF format from appvoid/arco using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
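A minimal sketch of fetching and running the quantized model with llama.cpp's CLI. The exact GGUF filename is an assumption; check the repo's file list. Note that a big-endian GGUF is intended for big-endian hosts (e.g. s390x).

```shell
# Download the quantized model from the Hub (filename is an assumption).
huggingface-cli download TobDeBer/arco-Q4_K_M-GGUF arco-q4_k_m.gguf --local-dir .

# Plain CPU inference; -n limits the number of generated tokens.
llama-cli -m arco-q4_k_m.gguf -p "Hello" -n 64
```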

Container repository for CPU adaptations of inference code

Variants for Inference

Slim container

  • run standard (non-CUDA) binaries
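A sketch of using the slim container to run a CPU binary against a mounted model directory. The image name/tag and model path are assumptions, not the repo's actual naming.

```shell
# Hypothetical image name; substitute the actual slim container image.
docker run --rm -it \
  -v "$PWD/models:/models" \
  tobdeber/slim:latest \
  llama-cli -m /models/arco-q4_k_m.gguf -p "Hello" -n 32
```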

CPUdiffusion

  • run inference for diffusion models on CPU
  • include the CUDAonCPU stack

Diffusion container

  • run diffusion app.py variants
  • support CPU and CUDA
  • include Flux
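A sketch of launching one of the app.py variants from the diffusion container. The image name, listen port, and entrypoint are assumptions; the `--gpus all` flag is standard Docker and only applies on a CUDA-capable host.

```shell
# Hypothetical image and port; adjust to the actual diffusion image.
docker run --rm -p 7860:7860 \
  -v "$PWD/models:/models" \
  tobdeber/diffusion:latest \
  python app.py

# For CUDA instead of CPU, add GPU access on a suitable host:
#   docker run --rm --gpus all -p 7860:7860 ... tobdeber/diffusion:latest python app.py
```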

Slim CUDA container

  • run CUDA binaries

Variants for Build

Llama.cpp build container

  • build llama-cli-static
  • build llama-server-static
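The static build above can be sketched with llama.cpp's CMake options; flag names follow current upstream and may differ per release.

```shell
# Build static llama-cli and llama-server from upstream llama.cpp.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=OFF -DGGML_STATIC=ON
cmake --build build --target llama-cli llama-server -j"$(nproc)"
```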

sd build container

  • build sd
  • optional: build sd-server
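Assuming "sd" refers to stable-diffusion.cpp, a plain CPU build looks roughly like this; the repo URL and output path are assumptions based on that project.

```shell
# Build the sd binary from stable-diffusion.cpp (assumed upstream).
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
cmake -B build
cmake --build build -j"$(nproc)"   # the sd binary lands under build/bin
```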

CUDA build container

  • build CUDA binaries
  • support sd_cuda
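A sketch of the CUDA-enabled variants of the same builds, assuming the CUDA toolkit is present in the build container. The CMake option names follow current upstream llama.cpp and stable-diffusion.cpp and may differ per release.

```shell
# llama.cpp with CUDA enabled:
cmake -B build -DGGML_CUDA=ON
cmake --build build --target llama-cli -j"$(nproc)"

# stable-diffusion.cpp with CUDA (option name per its README; assumed):
cmake -B build -DSD_CUDA=ON
cmake --build build -j"$(nproc)"
```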
GGUF metadata

  • model size: 514M params
  • architecture: llama

Model tree for TobDeBer/myContainers

  • base model: appvoid/arco
  • this model: quantized from the base model