ZeroXClem/Mistral-2.5-Prima-Hercules-Fusion-7B-Q8_0-GGUF
Tags: Transformers, GGUF, English, Merge, mergekit, lazymergekit, hydra-project/ChatHercules-2.5-Mistral-7B, Nitral-Archive/Prima-Pastacles-7b, llama-cpp, gguf-my-repo, Inference Endpoints
License: apache-2.0
Files and versions
Mistral-2.5-Prima-Hercules-Fusion-7B-Q8_0-GGUF: 1 contributor, History: 2 commits
Latest commit: ZeroXClem, Upload mistral-2.5-prima-hercules-fusion-7b-q8_0.gguf with huggingface_hub (ee754d1, verified, 3 months ago)
.gitattributes (1.6 kB): Upload mistral-2.5-prima-hercules-fusion-7b-q8_0.gguf with huggingface_hub, 3 months ago
mistral-2.5-prima-hercules-fusion-7b-q8_0.gguf (7.7 GB, LFS): Upload mistral-2.5-prima-hercules-fusion-7b-q8_0.gguf with huggingface_hub, 3 months ago
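
Since the repo is tagged llama-cpp and gguf-my-repo, the quantized file above can be run locally with llama.cpp bindings. The sketch below is a minimal example, assuming llama-cpp-python and huggingface_hub are installed; the context size and GPU-layer settings are illustrative assumptions, not values taken from this repo.

```python
# Minimal sketch: download the Q8_0 GGUF from this repo and run a short completion.
# Assumes `pip install llama-cpp-python huggingface_hub`; tune parameters for your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the quantized weights listed in the file table above.
model_path = hf_hub_download(
    repo_id="ZeroXClem/Mistral-2.5-Prima-Hercules-Fusion-7B-Q8_0-GGUF",
    filename="mistral-2.5-prima-hercules-fusion-7b-q8_0.gguf",
)

# Load the model; n_ctx and n_gpu_layers are assumed example values.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Generate a short completion from a plain prompt.
output = llm("Q: What is a GGUF file?\nA:", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```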