---
title: Ready.Art
emoji: 🌖
colorFrom: purple
colorTo: yellow
sdk: static
pinned: true
---

Ready.Art - Current Members:

Sleep Deprived
FrenzyBiscuit
Darkhn
Inasity
MAWNIPULATOR
ToastyPigeon
ArtusDev

Ready.Art - Request Quants!

You can request quants either in our Discord or on our community page here on HF.

If you want a custom quant that we don't typically do, make sure to list it!

We will get to your request when we get to it. No time guarantees on this FREE service.

NOTE: We almost always do EXL2 quants, but may make an exception for GGUF.

Point of Contact:

Please contact FrenzyBiscuit, ToastyPigeon, or Sleep Deprived if you have questions or problems.

Merge Requests:

We do accept merge requests if they address an issue.

Report Issues!

Please report issues with our quants/models! We don't do extensive testing because the volume of quants we push out is massive.

Chat Completion Issues?

Chat completion (not to be confused with text completion) is typically not supported by our EXL2 quants.

This is not something we cause. It's because the model creator didn't design/test their model for chat completion.

You can fix this by replacing the chat_template in tokenizer_config.json (in some cases it's better to replace all the .json files other than config.json).

You can take the chat_template from any working model of the same architecture. TL;DR: if your model is based on Llama 3.3, find a Llama 3.3 model whose chat completion works and copy/paste its template. A minimal example of the copy/paste step is sketched below.
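
Here is one way to script that swap in Python. This is just a sketch, not our official tooling: the donor repo ID and the quant path are placeholders you need to replace, it assumes huggingface_hub is installed, and it assumes the donor model stores its template under the chat_template key in tokenizer_config.json.

```python
# Minimal sketch: copy a working chat_template into a quant whose
# template is broken or missing. Paths and repo ID are placeholders.
import json
from huggingface_hub import hf_hub_download

# A model of the same architecture whose chat completion is known to work
# (hypothetical ID; substitute one you have verified yourself).
donor_repo = "some-org/llama-3.3-model-that-works"

# tokenizer_config.json inside your local EXL2 quant folder.
quant_config_path = "/path/to/your/exl2-quant/tokenizer_config.json"

# Download the donor's tokenizer_config.json and read its chat_template.
donor_config_path = hf_hub_download(donor_repo, "tokenizer_config.json")
with open(donor_config_path, "r", encoding="utf-8") as f:
    donor_template = json.load(f)["chat_template"]

# Replace (or add) the chat_template in the quant's tokenizer_config.json.
with open(quant_config_path, "r", encoding="utf-8") as f:
    quant_config = json.load(f)
quant_config["chat_template"] = donor_template
with open(quant_config_path, "w", encoding="utf-8") as f:
    json.dump(quant_config, f, indent=2, ensure_ascii=False)
```

If chat completion still misbehaves after this, copying the donor's other tokenizer .json files (everything except config.json) is the next thing to try, as noted above.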