TigerBot

A cutting-edge foundation for your very own LLM.

๐ŸŒ TigerBot โ€ข ๐Ÿค— Hugging Face

This is a 4-bit GPTQ-quantized version of TigerBot-7B-sft.

It was quantized to 4-bit using the GPTQ code at: https://github.com/TigerResearch/TigerBot/tree/main/gptq

For how to download and use this model, see the GitHub repository: https://github.com/TigerResearch/TigerBot

Commands to set up the environment, then clone TigerBot and install its dependencies:

conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt

Inference with command line interface

cd TigerBot/gptq
CUDA_VISIBLE_DEVICES=0 python tigerbot_infer.py TigerResearch/tigerbot-7b-sft-4bit-128g --wbits 4 --groupsize 128 --load TigerResearch/tigerbot-7b-sft-4bit-128g/tigerbot-7b-4bit-128g.pt
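
The `--wbits 4 --groupsize 128` flags describe the storage scheme: weights are kept as 4-bit integer codes, with one scale and zero-point per group of 128 consecutive weights. Below is a minimal pure-Python sketch of that group-wise round-trip, for illustration only; it is not the repository's implementation, and GPTQ itself goes further by compensating rounding error to minimize each layer's output distortion rather than rounding to nearest.

```python
def quantize_group(weights, wbits=4):
    """Quantize one group of floats to integer codes plus (scale, zero-point).

    Round-to-nearest sketch of the 4-bit storage format; the actual GPTQ
    algorithm lives in the repo's gptq/ directory.
    """
    qmax = (1 << wbits) - 1                              # 15 for 4-bit
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax if hi > lo else 1.0         # guard constant groups
    codes = [round((w - lo) / scale) for w in weights]   # integers in 0..qmax
    dequant = [c * scale + lo for c in codes]            # reconstructed weights
    return codes, scale, lo, dequant

def quantize(weights, wbits=4, groupsize=128):
    """Split a flat weight list into independent groups, one scale per group."""
    return [quantize_group(weights[i:i + groupsize], wbits)
            for i in range(0, len(weights), groupsize)]
```

With a group size of 128, only one scale and zero-point are stored per 128 weights, so the per-weight overhead on top of the raw 4-bit codes stays small; smaller groups cost more storage but track local weight ranges more tightly.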