
TigerBot

A cutting-edge foundation for your very own LLM.

💻Github • 🌐 TigerBot • 🤗 Hugging Face

Quick Start

  • Method 1: use via transformers (a minimal Python loading sketch follows this list)

    • Clone the TigerBot repo

      git clone https://github.com/TigerResearch/TigerBot.git
      
    • Run the inference script

      python infer.py --model_path TigerResearch/tigerbot-70b-base-v1 --model_type base
      
  • Method 2: download the weights locally, then run inference

    • Clone the TigerBot repo

      git clone https://github.com/TigerResearch/TigerBot.git
      
    • Install Git LFS: git lfs install

    • Download the weights from Hugging Face or ModelScope

      git clone https://huggingface.co/TigerResearch/tigerbot-70b-base-v1
      git clone https://www.modelscope.cn/TigerResearch/tigerbot-70b-base-v1.git
      
    • Run the inference script, pointing --model_path at the local weights directory

      python infer.py --model_path tigerbot-70b-base-v1 --model_type base
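
If you prefer not to go through the repo's infer.py, the checkpoint can also be loaded directly with the transformers API. The snippet below is a minimal sketch, assuming the standard AutoTokenizer/AutoModelForCausalLM loading path works for this checkpoint and that enough GPU memory is available for the 70B weights; the prompt text and generation settings are placeholders, not the repo's defaults.

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      # Hub ID for Method 1; for Method 2, point this at the local clone ("tigerbot-70b-base-v1").
      model_id = "TigerResearch/tigerbot-70b-base-v1"

      tokenizer = AutoTokenizer.from_pretrained(model_id)
      model = AutoModelForCausalLM.from_pretrained(
          model_id,
          torch_dtype=torch.bfloat16,   # assumption: bf16 halves memory vs fp32; a 70B model still needs multiple GPUs
          device_map="auto",            # spread layers across the available devices
      )

      # Plain text completion, since this is a base (not chat) model.
      inputs = tokenizer("TigerBot is", return_tensors="pt").to(model.device)
      outputs = model.generate(**inputs, max_new_tokens=64)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))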
      

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                 Value
Avg.                   62.1
ARC (25-shot)          62.46
HellaSwag (10-shot)    83.61
MMLU (5-shot)          65.49
TruthfulQA (0-shot)    52.76
Winogrande (5-shot)    80.19
GSM8K (5-shot)         37.76
DROP (3-shot)          52.45