Text Generation · Transformers · PyTorch · Chinese · English · llama · text-generation-inference
fireballoon committed
Commit b08ad11 (1 parent: bc223f3)

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -124,7 +124,7 @@ This algorithm has a runtime complexity of O(log n) and a space complexity of O(
  baichuan-vicuna-7b is a chat model obtained by full-parameter fine-tuning on Vicuna ShareGPT data.
  - The base model is [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a commercially usable large-scale pretrained model developed by Baichuan Intelligence.
  - The fine-tuning data consists of [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), mixed with a proportion of [COT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) and [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions) data to strengthen the model's reasoning and coding ability (the data-mixing strategy is inspired by the findings of [TULU](https://arxiv.org/abs/2306.04751)).
- - The training code is at https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py,the code is based on [FastChat](https://github.com/lm-sys/FastChat).
+ - The training code is at https://huggingface.co/fireballoon/baichuan-vicuna-7b/blob/main/train_vicuna.py, the code is based on [FastChat](https://github.com/lm-sys/FastChat).
 
 
  # Test examples on Ziyan Eval
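
The README content in the diff above describes the model's base, fine-tuning data, and training code. As a minimal sketch of how the resulting model could be loaded for inference (not part of this commit), the following uses the standard transformers API; the Vicuna-style "USER:/ASSISTANT:" prompt format is an assumption based on the ShareGPT fine-tuning data, not something this commit specifies.

```python
# Minimal, hedged sketch: load fireballoon/baichuan-vicuna-7b with transformers.
# The prompt template below is assumed (Vicuna-style), not taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fireballoon/baichuan-vicuna-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on a single GPU
    device_map="auto",          # requires `accelerate`
)

# Assumed Vicuna-style single-turn prompt.
prompt = "USER: Tell me about the baichuan-7B base model. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```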