Update ReadMe.md

README.md CHANGED
@@ -9,9 +9,14 @@ tags:
 ---
 [WIP]
 
 This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
 
-You can use LlamaForCausalLM for model inference, which is the same as LLaMA/LLaMA-2 models
 
 The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.
 
@@ -22,19 +27,3 @@ PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
 CURRENT MMLU: 50.36
 
 Issue: Compared to the original Qwen-Chat scoring 53.9, the MMLU score dropped slightly (-3.54) due to insufficient realignment.
-
-[WIP]
-
-This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) (Tongyi Qianwen), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
-
-You can use LlamaForCausalLM for model inference, the same as with LLaMA/LLaMA-2 (the tokenizer is unchanged, so you still need to allow external code when loading, e.g. `AutoTokenizer.from_pretrained(llama_model_path, use_fast=False, trust_remote_code=True)`).
-
-The model has been edited to be white-labelled, meaning the model will no longer call itself Tongyi Qianwen (Qwen).
-
-Spoiler: Further fine-tuning is in progress; the current version is a work in progress, and some knowledge may be biased or hallucinated due to the structural changes. An update is coming soon, very very very soon.
-
-PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
-
-CURRENT MMLU: 50.36
-
-Issue: Compared to the original Qwen-Chat's 53.9, the MMLU score dropped slightly (-3.54) due to insufficient realignment.
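The README pins the prompt format to ChatML. As a rough illustration of what that layout looks like in practice, a prompt could be assembled as below; the `to_chatml` helper is a hypothetical sketch based on the linked chatml.md description, not code from this repository:

```python
def to_chatml(messages):
    """Render (role, text) pairs as a ChatML prompt string.

    Hypothetical helper: the <|im_start|>/<|im_end|> markers follow the
    ChatML layout described in the linked chatml.md document.
    """
    parts = [f"<|im_start|>{role}\n{text}<|im_end|>" for role, text in messages]
    # A trailing open assistant turn cues the model to generate a reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = to_chatml([
    ("system", "You are a helpful assistant."),
    ("user", "Hello!"),
])
print(prompt)
```

Each turn is delimited explicitly, so the chat template does not depend on the tokenizer carrying any custom code.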
 ---
 [WIP]
 
+Origin repository [JosephusCheung/Qwen-LLaMAfied-7B-Chat](https://huggingface.co/JosephusCheung/Qwen-LLaMAfied-7B-Chat).
+
+
 This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.
 
+You can use LlamaForCausalLM for model inference, which is the same as LLaMA/LLaMA-2 models.
+
+I converted the tokenizer from tiktoken format to the Hugging Face format, so you no longer need to allow external code when loading.
 
 The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.
 
 CURRENT MMLU: 50.36
 
 Issue: Compared to the original Qwen-Chat scoring 53.9, the MMLU score dropped slightly (-3.54) due to insufficient realignment.
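Since the tokenizer is now in Hugging Face format, loading reduces to the stock LLaMA classes. The sketch below only shows the intended API surface; `load_llamafied` and `model_path` are illustrative names (not repository code), and actually fetching a 7B checkpoint is left to the reader:

```python
def load_llamafied(model_path):
    """Load a LLaMAfied checkpoint with stock LLaMA classes.

    `model_path` is a placeholder (a Hub id or local directory); this is a
    sketch of the intended loading path, not code from the repository.
    """
    # Imported lazily so the sketch can be inspected without transformers
    # installed.
    from transformers import AutoTokenizer, LlamaForCausalLM

    # The tokenizer ships in Hugging Face format after this commit, so
    # trust_remote_code=True is no longer required.
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = LlamaForCausalLM.from_pretrained(model_path)
    return tokenizer, model
```

Compared with the removed instructions, the only behavioral change is that `trust_remote_code=True` (and `use_fast=False`) can be dropped from `from_pretrained`.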