wuxiaojun committed · verified
Commit ed0c951 · 1 Parent(s): 8377339

Update README.md

Files changed (1): README.md (+112 -3)
README.md CHANGED (the previous version contained only the `license: apache-2.0` YAML front matter; the full updated contents follow below)
---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- finance
- text-generation-inference
---

<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->

<div align="center">
  <img src="https://github.com/IDEA-FinAI/Golden-Touchstone/blob/main/Touchstone-GPT-logo.png?raw=true" width="7%" alt="Golden-Touchstone" />
  <h1 style="display: inline-block; vertical-align: middle; margin-left: 10px; font-size: 2em; font-weight: bold;">Golden-Touchstone Benchmark</h1>
</div>

<div align="center" style="line-height: 1;">
  <a href="https://arxiv.org/abs/2311.03301" target="_blank" style="margin: 2px;">
    <img alt="arXiv" src="https://img.shields.io/badge/Arxiv-2311.03301-b31b1b.svg?logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/IDEA-FinAI/Golden-Touchstone" target="_blank" style="margin: 2px;">
    <img alt="github" src="https://img.shields.io/github/stars/IDEA-FinAI/Golden-Touchstone.svg?style=social" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/IDEA-FinAI/TouchstoneGPT-7B-Instruct" target="_blank" style="margin: 2px;">
    <img alt="datasets" src="https://img.shields.io/badge/🤗-Datasets-yellow.svg" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/IDEA-FinAI/TouchstoneGPT-7B-Instruct" target="_blank" style="margin: 2px;">
    <img alt="huggingface" src="https://img.shields.io/badge/🤗-Model-yellow.svg" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

# Golden-Touchstone

Golden Touchstone is a simple, effective, and systematic benchmark for bilingual (Chinese-English) financial large language models, intended, like a touchstone, to drive research on and real-world application of financial LLMs. We have also trained and open-sourced Touchstone-GPT as a baseline for subsequent community research.
## Introduction

The paper evaluates each open-source financial benchmark for diversity, systematicity, and LLM adaptability.

![benchmark_info](https://github.com/IDEA-FinAI/Golden-Touchstone/blob/main/benchmark_info.png?raw=true)
By collecting and selecting representative task datasets, we built our own bilingual (Chinese-English) Touchstone Benchmark, which comprises 22 datasets.

![golden_touchstone_info](https://github.com/IDEA-FinAI/Golden-Touchstone/blob/main/golden_touchstone_info.png?raw=true)
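If you prefer to pull individual benchmark tasks programmatically rather than cloning the GitHub repository, a minimal sketch with the `datasets` library is shown below. The dataset repository id and subset name are illustrative assumptions, not confirmed identifiers; consult the GitHub repo for the authoritative data location and task names.

```python
# Minimal sketch, assuming the benchmark tasks are published as Hugging Face
# datasets. The repository id and subset name are hypothetical placeholders.
from datasets import load_dataset

task = load_dataset("IDEA-FinAI/Golden-Touchstone", "financial_sentiment", split="test")
for example in task.select(range(3)):  # peek at the first few examples
    print(example)
```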
We extensively evaluated GPT-4o, Llama-3, Qwen2, FinGPT, and our own Touchstone-GPT, analyzed the strengths and weaknesses of these models, and provided directions for subsequent research on financial large language models.

![evaluation](https://github.com/IDEA-FinAI/Golden-Touchstone/blob/main/evaluation.png?raw=true)
## Evaluation of Touchstone Benchmark

Please see our GitHub repo [Golden-Touchstone](https://github.com/IDEA-FinAI/Golden-Touchstone) for the evaluation code and instructions.
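The official evaluation scripts live in the GitHub repo above. Purely as an illustration of the kind of scoring a classification-style task (such as financial sentiment) involves, and not as the repo's harness, a self-contained sketch of accuracy and macro-F1 could look like this:

```python
# Illustrative scoring sketch for a classification-style task; this is not
# the official Golden-Touchstone evaluation harness.
def accuracy(preds, golds):
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def macro_f1(preds, golds, labels=("Positive", "Negative", "Neutral")):
    per_label_f1 = []
    for label in labels:
        tp = sum(p == label and g == label for p, g in zip(preds, golds))
        fp = sum(p == label and g != label for p, g in zip(preds, golds))
        fn = sum(p != label and g == label for p, g in zip(preds, golds))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        per_label_f1.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(per_label_f1) / len(per_label_f1)

preds = ["Positive", "Neutral", "Negative"]  # model predictions
golds = ["Positive", "Negative", "Negative"]  # gold labels
print(f"accuracy={accuracy(preds, golds):.3f}, macro_f1={macro_f1(preds, golds):.3f}")
```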
## Usage of Touchstone-GPT

The code snippet below uses `apply_chat_template` to show how to load the tokenizer and model and how to generate a response.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "IDEA-FinAI/TouchstoneGPT-7B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("IDEA-FinAI/TouchstoneGPT-7B-Instruct")

prompt = "What is the sentiment of the following financial post: Positive, Negative, or Neutral?\nsees #Apple at $150/share in a year (+36% from today) on growing services business."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Keep only the newly generated tokens before decoding.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
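For repeated queries, the steps above can be wrapped in a small helper. This is a sketch that reuses the `model` and `tokenizer` objects loaded in the snippet above; the generation settings are illustrative defaults rather than tuned values.

```python
# Sketch: wrap the chat-template + generate + decode steps into one helper.
# Assumes `model` and `tokenizer` from the snippet above are already loaded.
def chat(model, tokenizer, prompts, system="You are a helpful assistant.", max_new_tokens=512):
    responses = []
    for prompt in prompts:
        messages = [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ]
        text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        inputs = tokenizer([text], return_tensors="pt").to(model.device)
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
        new_tokens = output_ids[0][inputs.input_ids.shape[1]:]  # drop the prompt tokens
        responses.append(tokenizer.decode(new_tokens, skip_special_tokens=True))
    return responses

print(chat(model, tokenizer, ["What is the sentiment of the following financial post: Positive, Negative, or Neutral?\nShares slid 8% after the earnings miss."])[0])
```
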
## Citation

```
@article{gan2023ziya2,
  title={Ziya2: Data-centric learning is all llms need},
  author={Gan, Ruyi and Wu, Ziwei and Sun, Renliang and Lu, Junyu and Wu, Xiaojun and Zhang, Dixiang and Pan, Kunhao and He, Junqing and Tian, Yuanhe and Yang, Ping and others},
  journal={arXiv preprint arXiv:2311.03301},
  year={2023}
}
```