ganchengguang committed on
Commit
f0ccf5e
1 Parent(s): f443c59

Update README.md

Files changed (1)
  1. README.md +22 -3
README.md CHANGED
@@ -12,10 +12,14 @@ tags:
 ---
 This is the model from the paper, based on LLaMA3-8B-Instruct (Meta: https://huggingface.co/meta-llama/Meta-Llama-3-8B).
 
-Please use the following format with OIELLM to extract information from the input text or sentence.
+
+***Please use the following format with OIELLM*** to extract information from the input text or sentence.
+
 
 OIELLM supports three languages (English, Chinese, and Japanese), and you must use the task instruction words to specify the kind of task.
 
+
+
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/629311a945f405d06678224b/9zG-y_-YbPx-z2woGCx-8.png)
 
 The following is the input and output format:
@@ -25,6 +29,21 @@ The following is the input and output format:
 }
 
 
-Load the model with from_pretrained, using AutoTokenizer and AutoModelForCausalLM.
+Load the model with from_pretrained, using **AutoTokenizer and AutoModelForCausalLM**.
+
+**If you have any questions, you can leave a message in this community or contact me directly via the paper's e-mail.**
+
+
+
+
+Paper address and citation information: https://arxiv.org/abs/2407.10953
 
-If you have any questions, you can leave a message in this community or contact me directly via the paper's e-mail.
+@misc{gan2024mmmmultilingualmutualreinforcement,
+  title={MMM: Multilingual Mutual Reinforcement Effect Mix Datasets & Test with Open-domain Information Extraction Large Language Models},
+  author={Chengguang Gan and Qingyu Yin and Xinyang He and Hanjun Wei and Yunhao Liang and Younghun Lim and Shijian Wang and Hexiang Huang and Qinghao Zhang and Shiwen Ni and Tatsunori Mori},
+  year={2024},
+  eprint={2407.10953},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2407.10953},
+}
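
For reference, the loading instructions in the README above (from_pretrained with AutoTokenizer and AutoModelForCausalLM) correspond to roughly the sketch below. The repository id and the prompt string are placeholders rather than values taken from the README; the concrete task instruction words and input/output format are the ones shown in the image above.

```python
# Minimal sketch of loading OIELLM with the transformers Auto classes,
# as described in the README. Repo id and prompt string are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ganchengguang/OIELLM"  # placeholder: replace with this model's actual Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision fits the 8B model on one GPU
    device_map="auto",
)

# Placeholder prompt: OIELLM expects a task instruction word plus the input
# sentence, in the exact format shown in the image above (English, Chinese,
# and Japanese are supported).
prompt = "<task instruction word>: <input text or sentence>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The placeholder prompt only marks where the task instruction word and the input sentence go; consult the image (or the paper) for the exact format and the expected output.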