tjake committed
Commit
2ecb5a1
1 Parent(s): a035d09

Upload folder using huggingface_hub

Browse files
Files changed (5)
  1. README.md +108 -0
  2. config.json +29 -0
  3. model.safetensors +3 -0
  4. tokenizer.json +0 -0
  5. tokenizer_config.json +61 -0
README.md ADDED
@@ -0,0 +1,108 @@
+ ---
+ license: apache-2.0
+ library_name: transformers
+ base_model: 01-ai/Yi-Coder-1.5B
+ ---
+
+ # Quantized Version of 01-ai/Yi-Coder-1.5B-Chat
+
+ This model is a quantized variant of 01-ai/Yi-Coder-1.5B-Chat, prepared for Jlama, a Java-based inference engine. Quantization shrinks the model and speeds up inference while preserving most of its accuracy, making it practical to deploy in production environments.
+
+ For more information on Jlama, visit the [Jlama GitHub repository](https://github.com/tjake/jlama).
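+
+ As a sketch of how a Jlama-quantized model like this one can be loaded and queried from Java: the snippet below follows the sample in the Jlama README, but package paths and method signatures vary between Jlama versions, and the model ID shown is a placeholder for this repository's actual ID.
+
+ ```java
+ import java.io.File;
+ import java.util.UUID;
+
+ import com.github.tjake.jlama.model.AbstractModel;
+ import com.github.tjake.jlama.model.ModelSupport;
+ import com.github.tjake.jlama.model.functions.Generator;
+ import com.github.tjake.jlama.safetensors.DType;
+ import com.github.tjake.jlama.safetensors.SafeTensorSupport;
+ import com.github.tjake.jlama.safetensors.prompt.PromptContext;
+
+ public class JlamaDemo {
+     public static void main(String[] args) throws Exception {
+         String model = "tjake/Yi-Coder-1.5B-Chat-JQ4"; // placeholder; substitute this repository's model ID
+         String workingDirectory = "./models";
+
+         // Download the model from the Hugging Face Hub, or reuse the local copy if present
+         File localModelPath = SafeTensorSupport.maybeDownloadModel(workingDirectory, model);
+
+         // Load the quantized weights; compute in F32, keep quantized memory as I8
+         AbstractModel m = ModelSupport.loadModel(localModelPath, DType.F32, DType.I8);
+
+         // Build a chat prompt using the model's own chat template
+         PromptContext ctx = m.promptSupport().get().builder()
+                 .addSystemMessage("You are a helpful assistant.")
+                 .addUserMessage("Write a quick sort algorithm.")
+                 .build();
+
+         // Generate up to 256 new tokens at temperature 0.7
+         Generator.Response r = m.generate(UUID.randomUUID(), ctx, 0.7f, 256, (s, f) -> {});
+         System.out.println(r.responseText);
+     }
+ }
+ ```
+
+ Jlama also provides a command-line launcher and an OpenAI-compatible REST endpoint; see the repository linked above for the current entry points.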
+
+ ---
+
+
+ <div align="center">
+
+ <picture>
+ <img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="120px">
+ </picture>
+
+ </div>
+
+ <p align="center">
+ <a href="https://github.com/01-ai">🐙 GitHub</a> •
+ <a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
+ <a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
+ <a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
+ <br/>
+ <a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
+ <a href="https://01-ai.github.io/">💪 Tech Blog</a> •
+ <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
+ <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
+ </p>
+
+ # Intro
+
+ Yi-Coder is a series of open-source code language models that deliver state-of-the-art coding performance with fewer than 10 billion parameters.
+
+ Key features:
+ - Excels at long-context understanding, with a maximum context length of 128K tokens.
+ - Supports 52 major programming languages:
+ ```bash
+ 'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
+ ```
+
+ For model details and benchmarks, see the [Yi-Coder blog](https://01-ai.github.io/) and the [Yi-Coder README](https://github.com/01-ai/Yi-Coder).
+
+ <p align="left">
+ <img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/yi-coder-calculator-demo.gif?raw=true" alt="demo1" width="500"/>
+ </p>
+
+ # Models
+
+ | Name | Type | Context Length | Download |
+ |--------------------|------|----------------|----------|
+ | Yi-Coder-9B-Chat   | Chat | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B-Chat) |
+ | Yi-Coder-1.5B-Chat | Chat | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B-Chat) |
+ | Yi-Coder-9B        | Base | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-9B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B) |
+ | Yi-Coder-1.5B      | Base | 128K | [🤗 Hugging Face](https://huggingface.co/01-ai/Yi-Coder-1.5B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B) |
+
+ # Benchmarks
+
+ As illustrated in the figure below, Yi-Coder-9B-Chat achieved an impressive 23% pass rate on LiveCodeBench, making it the only model under 10B parameters to surpass 20%. It also outperforms DeepSeekCoder-33B-Ins (22.3%), CodeGeex4-9B-all (17.8%), CodeLLama-34B-Ins (13.3%), and CodeQwen1.5-7B-Chat (12%).
+
+ <p align="left">
+ <img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/bench1.webp?raw=true" alt="bench1" width="1000"/>
+ </p>
+
+ # Quick Start
+
+ You can use transformers to run inference with Yi-Coder models (both chat and base versions) as follows:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ device = "cuda"  # the device to load the model onto
+ model_path = "01-ai/Yi-Coder-9B-Chat"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()
+
+ prompt = "Write a quick sort algorithm."
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+ # Render the conversation with the model's chat template, leaving the assistant turn open
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=1024,
+     eos_token_id=tokenizer.eos_token_id
+ )
+ # Strip the prompt tokens, keeping only the newly generated completion
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
+
+ To get up and running with the Yi-Coder series quickly, see the [Yi-Coder README](https://github.com/01-ai/Yi-Coder).
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "architectures": [
+     "LlamaForCausalLM"
+   ],
+   "attention_bias": false,
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 7,
+   "hidden_act": "silu",
+   "hidden_size": 2048,
+   "initializer_range": 0.02,
+   "intermediate_size": 5504,
+   "max_position_embeddings": 131072,
+   "mlp_bias": false,
+   "model_type": "llama",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "num_key_value_heads": 16,
+   "pad_token_id": 0,
+   "pretraining_tp": 1,
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": null,
+   "rope_theta": 10000000,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.44.0",
+   "use_cache": false,
+   "vocab_size": 64000
+ }
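+
+ The shape fields above fully determine the model's size. As a back-of-the-envelope check (a sketch assuming the standard LLaMA parameter layout, which matches `"model_type": "llama"` and `"tie_word_embeddings": false` here):
+
+ ```java
+ public class ParamCount {
+     public static void main(String[] args) {
+         long vocab = 64_000, hidden = 2_048, inter = 5_504, layers = 24;
+         long heads = 16, kvHeads = 16; // num_key_value_heads == num_attention_heads: full multi-head attention
+         long headDim = hidden / heads;
+
+         // Attention: Q and O are hidden x hidden; K and V are hidden x (kvHeads * headDim)
+         long attn = 2 * hidden * hidden + 2 * hidden * (kvHeads * headDim);
+         // SiLU-gated MLP: gate, up, and down projections
+         long mlp = 3L * hidden * inter;
+         // Plus two RMSNorm weight vectors per layer
+         long perLayer = attn + mlp + 2 * hidden;
+
+         // Untied embeddings: separate input embedding and lm_head matrices
+         long embeddings = 2L * vocab * hidden;
+         long total = layers * perLayer + embeddings + hidden; // + final norm
+
+         System.out.printf("~%.2fB parameters%n", total / 1e9); // prints ~1.48B
+     }
+ }
+ ```
+
+ Roughly 1.48B parameters against the 922,991,464-byte `model.safetensors` below works out to about 0.6 bytes per parameter, consistent with a ~4-bit quantization plus metadata; the exact storage format is whatever Jlama's quantizer emits.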
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:887af2d03b01b43f5060dffb6df71ec67dc42258a1a49e866f19b437d2c7c20e
+ size 922991464
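+
+ The three lines above are a Git LFS pointer, not the weights themselves: the tensor data lives in LFS storage, and the pointer records only its SHA-256 and byte size. A quick integrity check of a downloaded copy against those values (the local path below is a stand-in):
+
+ ```java
+ import java.io.InputStream;
+ import java.nio.file.Files;
+ import java.nio.file.Path;
+ import java.security.MessageDigest;
+ import java.util.HexFormat;
+
+ public class VerifyWeights {
+     public static void main(String[] args) throws Exception {
+         Path f = Path.of("models/model.safetensors"); // hypothetical local path
+         System.out.println("size ok:   " + (Files.size(f) == 922_991_464L));
+
+         // Stream the file through SHA-256 and compare against the pointer's oid
+         MessageDigest sha = MessageDigest.getInstance("SHA-256");
+         try (InputStream in = Files.newInputStream(f)) {
+             byte[] buf = new byte[1 << 20];
+             for (int n; (n = in.read(buf)) != -1; ) sha.update(buf, 0, n);
+         }
+         String digest = HexFormat.of().formatHex(sha.digest());
+         System.out.println("sha256 ok: " + digest.equals(
+                 "887af2d03b01b43f5060dffb6df71ec67dc42258a1a49e866f19b437d2c7c20e"));
+     }
+ }
+ ```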
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,61 @@
+ {
+   "add_bos_token": false,
+   "add_eos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<|startoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "6": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "7": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<|startoftext|>",
+   "chat_template": "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% if system_message is defined %}{{ '<|im_start|>system\n' + system_message + '<|im_end|>\n' }}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|im_start|>user\n' + content + '<|im_end|>\n<|im_start|>assistant\n' }}{% elif message['role'] == 'assistant' %}{{ content + '<|im_end|>' + '\n' }}{% endif %}{% endfor %}",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "legacy": true,
+   "model_max_length": 200000,
+   "pad_token": "<unk>",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "split_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
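+
+ The `chat_template` above defines the ChatML-style prompt format the chat model expects: an optional leading system block, then user/assistant turns delimited by `<|im_start|>`/`<|im_end|>`, with the assistant turn opened immediately after each user message. As an illustration only (a hypothetical helper, not part of any shipped API), the same rendering logic in plain Java:
+
+ ```java
+ import java.util.List;
+ import java.util.Map;
+
+ public class ChatTemplateDemo {
+     // Mirrors the Jinja chat_template above; assumes any system message comes first,
+     // as the template itself does.
+     static String render(List<Map<String, String>> messages) {
+         StringBuilder sb = new StringBuilder();
+         for (Map<String, String> m : messages) {
+             String content = m.get("content");
+             switch (m.get("role")) {
+                 case "system" -> sb.append("<|im_start|>system\n").append(content).append("<|im_end|>\n");
+                 case "user" -> sb.append("<|im_start|>user\n").append(content)
+                                  .append("<|im_end|>\n<|im_start|>assistant\n");
+                 case "assistant" -> sb.append(content).append("<|im_end|>\n");
+             }
+         }
+         return sb.toString();
+     }
+
+     public static void main(String[] args) {
+         System.out.print(render(List.of(
+                 Map.of("role", "system", "content", "You are a helpful assistant."),
+                 Map.of("role", "user", "content", "Write a quick sort algorithm."))));
+         // Prints:
+         // <|im_start|>system
+         // You are a helpful assistant.
+         // <|im_end|>
+         // <|im_start|>user
+         // Write a quick sort algorithm.
+         // <|im_end|>
+         // <|im_start|>assistant
+     }
+ }
+ ```
+
+ Since `<|im_end|>` (token id 7) is the `eos_token` and config.json sets `eos_token_id` to 7, generation stops cleanly at the end of the assistant turn.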