linzheng committed on
Commit 7537513 · verified · 1 Parent(s): 7e15500

Update README.md

Files changed (1)
  1. README.md +37 -28
README.md CHANGED
@@ -8,60 +8,71 @@ license: apache-2.0
  ## Model Resources

  - **Repository:** https://github.com/openevabyte/evabyte
- - **Blog:** https://hkunlp.github.io/blog/2024/evabyte
  - **Paper:** Coming soon

  ## Model Details

- EvaByte is trained using the SambaNova SN30 RDU system with a batch size of 8M bytes and 32K context length. The training process consists of 3 phases: after pre-training on 1.2T bytes (yielding **EvaByte-6.5B-Phase1**), two independent annealing runs (100B and 200B bytes respectively) are conducted with learning rate linearly decayed from 1e-4 to 0. The resulting checkpoints are merged via model soup (**EvaByte-6.5B**), which then undergoes supervised fine-tuning (**EvaByte-6.5B-SFT**).

  | Stage | Model |
  |:----- |:-----|
- | Base (before annealing) | [EvaByte-6.5B-Phase1](https://huggingface.co/evabyte/EvaByte-6.5B-Phase1) <-- you are here |
- | Base | [EvaByte-6.5B](https://huggingface.co/evabyte/EvaByte-6.5B) |
- | SFT | [EvaByte-6.5B-SFT](https://huggingface.co/evabyte/EvaByte-6.5B-SFT) |

  ## Usage

  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
  import torch

- tokenizer = AutoTokenizer.from_pretrained("evabyte/EvaByte-6.5B-Phase1", trust_remote_code=True)
- model = AutoModelForCausalLM.from_pretrained("evabyte/EvaByte-6.5B-Phase1", torch_dtype=torch.bfloat16, trust_remote_code=True).eval().to("cuda")

  prompt = "The quick brown fox jumps "

- input_ids = tokenizer(prompt, return_tensors="pt").input_ids
- # alternatively, simply use the UTF-8 bytes.
- # Note: the tokenizer offsets each byte by 64 and prepends the sentinel <bos>
- input_ids = torch.tensor([[1] + list(map(lambda x: x + 64, prompt.encode("utf-8")))])

- input_ids = input_ids.to("cuda")

  # byte-by-byte generation (default)
  generation_output = model.generate(
      input_ids=input_ids,
      max_new_tokens=32
  )
- # alternatively, use multibyte generation
  generation_output = model.multi_byte_generate(
      input_ids=input_ids,
      max_new_tokens=32
  )

  response = tokenizer.decode(
      generation_output[0][input_ids.shape[1]:],
      skip_special_tokens=False,
      clean_up_tokenization_spaces=False
  )
  print(response)
  ```

- We support two modes of generation:
- - `model.generate()`: When invoked, the model will generate one byte at a time. This is the default generation interface in the Huggingface `transformers` library.
- - `model.multi_byte_generate()`: the model will generate multiple bytes in a single step, with the implementation adapted from [Medusa](https://github.com/FasterDecoding/Medusa). This will be much faster than above and usually yields the same result under the setting of greedy decoding. `model.multi_byte_generate()` supports a subset of arguments in `model.generate()`:
    - `input_ids`: the input byte ids.
    - `temperature`: the temperature for sampling.
    - `max_length`: the maximum length of the generated sequence.
@@ -70,27 +81,25 @@ We support two modes of generation:
    - `top_p`: the top-p parameter for sampling.
    - `do_sample`: greedy decoding or sampling.

- NOTE:
- - `device_map="auto"` is not supported for > 2 GPUs
- - Decoding only supports batch size of 1 with `attention_mask=None` for now.
- - Only supports `torch_dtype=torch.bfloat16` for now.

  ## Bias, Risks, and Limitations
- As a pretrained base model, **EvaByte-6.5B-Phase1** has not been fine-tuned for chat or instruction following, so users should not expect reliable performance in conversational or instruction-based tasks. Like other base models, it does not incorporate any moderation mechanisms, making it possible to generate potentially harmful or inappropriate content.

  ## Evaluation

- For detailed evaluation results, please refer to the [blog](https://hkunlp.github.io/blog/2024/evabyte).

  ## Citation
-
- **BibTeX:**
-
- ```
  @misc{evabyte,
  title = {EvaByte: Efficient Byte-level Language Models at Scale},
- url = {},
- author = {Lin Zheng and Xueliang Zhao and Guangtao Wang and Chen Wu and David Dong and Angela Wang and Mingran Wang and Haige Bo and Tony Zhang and Changran Hu and Urmish Thakker and Lingpeng Kong},
  year = {2025}
  }
  ```
 
  ## Model Resources

  - **Repository:** https://github.com/openevabyte/evabyte
+ - **Blog:** https://hkunlp.github.io/blog/2025/evabyte
  - **Paper:** Coming soon

  ## Model Details

+ EvaByte is trained using the performant SambaNova SN30 RDU system with a batch size of 8M bytes and 32K context length. The training process consists of 3 phases: after pre-training on 1.2T bytes (yielding **EvaByte-Phase1**), two independent annealing runs (100B and 200B bytes respectively) are conducted with learning rate linearly decayed from 1e-4 to 0. The resulting checkpoints are merged via model soup (**EvaByte**), which then undergoes supervised fine-tuning (**EvaByte-SFT**).

  | Stage | Model |
  |:----- |:-----|
+ | Base (before annealing) | [EvaByte-Phase1](https://huggingface.co/evabyte/EvaByte-Phase1) <-- you are here |
+ | Base | [EvaByte](https://huggingface.co/evabyte/EvaByte) |
+ | SFT | [EvaByte-SFT](https://huggingface.co/evabyte/EvaByte-SFT) |
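The "model soup" merge mentioned above is, at its simplest, element-wise averaging of checkpoint weights. The snippet below is only a minimal sketch under that assumption; the actual merge recipe (weighting, checkpoint selection) is not specified here, and the local checkpoint paths are hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical local paths to the two annealed checkpoints (illustrative only).
ckpt_a = AutoModelForCausalLM.from_pretrained("anneal_100b", torch_dtype=torch.bfloat16, trust_remote_code=True)
ckpt_b = AutoModelForCausalLM.from_pretrained("anneal_200b", torch_dtype=torch.bfloat16, trust_remote_code=True)

state_a, state_b = ckpt_a.state_dict(), ckpt_b.state_dict()
# Model soup: uniform average of the floating-point parameters,
# keeping non-float buffers from the first checkpoint.
souped = {
    name: (t + state_b[name]) / 2 if t.is_floating_point() else t
    for name, t in state_a.items()
}
ckpt_a.load_state_dict(souped)
ckpt_a.save_pretrained("evabyte_soup")  # hypothetical output directory
```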

  ## Usage

+ **Note:** Make sure to set `trust_remote_code=True` when loading the model (or tokenizer), as our implementation includes custom code.
+
+ The code snippet below demonstrates how to use EvaByte-Phase1 for completion:
+
  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
  import torch

+ # Load model and tokenizer
+ tokenizer = AutoTokenizer.from_pretrained("evabyte/EvaByte-Phase1", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("evabyte/EvaByte-Phase1", torch_dtype=torch.bfloat16, trust_remote_code=True).eval().to("cuda")

  prompt = "The quick brown fox jumps "

+ # Tokenize input
+ # Option 1: standard HF tokenizer interface
+ input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

+ # Option 2: Direct UTF-8 byte encoding with offset
+ # Note: Each byte is offset by 64 with <bos> prepended.
+ input_ids = torch.tensor([[1] + [b + 64 for b in prompt.encode("utf-8")]]).to("cuda")

  # byte-by-byte generation (default)
  generation_output = model.generate(
      input_ids=input_ids,
      max_new_tokens=32
  )
+ # alternatively, use faster multibyte generation
  generation_output = model.multi_byte_generate(
      input_ids=input_ids,
      max_new_tokens=32
  )

+ # Decode and print the output
  response = tokenizer.decode(
      generation_output[0][input_ids.shape[1]:],
      skip_special_tokens=False,
      clean_up_tokenization_spaces=False
  )
  print(response)
+ # Sample output:
+ # over the lazy dog.\n\nThe quick
  ```
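Because the vocabulary is essentially raw bytes shifted by 64, with low ids reserved for sentinels such as `<bos>` (as the encoding above implies), the output can also be decoded by hand. A minimal sketch, continuing from the snippet above and assuming every id below 64 is a special token:

```python
# Manual counterpart of tokenizer.decode(): undo the +64 byte offset.
# Assumption: ids below 64 are special/sentinel tokens and carry no text.
new_ids = generation_output[0][input_ids.shape[1]:].tolist()
raw_bytes = bytes(i - 64 for i in new_ids if i >= 64)
print(raw_bytes.decode("utf-8", errors="replace"))
```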

+ ### ⚙️ Generation Modes
+
+ EvaByte supports two generation interfaces:
+ - `model.generate()`: The default generation method, compatible with the Hugging Face `transformers` library. This approach generates one byte at a time and might be slow.
+ - `model.multi_byte_generate()`: A faster alternative that generates multiple bytes per step and usually yields the same result as `model.generate()` under greedy decoding, with the implementation adapted from [Medusa](https://github.com/FasterDecoding/Medusa). `model.multi_byte_generate()` supports a subset of the arguments in `model.generate()` (see the example after this list):
    - `input_ids`: the input byte ids.
    - `temperature`: the temperature for sampling.
    - `max_length`: the maximum length of the generated sequence.

    - `top_p`: the top-p parameter for sampling.
    - `do_sample`: greedy decoding or sampling.
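For example, the sampling-related arguments listed above can be combined in a single call. This continues from the usage snippet; the particular values are illustrative only.

```python
# Sampled multi-byte generation: sampling arguments follow the list above,
# max_new_tokens mirrors the usage example earlier in this README.
generation_output = model.multi_byte_generate(
    input_ids=input_ids,
    max_new_tokens=32,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
```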

+ **Notes and Limitations:**
+ - `device_map="auto"` is not supported for >2 GPUs.
+ - Only batch size of 1 (with `attention_mask=None`) is supported for decoding.
+ - `torch_dtype=torch.bfloat16` is required.
+ - The multibyte generation `model.multi_byte_generate()` might return extra bytes after the end-of-sequence sentinel, due to the nature of multibyte decoding. Manual truncation or cleaning may be needed (see the sketch below).
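One way to do that truncation, continuing from the usage snippet and assuming the tokenizer exposes the end-of-sequence id through the standard `eos_token_id` attribute:

```python
# Cut off anything emitted after the first end-of-sequence sentinel.
new_ids = generation_output[0][input_ids.shape[1]:].tolist()
eos_id = tokenizer.eos_token_id  # assumed to be set for this tokenizer
if eos_id is not None and eos_id in new_ids:
    new_ids = new_ids[:new_ids.index(eos_id)]
response = tokenizer.decode(
    new_ids,
    skip_special_tokens=False,
    clean_up_tokenization_spaces=False,
)
```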

  ## Bias, Risks, and Limitations
+ As a pretrained base model, **EvaByte-Phase1** has not been fine-tuned for chat or instruction following, so users should not expect reliable performance in conversational or instruction-based tasks. Like other base models, it does not incorporate any moderation mechanisms, making it possible to generate potentially harmful or inappropriate content.

  ## Evaluation

+ For detailed evaluation results, please refer to the [blog post](https://hkunlp.github.io/blog/2025/evabyte).

  ## Citation
+ ```bibtex
  @misc{evabyte,
  title = {EvaByte: Efficient Byte-level Language Models at Scale},
+ url = {https://hkunlp.github.io/blog/2025/evabyte},
+ author = {Lin Zheng and Xueliang Zhao and Guangtao Wang and Chen Wu and David Dong and Angela Wang and Mingran Wang and Yun Du and Haige Bo and Tony Zhang and Changran Hu and Urmish Thakker and Lingpeng Kong},
  year = {2025}
  }
  ```