Upload folder using huggingface_hub

- README.md +106 -0
- chat.py +819 -0
- chat_full.py +854 -0
- llama_FFN_PF_lut4_chunk_01of02.mlmodelc.zip +3 -0
- llama_FFN_PF_lut4_chunk_02of02.mlmodelc.zip +3 -0
- llama_embeddings.mlmodelc.zip +3 -0
- llama_lm_head_lut4.mlmodelc.zip +3 -0
- meta.yaml +20 -0
- tokenizer.json +0 -0
- tokenizer_config.json +2062 -0
README.md
ADDED
@@ -0,0 +1,106 @@
---
license: mit
tags:
- coreml
- ANE
- DeepSeek
- Apple
- Apple Neural Engine
---

# ANEMLL

**ANEMLL** (pronounced like "animal") is an open-source project focused on accelerating the porting of Large Language Models (LLMs) to tensor processors, starting with the Apple Neural Engine (ANE).

The goal is to provide a fully open-source pipeline from model conversion to inference for common LLM architectures running on ANE.

This enables seamless integration and on-device inference for low-power applications on edge devices, ensuring maximum privacy and security.

This is critical for autonomous applications, where models run directly on the device without requiring an internet connection.

---

## License

ANEMLL is licensed under the [MIT License](https://opensource.org/license/mit).
The model is based on Meta's LLaMA 3.2 and may require a separate license.

This test model is exclusively Meta's LLaMA 3.2 1B (1024 context) model converted for CoreML. It was released before the official launch of the ANEMLL repository, with minimal documentation, and is intended only for early adopters who requested an early release.

---

## Requirements

- **macOS Sequoia** with Apple Neural Engine and 16GB RAM
- **CoreML Tools** and **HuggingFace Transformers** libraries
- **Python 3.9**

`chat.py` provides a sample inference script.
`chat_full.py` provides a sample inference script with history and conversation management.

**Installation**

1. Download the model from Hugging Face (for a Git-free alternative using `huggingface_hub`, see the sketch after these steps):
```bash
# Install required tools
pip install huggingface_hub

# Install Git LFS (Large File Support)
# macOS with Homebrew:
brew install git-lfs
# Or Ubuntu/Debian:
# sudo apt-get install git-lfs

# Initialize Git LFS
git lfs install

# Clone the repository with model files
git clone https://huggingface.co/anemll/anemll-Meta-Llama-3.2-3B-ctx512_0.1.1
```

2. Extract model files:
```bash
# Navigate to the cloned directory
cd anemll-Meta-Llama-3.2-3B-ctx512_0.1.1

# Pull LFS files (model weights)
git lfs pull

# Extract CoreML model files
find . -type f -name "*.zip" -exec unzip {} \;
```

3. Install dependencies:
```bash
pip install coremltools transformers
```
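Alternatively, since `huggingface_hub` is already installed in step 1, the same files can be fetched without Git LFS. A minimal sketch (the repo id is the one cloned above; the downloaded `.zip` files still need to be extracted as in step 2):

```python
# Sketch: download this repo with huggingface_hub instead of git clone + git lfs pull.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="anemll/anemll-Meta-Llama-3.2-3B-ctx512_0.1.1",
    local_dir="anemll-Meta-Llama-3.2-3B-ctx512_0.1.1",
)
print(f"Model files downloaded to: {local_dir}")
```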
**Coremltools:**

See the coremltools installation guide at https://coremltools.readme.io/v4.0/docs/installation

**How to Run**

1. Basic chat interface:
```bash
python chat.py --meta ./meta.yaml
```

2. Full conversation mode with history:
```bash
python chat_full.py --meta ./meta.yaml
```

> Note: The first time the model loads, macOS will take some time to place it on the device.
> Subsequent loads will be instantaneous.
> Use Ctrl-D to exit, Ctrl-C to interrupt inference.
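Both scripts read their configuration from `meta.yaml` (see `parse_args` in `chat.py`, which consumes `model_info.parameters`). A minimal sketch of the expected layout, with illustrative values inferred from the script and the file names in this repo; the shipped `meta.yaml` is authoritative:

```yaml
# Hypothetical sketch of meta.yaml; keys mirror what chat.py's parse_args reads.
model_info:
  parameters:
    model_prefix: llama    # prefix used to build model file names
    context_length: 512    # context window / KV state length
    batch_size: 64         # prefill batch size
    num_chunks: 2          # FFN_PF chunk count (chunk_01of02, chunk_02of02)
    lut_ffn: 4             # LUT bits for FFN ('none' disables the suffix)
    lut_lmhead: 4          # LUT bits for LM head
```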
**More Info**

Please check the following links for updates:

* [GitHub](https://github.com/anemll)
* [Hugging Face Models](https://huggingface.co/anemll)
* [Twitter/X](https://x.com/anemll)
* [Website](https://anemll.com)
chat.py
ADDED
@@ -0,0 +1,819 @@
#!/usr/bin/env python3
# chat.py
# Copyright (c) 2025 Anemll
# Licensed under the MIT License

import argparse
import os
import re
import glob
from pathlib import Path
import coremltools as ct
from transformers import LlamaTokenizer, AutoTokenizer
import torch
import torch.nn.functional as F
import numpy as np
import queue
import threading
import time
import yaml
import sys

# ANSI color codes
LIGHT_BLUE = "\033[94m"
DARK_BLUE = "\033[34m"
LIGHT_GREEN = "\033[92m"
RESET_COLOR = "\033[0m"

WARMUP_TOKEN_LIMIT = 10  # Maximum tokens to generate during warmup

class TokenPrinter:
    """Handles background printing of generated tokens."""
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
        self.token_queue = queue.Queue()
        self.stop_event = threading.Event()
        self.thread = None
        self.buffer = ""
        self.lock = threading.Lock()
        self.thinking = True  # Track if we're still in thinking mode
        self.decoding_buffer = []  # Buffer for token IDs
        # Token counting and timing
        self.start_time = time.time()
        self.token_count = 0
        self.start()

    def start(self):
        """Start the printer thread."""
        if self.thread is None:
            self.thread = threading.Thread(target=self._print_worker)
            self.thread.daemon = True
            self.thread.start()

    def add_token(self, token_id):
        """Add a token to the print queue."""
        if not self.stop_event.is_set():
            self.token_queue.put(token_id)
            self.token_count += 1

    def drain_buffer(self):
        """Decode token IDs from decoding_buffer in the main thread."""
        if not self.decoding_buffer:
            return

        # Decode all tokens at once in the main thread
        token_str = self.tokenizer.decode(self.decoding_buffer)
        self.decoding_buffer.clear()

        # Color-handling logic
        if self.thinking and "</think>" in token_str:
            self.thinking = False
            parts = token_str.split("</think>")
            if len(parts) > 0:
                print(parts[0] + "</think>", end='', flush=True)
            if len(parts) > 1:
                print(LIGHT_BLUE + parts[1], end='', flush=True)
        else:
            if not self.thinking:
                print(LIGHT_BLUE + token_str, end='', flush=True)
            else:
                print(token_str, end='', flush=True)

    def _print_worker(self):
        """Worker thread that takes token_ids from the queue."""
        while not self.stop_event.is_set():
            try:
                token_id = self.token_queue.get(timeout=0.01)
                with self.lock:
                    self.decoding_buffer.append(token_id)
                self.token_queue.task_done()
            except queue.Empty:
                continue
            except Exception as e:
                print(f"\nError: Token printer error: {str(e)}")
                break

    def stop(self):
        """Stop the printer thread."""
        if self.thread and self.thread.is_alive():
            self.stop_event.set()
            try:
                self.thread.join(timeout=1.0)
            except Exception:
                pass
            # Calculate and print tokens/s with shorter format in blue
            elapsed = time.time() - self.start_time
            if elapsed > 0 and self.token_count > 0:
                tokens_per_sec = self.token_count / elapsed
                print(f"\n{DARK_BLUE}{tokens_per_sec:.1f} t/s{RESET_COLOR}")
            else:
                print(RESET_COLOR)  # Reset color at the end
        return self.buffer
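# Usage sketch (illustrative, mirroring chat_loop below): the generation loop
# queues token ids with add_token(), periodically calls drain_buffer() on the
# main thread to decode and print them, and finally calls stop() to join the
# worker thread and print the tokens/s summary.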
def parse_model_path(path):
    """Parse model path and return full path with .mlmodelc or .mlpackage extension."""
    path = Path(path)

    # If path exists exactly as specified, return it
    if path.exists():
        return str(path)

    # Try with both extensions
    candidates = [
        path,                            # Original path
        path.with_suffix('.mlmodelc'),   # With .mlmodelc
        path.with_suffix('.mlpackage'),  # With .mlpackage
        Path(str(path) + '.mlmodelc'),   # Handle case where extension is included
        Path(str(path) + '.mlpackage')
    ]

    # Try all possible paths
    for candidate in candidates:
        if candidate.exists():
            print(f"Found model at: {candidate}")
            return str(candidate)

    # If we get here, no valid path was found
    print("\nError: Model not found. Tried the following paths:")
    for candidate in candidates:
        print(f"  {candidate}")
    raise FileNotFoundError(f"Model not found: {path}")

def parse_ffn_filename(path):
    """Parse FFN model filename to extract chunk information."""
    path = Path(path)
    pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
    match = re.search(pattern, path.name)

    if match:
        current_chunk = int(match.group(1))
        total_chunks = int(match.group(2))
        return current_chunk, total_chunks
    return None, None

def find_all_chunks(base_path):
    """Find all chunk files matching the base FFN path pattern."""
    path = Path(base_path)
    pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
    return sorted(glob.glob(pattern))

def load_model(path, function_name=None):
    """Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
    path = Path(path)
    compute_unit = ct.ComputeUnit.CPU_AND_NE

    try:
        if path.suffix == '.mlmodelc':
            # For compiled models (.mlmodelc), use CompiledMLModel
            if function_name:
                return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
            else:
                return ct.models.CompiledMLModel(str(path), compute_unit)
        else:
            # For packages (.mlpackage)
            if function_name:
                return ct.models.MLModel(str(path), function_name=function_name)
            else:
                return ct.models.MLModel(str(path))

    except RuntimeError as e:
        if "valid manifest does not exist" in str(e):
            print(f"\nError: Could not load compiled model at {path}")
            print("This might be because:")
            print("1. The model is not properly compiled")
            print("2. The model was compiled for a different OS version")
            print("3. The model needs to be recompiled")
            print("\nTry using the .mlpackage version instead, or recompile the model.")
        raise

def load_metadata(model, args):
    """Extract metadata and config parameters from a loaded model."""
    metadata = {}
    if hasattr(model, 'user_defined_metadata'):
        meta = model.user_defined_metadata

        # Extract key parameters with defaults
        metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
        metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length']))
        metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
        metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
        metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))

        print("\nExtracted Parameters:")
        print(f"  Context Length: {metadata['context_length']}")
        print(f"  State Length: {metadata['state_length']}")
        print(f"  Prefill Batch Size: {metadata['batch_size']}")
        print(f"  LUT Bits: {metadata['lut_bits']}")
        print(f"  Number of Chunks: {metadata['num_chunks']}")

        # Print model info
        print("\nModel Info:")
        if 'com.anemll.info' in meta:
            print(f"  {meta['com.anemll.info']}")
        if 'com.github.apple.coremltools.version' in meta:
            print(f"  CoreML Tools: {meta['com.github.apple.coremltools.version']}")

        # Print model input/output shapes
        print("\nModel Shapes:")
        if hasattr(model, 'input_description'):
            print("  Inputs:")
            for name, desc in model.input_description.items():
                print(f"    {name}: {desc}")
        if hasattr(model, 'output_description'):
            print("  Outputs:")
            for name, desc in model.output_description.items():
                print(f"    {name}: {desc}")
    else:
        print("\nWarning: No metadata found in model")

        # Check if the model directory name contains a context length pattern (ctxNNN)
        ctx_len = 512
        if args.context_length is None:
            ctx_match = re.search(r'ctx(\d+)', str(args.d))
            if ctx_match:
                ctx_len0 = int(ctx_match.group(1))
                if 512 <= ctx_len0 <= 8096:
                    ctx_len = ctx_len0
                    print(f"\nDetected context length {ctx_len} from directory name")
            else:
                print(f"\nWarning: no context length found in directory name {args.d}; using default {ctx_len}")
        else:
            ctx_len = args.context_length

        # Use defaults
        metadata['context_length'] = ctx_len
        metadata['state_length'] = ctx_len
        metadata['batch_size'] = 64
        metadata['lut_bits'] = 4
        metadata['num_chunks'] = 4
        print("\nUsing default parameters:")
        print(f"  Context Length: {metadata['context_length']}")
        print(f"  State Length: {metadata['state_length']}")
        print(f"  Prefill Batch Size: {metadata['batch_size']}")
        print(f"  LUT Bits: {metadata['lut_bits']}")
        print(f"  Number of Chunks: {metadata['num_chunks']}")
    return metadata

def load_models(args, metadata):
    """Load all required models and extract metadata."""
    print("\nLoading models...")

    try:
        # Load embeddings model
        print("\nLoading embeddings model...")
        embed_path = parse_model_path(args.embed)
        print(f"Loading from: {embed_path}")
        embed_model = load_model(embed_path)
        print("Embeddings model loaded successfully")
        metadata = load_metadata(embed_model, args)

        # Load LM head model
        print("\nLoading LM head model...")
        lmhead_path = parse_model_path(args.lmhead)
        print(f"Loading from: {lmhead_path}")
        lmhead_model = load_model(lmhead_path)
        print("LM head model loaded successfully")

        # Parse FFN path and find chunks if needed
        print("\nLoading FFN+PREFILL model(s)...")
        ffn_path = parse_model_path(args.ffn)
        chunk_no, total_chunks = parse_ffn_filename(ffn_path)

        ffn_models = []
        if chunk_no and total_chunks:
            print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
            # Find and load all chunks
            chunk_paths = find_all_chunks(ffn_path)
            if len(chunk_paths) != total_chunks:
                raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")

            for chunk_path in chunk_paths:
                print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
                try:
                    # For chunked models, we need both infer and prefill functions
                    ffn_models.append({
                        'infer': load_model(chunk_path, function_name='infer'),
                        'prefill': load_model(chunk_path, function_name='prefill')
                    })
                    print("Chunk loaded successfully")
                except Exception as e:
                    print(f"Error loading chunk {chunk_path}: {str(e)}")
                    raise
            metadata = load_metadata(ffn_models[0], args)

        else:
            print("\nLoading single FFN model...")
            ffn_models.append(load_model(ffn_path))
            print("FFN model loaded successfully")

        return embed_model, ffn_models, lmhead_model, metadata

    except Exception as e:
        print(f"\nError loading models: {str(e)}")
        print("\nPlease ensure all model files exist and are accessible.")
        print("Expected files:")
        print(f"  Embeddings: {args.embed}")
        print(f"  LM Head: {args.lmhead}")
        print(f"  FFN: {args.ffn}")
        raise

def initialize_tokenizer(model_path=None):
    """Initialize and configure the tokenizer."""
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            str(model_path),
            use_fast=False,
            trust_remote_code=True
        )

        print("\nTokenizer Configuration:")
        print(f"Tokenizer type: {type(tokenizer)}")
        print(f"Tokenizer name: {tokenizer.__class__.__name__}")
        print(f"Vocabulary size: {len(tokenizer)}")
        print(f"Model max length: {tokenizer.model_max_length}")

        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
            tokenizer.pad_token_id = tokenizer.eos_token_id
            print("Set PAD token to EOS token")

        tokenizer.padding_side = "left"

        print("\nSpecial Tokens:")
        print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
        print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
        print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
        print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")

        return tokenizer

    except Exception as e:
        print(f"\nError: Failed to load tokenizer from {model_path}")
        print(f"Error details: {str(e)}")
        print(f"Error type: {type(e)}")
        print("\nThis code requires a Llama 3.2 model for chat template functionality.")
        print("Please provide the path to a Llama 3.2 model directory.")
        import traceback
        traceback.print_exc()
        raise

def make_causal_mask(length, start):
    """Create causal attention mask."""
    mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
    row_indices = np.arange(length).reshape(length, 1)
    col_indices = np.arange(length).reshape(1, length)
    mask[:, :, col_indices <= (row_indices + start)] = 0
    return mask
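# Illustrative note: for length=4, start=0 the mask is 0 where column <= row
# and -inf elsewhere, so position i attends only to positions 0..i. The
# prefill and decode paths below slice this full mask per batch/step.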
def run_prefill(embed_model, ffn_models, input_ids, context_pos, context_length, batch_size=64, state=None):
    """Run prefill on the input sequence."""
    # Create causal mask
    causal_mask = make_causal_mask(context_length, 0)
    causal_mask = torch.tensor(causal_mask, dtype=torch.float16)

    # Process in batches
    batch_pos = 0
    while batch_pos < context_pos:
        batch_end = min(batch_pos + batch_size, context_pos)
        current_batch_size = batch_end - batch_pos

        # Get current batch
        batch_input = input_ids[:, batch_pos:batch_end]

        # Always pad to full batch size for prefill
        batch_input = F.pad(
            batch_input,
            (0, batch_size - current_batch_size),
            value=0
        )

        # Generate position IDs for the full batch size
        position_ids = torch.arange(batch_size, dtype=torch.int32)  # Always use full batch size
        batch_causal_mask = causal_mask[:, :, :batch_size, :]       # Use full batch size

        # Run embeddings
        hidden_states = torch.from_numpy(
            embed_model.predict({'input_ids': batch_input.numpy()})['hidden_states']
        )

        # Run through FFN chunks with state
        for ffn_model in ffn_models:
            if isinstance(ffn_model, dict):
                inputs = {
                    'hidden_states': hidden_states.numpy(),               # [1, 64, hidden_size]
                    'position_ids': position_ids.numpy(),                 # [64]
                    'causal_mask': batch_causal_mask.numpy(),             # [1, 1, 64, context_length]
                    'current_pos': np.array([batch_pos], dtype=np.int32)  # [1]
                }
                output = ffn_model['prefill'].predict(inputs, state)
                hidden_states = torch.from_numpy(output['output_hidden_states'])

        batch_pos = batch_end

    return torch.tensor([context_pos], dtype=torch.int32)
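# Note: prefill always hands CoreML fixed shapes -- the final partial batch is
# zero-padded up to batch_size, and 'current_pos' tells the model where this
# batch starts so the shared KV state is written at the right offset.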
def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state=None, temperature=0.0):
    """Generate the next token."""
    # Get current token
    current_token = input_ids[:, pos-1:pos]  # [1, 1]

    # Run embeddings
    hidden_states = torch.from_numpy(
        embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
    )  # [1, 1, hidden_size]

    # Create masks
    update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
    update_mask[0, 0, pos-1, 0] = 1.0
    position_ids = torch.tensor([pos-1], dtype=torch.int32)  # [1]
    causal_mask = make_causal_mask(context_length, 0)
    causal_mask = torch.tensor(causal_mask[:, :, pos-1:pos, :], dtype=torch.float16)  # [1, 1, 1, context_length]

    # Run through FFN chunks with state
    for ffn_model in ffn_models:
        if isinstance(ffn_model, dict):
            inputs = {
                'hidden_states': hidden_states.numpy(),
                'update_mask': update_mask.numpy(),
                'position_ids': position_ids.numpy(),
                'causal_mask': causal_mask.numpy(),
                'current_pos': position_ids.numpy()
            }
            output = ffn_model['infer'].predict(inputs, state)
            hidden_states = torch.from_numpy(output['output_hidden_states'])

    # Run LM head
    lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})
    # Debug print
    #print("\nLM Head output keys:", list(lm_output.keys()))

    # Combine logits1-8 if they exist
    if 'logits1' in lm_output:
        # Concatenate all logits parts
        logits_parts = []
        for i in range(1, 9):
            key = f'logits{i}'
            if key in lm_output:
                logits_parts.append(torch.from_numpy(lm_output[key]))
        logits = torch.cat(logits_parts, dim=-1)  # Concatenate along vocab dimension
    else:
        # Try output_logits as fallback
        logits = torch.from_numpy(lm_output['output_logits'])

    # Apply temperature and sample
    if temperature > 0:
        logits = logits / temperature
        probs = F.softmax(logits[0, -1, :], dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).item()
    else:
        next_token = torch.argmax(logits[0, -1, :]).item()

    return next_token
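# Note (one assumption flagged): the LM head exposes its logits either as a
# single 'output_logits' tensor or split across 'logits1'..'logits8', which are
# concatenated along the vocab axis above -- presumably split to keep each ANE
# output tensor within size limits.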
def create_unified_state(ffn_models, context_length):
    """Create unified KV cache state for the transformer."""
    if isinstance(ffn_models[0], dict):
        # Use the first FFN model's prefill function to create the state
        state = ffn_models[0]['prefill'].make_state()
        print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
        return state
    else:
        state = ffn_models[0].make_state()
        print("\nCreated unified transformer state")
        return state
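# Note: the state returned by make_state() holds the KV cache; the same object
# is passed to every chunk's predict() call (both 'prefill' and 'infer'), so
# all chunks read and write one shared cache.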
def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, auto_prompt=None, warmup=False):
    """Interactive chat loop."""
    context_length = metadata.get('context_length')
    batch_size = metadata.get('batch_size', 64)

    if not warmup:
        print(f"\nUsing context length: {context_length}")
        print("\nStarting chat session. Press Ctrl+D to exit.")
        print("Type your message and press Enter to chat.")

    # Check if the tokenizer has a chat template and if it works
    has_chat_template = False
    try:
        # Test if the chat template works
        test_messages = [{"role": "user", "content": "test"}]
        tokenizer.apply_chat_template(test_messages, return_tensors="pt")
        has_chat_template = True
        if not warmup:
            print("\nUsing chat template for prompts")
    except Exception:
        if not warmup:
            print("\nUsing manual formatting for prompts")

    conversation = []

    try:
        while True:
            try:
                if not warmup:
                    print(f"\n{LIGHT_GREEN}You:{RESET_COLOR}", end=' ', flush=True)
                if auto_prompt is not None:
                    user_input = auto_prompt
                    if not warmup:
                        print(user_input)
                else:
                    user_input = input().strip()
            except EOFError:
                if not warmup:
                    print("\nExiting chat...")
                break

            if not user_input:
                continue

            # Format prompt based on tokenizer capabilities
            if has_chat_template:
                messages = [{"role": "user", "content": user_input}]
                input_ids = tokenizer.apply_chat_template(
                    messages,
                    return_tensors="pt",
                    add_generation_prompt=True
                ).to(torch.int32)
            else:
                # Manual formatting for Llama models without a chat template
                formatted_prompt = f"[INST] {user_input} [/INST]"
                input_ids = tokenizer(
                    formatted_prompt,
                    return_tensors="pt",
                    add_special_tokens=True
                ).input_ids.to(torch.int32)

            context_pos = input_ids.size(1)

            if not warmup:
                print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)

            # Initialize token printer
            token_printer = TokenPrinter(tokenizer)
            tokens_generated = 0  # Track number of tokens

            try:
                # Start prefill timing
                prefill_start = time.time()

                # Run prefill with state
                current_pos = run_prefill(
                    embed_model,
                    ffn_models,
                    input_ids,
                    context_pos,
                    context_length,
                    batch_size,
                    state
                )

                # Calculate prefill timing
                prefill_time = time.time() - prefill_start
                prefill_tokens = context_pos  # Number of tokens in input
                prefill_tokens_per_sec = prefill_tokens / prefill_time if prefill_time > 0 else 0

                # Generation loop with state
                pos = context_pos
                inference_start = time.time()
                inference_tokens = 0

                while pos < context_length - 1:
                    # Generate next token
                    next_token = generate_next_token(
                        embed_model,
                        ffn_models,
                        lmhead_model,
                        input_ids,
                        pos,
                        context_length,
                        state
                    )

                    # Add token to sequence
                    if pos < input_ids.size(1):
                        input_ids[0, pos] = next_token
                    else:
                        input_ids = torch.cat([
                            input_ids,
                            torch.tensor([[next_token]], dtype=torch.int32)
                        ], dim=1)

                    # Add to printer only if not in warmup
                    if not warmup:
                        token_printer.add_token(next_token)
                        token_printer.drain_buffer()

                    pos += 1
                    tokens_generated += 1
                    inference_tokens += 1

                    # Check limits
                    if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
                        break

                    if next_token == tokenizer.eos_token_id:
                        break

                # Calculate inference timing
                inference_time = time.time() - inference_start
                inference_tokens_per_sec = inference_tokens / inference_time if inference_time > 0 else 0

                # Get final response and add to conversation
                if not warmup:
                    response = token_printer.stop()
                    # Print timing stats
                    prefill_ms = prefill_time * 1000  # Convert to milliseconds
                    print(f"\nPrefill: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s)")
                    print(f"Inference: {inference_tokens_per_sec:.1f} t/s")
                    print(f"Total: Generated {tokens_generated} tokens in {prefill_time + inference_time:.2f}s")
                    conversation.append({"role": "assistant", "content": response})
                else:
                    token_printer.stop()  # Clean up without printing stats

                # Exit after one response in auto_prompt mode
                if auto_prompt is not None:
                    break

            except KeyboardInterrupt:
                print("\nGeneration interrupted")
                token_printer.stop()
                continue

    except Exception as e:
        print(f"\nError in chat loop: {str(e)}")
        import traceback
        traceback.print_exc()

def parse_args():
    parser = argparse.ArgumentParser(description='Chat with CoreML LLaMA (c) 2025 Anemll')

    # Add meta.yaml option
    parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')

    # Model paths
    parser.add_argument('--d', '--dir', type=str, default='.',
                        help='Directory containing model files (default: current directory)')
    parser.add_argument('--embed', type=str, required=False,
                        help='Path to embeddings model (relative to --dir)')
    parser.add_argument('--ffn', type=str, required=False,
                        help='Path to FFN model (can be chunked, relative to --dir)')
    parser.add_argument('--lmhead', type=str, required=False,
                        help='Path to LM head model (relative to --dir)')
    parser.add_argument('--tokenizer', type=str, required=False,
                        help='Path to tokenizer')

    # Add new argument for auto-generation
    parser.add_argument('--prompt', type=str,
                        help='If specified, run once with this prompt and exit')

    # Model configuration
    parser.add_argument('--context-length', type=int,
                        help='Context length for the model (default: 512); if not provided, it is detected from a ctxNUMBER pattern in the model directory name')

    args = parser.parse_args()

    # If meta.yaml is provided, load parameters from it
    if args.meta:
        try:
            with open(args.meta, 'r') as f:
                meta = yaml.safe_load(f)
            params = meta['model_info']['parameters']

            # Set model directory to the meta.yaml directory if not specified
            if not args.d or args.d == '.':
                args.d = str(Path(args.meta).parent)

            # Build model paths based on parameters
            prefix = params.get('model_prefix', 'llama')  # Default to 'llama' if not specified
            lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
            lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
            num_chunks = int(params['num_chunks'])

            # Set model paths if not specified
            if not args.embed:
                args.embed = f'{prefix}_embeddings'
            if not args.lmhead:
                args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
            if not args.ffn:
                args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
            if not args.tokenizer:
                args.tokenizer = args.d

            # Set other parameters
            args.context_length = int(params['context_length'])
            args.batch_size = int(params['batch_size'])
            args.num_chunks = num_chunks

            print(f"\nLoaded parameters from {args.meta}:")
            print(f"  Context Length: {args.context_length}")
            print(f"  Batch Size: {args.batch_size}")
            print(f"  Num Chunks: {args.num_chunks}")
            print(f"  Models Directory: {args.d}")
            print(f"  Embeddings: {args.embed}")
            print(f"  LM Head: {args.lmhead}")
            print(f"  FFN: {args.ffn}")

        except Exception as e:
            print(f"\nError loading meta.yaml: {str(e)}")
            sys.exit(1)

    return args

def main():
    args = parse_args()

    # Convert directory to absolute path
    model_dir = Path(args.d).resolve()
    if not model_dir.exists():
        print(f"\nError: Model directory not found: {model_dir}")
        return 1

    print(f"\nUsing model directory: {model_dir}")
    print(f"Context length: {args.context_length}")

    try:
        # Update paths to be relative to the model directory
        args.embed = str(model_dir / args.embed)
        args.ffn = str(model_dir / args.ffn)
        args.lmhead = str(model_dir / args.lmhead)

        # Handle tokenizer path separately since it's not relative to model_dir
        if args.tokenizer is None:
            args.tokenizer = str(model_dir)

        if not Path(args.tokenizer).exists():
            print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
            return 1

        args.tokenizer = str(Path(args.tokenizer).resolve())  # Convert to absolute path
        print(f"Using tokenizer path: {args.tokenizer}")

        metadata = {}
        # Load models and extract metadata
        embed_model, ffn_models, lmhead_model, metadata = load_models(args, metadata)

        print(f"\nMetadata before args.context_length override: {metadata}")

        # Override context length from the command line if provided
        if args.context_length is not None:
            metadata['context_length'] = args.context_length
            metadata['state_length'] = args.context_length  # Also update state_length
            print(f"\nOverriding context length from command line: {args.context_length}")

        print(f"\nMetadata after load_models: {metadata}")

        # Load tokenizer with resolved path
        tokenizer = initialize_tokenizer(args.tokenizer)
        if tokenizer is None:
            raise RuntimeError("Failed to initialize tokenizer")

        # Create unified state once
        state = create_unified_state(ffn_models, metadata['context_length'])

        # Warmup runs to prevent Python GIL issues with CoreML!
        for i in range(2):
            chat_loop(
                embed_model=embed_model,
                ffn_models=ffn_models,
                lmhead_model=lmhead_model,
                tokenizer=tokenizer,
                metadata=metadata,
                state=state,
                warmup=True,
                auto_prompt="who are you?"
            )

        # Main run
        chat_loop(
            embed_model=embed_model,
            ffn_models=ffn_models,
            lmhead_model=lmhead_model,
            tokenizer=tokenizer,
            metadata=metadata,
            state=state,
            warmup=False,
            auto_prompt=args.prompt
        )

    except Exception as e:
        print(f"\nError: {str(e)}")
        import traceback
        traceback.print_exc()
        return 1

    return 0

if __name__ == "__main__":
    exit(main())
chat_full.py
ADDED
@@ -0,0 +1,854 @@
#!/usr/bin/env python3
# chat_full.py
# Copyright (c) 2025 Anemll
# Licensed under the MIT License

import argparse
import os
import re
import glob
from pathlib import Path
import coremltools as ct
from transformers import LlamaTokenizer, AutoTokenizer
import torch
import torch.nn.functional as F
import numpy as np
import queue
import threading
import time
import yaml
import sys

# ANSI color codes
LIGHT_BLUE = "\033[94m"
DARK_BLUE = "\033[34m"
LIGHT_GREEN = "\033[92m"
RESET_COLOR = "\033[0m"

WARMUP_TOKEN_LIMIT = 10  # Maximum tokens to generate during warmup

class TokenPrinter:
    """Handles background printing of generated tokens."""
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
        self.token_queue = queue.Queue()
        self.stop_event = threading.Event()
        self.thread = None
        self.buffer = ""
        self.lock = threading.Lock()
        self.thinking = True  # Track if we're still in thinking mode
        self.decoding_buffer = []  # Buffer for token IDs
        # Timing and stats tracking
        self.start_time = time.time()
        self.token_count = 0
        self.prefill_time = 0
        self.inference_time = 0
        self.context_pos = 0
        self.start()

    def start(self):
        """Start the printer thread."""
        if self.thread is None:
            self.thread = threading.Thread(target=self._print_worker)
            self.thread.daemon = True
            self.thread.start()

    def add_token(self, token_id):
        """Add a token to the print queue."""
        if not self.stop_event.is_set():
            self.token_queue.put(token_id)
            self.token_count += 1

    def drain_buffer(self):
        """Decode token IDs from decoding_buffer in the main thread."""
        if not self.decoding_buffer:
            return

        # Decode all tokens at once in the main thread
        token_str = self.tokenizer.decode(self.decoding_buffer)
        self.decoding_buffer.clear()

        # Color-handling logic
        if self.thinking and "</think>" in token_str:
            self.thinking = False
            parts = token_str.split("</think>")
            if len(parts) > 0:
                print(parts[0] + "</think>", end='', flush=True)
            if len(parts) > 1:
                print(LIGHT_BLUE + parts[1], end='', flush=True)
        else:
            if not self.thinking:
                print(LIGHT_BLUE + token_str, end='', flush=True)
            else:
                print(token_str, end='', flush=True)

    def _print_worker(self):
        """Worker thread that takes token_ids from the queue."""
        while not self.stop_event.is_set():
            try:
                token_id = self.token_queue.get(timeout=0.01)
                with self.lock:
                    self.decoding_buffer.append(token_id)
                self.token_queue.task_done()
            except queue.Empty:
                continue
            except Exception as e:
                print(f"\nError: Token printer error: {str(e)}")
                break

    def stop(self):
        """Stop the printer thread."""
        if self.thread and self.thread.is_alive():
            self.stop_event.set()
            try:
                self.thread.join(timeout=1.0)
            except Exception:
                pass
            print(RESET_COLOR)  # Reset color at the end
        return self.buffer

    def set_timing(self, prefill_time, inference_time, context_pos):
        """Set timing information."""
        self.prefill_time = prefill_time
        self.inference_time = inference_time
        self.context_pos = context_pos
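# Note: unlike chat.py's TokenPrinter, stop() here does not print a t/s
# summary; the caller records prefill/inference timing via set_timing() instead.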
117 |
+
|
118 |
+
def parse_model_path(path):
|
119 |
+
"""Parse model path and return full path with .mlmodelc or .mlpackage extension."""
|
120 |
+
path = Path(path)
|
121 |
+
|
122 |
+
# If path exists exactly as specified, return it
|
123 |
+
if path.exists():
|
124 |
+
return str(path)
|
125 |
+
|
126 |
+
# Try with both extensions
|
127 |
+
candidates = [
|
128 |
+
path, # Original path
|
129 |
+
path.with_suffix('.mlmodelc'), # With .mlmodelc
|
130 |
+
path.with_suffix('.mlpackage'), # With .mlpackage
|
131 |
+
Path(str(path) + '.mlmodelc'), # Handle case where extension is included
|
132 |
+
Path(str(path) + '.mlpackage')
|
133 |
+
]
|
134 |
+
|
135 |
+
# Try all possible paths
|
136 |
+
for candidate in candidates:
|
137 |
+
if candidate.exists():
|
138 |
+
print(f"Found model at: {candidate}")
|
139 |
+
return str(candidate)
|
140 |
+
|
141 |
+
# If we get here, no valid path was found
|
142 |
+
print("\nError: Model not found. Tried following paths:")
|
143 |
+
for candidate in candidates:
|
144 |
+
print(f" {candidate}")
|
145 |
+
raise FileNotFoundError(f"Model not found: {path}")
|
146 |
+
|
147 |
+
def parse_ffn_filename(path):
|
148 |
+
"""Parse FFN model filename to extract chunk information."""
|
149 |
+
path = Path(path)
|
150 |
+
pattern = r'FFN_PF.*_chunk_(\d+)of(\d+)'
|
151 |
+
match = re.search(pattern, path.name)
|
152 |
+
|
153 |
+
if match:
|
154 |
+
current_chunk = int(match.group(1))
|
155 |
+
total_chunks = int(match.group(2))
|
156 |
+
return current_chunk, total_chunks
|
157 |
+
return None, None
|
158 |
+
|
159 |
+
def find_all_chunks(base_path):
|
160 |
+
"""Find all chunk files matching the base FFN path pattern."""
|
161 |
+
path = Path(base_path)
|
162 |
+
pattern = re.sub(r'_chunk_\d+of\d+', '_chunk_*', str(path))
|
163 |
+
return sorted(glob.glob(pattern))
|
164 |
+
|
165 |
+
def load_model(path, function_name=None):
|
166 |
+
"""Load a CoreML model, handling both .mlmodelc and .mlpackage formats."""
|
167 |
+
path = Path(path)
|
168 |
+
compute_unit = ct.ComputeUnit.CPU_AND_NE
|
169 |
+
|
170 |
+
try:
|
171 |
+
if path.suffix == '.mlmodelc':
|
172 |
+
# For compiled models (.mlmodelc), use CompiledMLModel
|
173 |
+
if function_name:
|
174 |
+
return ct.models.CompiledMLModel(str(path), compute_unit, function_name=function_name)
|
175 |
+
else:
|
176 |
+
return ct.models.CompiledMLModel(str(path), compute_unit)
|
177 |
+
else:
|
178 |
+
# For packages (.mlpackage)
|
179 |
+
if function_name:
|
180 |
+
return ct.models.MLModel(str(path), function_name=function_name)
|
181 |
+
else:
|
182 |
+
return ct.models.MLModel(str(path))
|
183 |
+
|
184 |
+
except RuntimeError as e:
|
185 |
+
if "valid manifest does not exist" in str(e):
|
186 |
+
print(f"\nError: Could not load compiled model at {path}")
|
187 |
+
print("This might be because:")
|
188 |
+
print("1. The model is not properly compiled")
|
189 |
+
print("2. The model was compiled for a different OS version")
|
190 |
+
print("3. The model needs to be recompiled")
|
191 |
+
print("\nTry using the .mlpackage version instead, or recompile the model.")
|
192 |
+
raise
|
193 |
+
|
194 |
+
def load_metadata(model,args):
|
195 |
+
# Extract metadata and config parameters
|
196 |
+
metadata = {}
|
197 |
+
if hasattr(model, 'user_defined_metadata'):
|
198 |
+
meta = model.user_defined_metadata
|
199 |
+
|
200 |
+
# Extract key parameters with defaults
|
201 |
+
metadata['context_length'] = int(meta.get('com.anemll.context_length', 512))
|
202 |
+
metadata['state_length'] = int(meta.get('com.anemll.state_length', metadata['context_length'])) # Added state_length
|
203 |
+
metadata['batch_size'] = int(meta.get('com.anemll.batch_size', 64))
|
204 |
+
metadata['lut_bits'] = int(meta.get('com.anemll.lut_bits', 0))
|
205 |
+
metadata['num_chunks'] = int(meta.get('com.anemll.num_chunks', 1))
|
206 |
+
|
207 |
+
print("\nExtracted Parameters:")
|
208 |
+
print(f" Context Length: {metadata['context_length']}")
|
209 |
+
print(f" State Length: {metadata['state_length']}")
|
210 |
+
print(f" Prefill Batch Size: {metadata['batch_size']}")
|
211 |
+
print(f" LUT Bits: {metadata['lut_bits']}")
|
212 |
+
print(f" Number of Chunks: {metadata['num_chunks']}")
|
213 |
+
|
214 |
+
# Print model info
|
215 |
+
print("\nModel Info:")
|
216 |
+
if 'com.anemll.info' in meta:
|
217 |
+
print(f" {meta['com.anemll.info']}")
|
218 |
+
if 'com.github.apple.coremltools.version' in meta:
|
219 |
+
print(f" CoreML Tools: {meta['com.github.apple.coremltools.version']}")
|
220 |
+
|
221 |
+
# Print model input/output shapes
|
222 |
+
print("\nModel Shapes:")
|
223 |
+
if hasattr(model, 'input_description'):
|
224 |
+
print(" Inputs:")
|
225 |
+
for name, desc in model.input_description.items():
|
226 |
+
print(f" {name}: {desc}")
|
227 |
+
if hasattr(model, 'output_description'):
|
228 |
+
print(" Outputs:")
|
229 |
+
for name, desc in model.output_description.items():
|
230 |
+
print(f" {name}: {desc}")
|
231 |
+
else:
|
232 |
+
print("\nWarning: No metadata found in model")
|
233 |
+
|
234 |
+
# Check if model directory name contains context length pattern (ctxXXX)
|
235 |
+
ctx_len = 512
|
236 |
+
if args.context_length is None:
|
237 |
+
import re
|
238 |
+
ctx_match = re.search(r'ctx(\d+)', str(args.d))
|
239 |
+
if ctx_match:
|
240 |
+
ctx_len0 = int(ctx_match.group(1))
|
241 |
+
if 512 <= ctx_len0 <= 8096:
|
242 |
+
ctx_len = ctx_len0
|
243 |
+
print(f"\nDetected context length {ctx_len} from directory name")
|
244 |
+
else:
|
245 |
+
print(f"\nWarning: No context length found in directory {ctx_len} from directory name {args.d}")
|
246 |
+
else:
|
247 |
+
ctx_len = args.context_length
|
248 |
+
|
249 |
+
# Use defaults
|
250 |
+
metadata['context_length'] = ctx_len
|
251 |
+
metadata['state_length'] = ctx_len
|
252 |
+
metadata['batch_size'] = 64
|
253 |
+
metadata['lut_bits'] = 4
|
254 |
+
metadata['num_chunks'] = 4
|
255 |
+
print("\nUsing default parameters:")
|
256 |
+
print(f" Context Length: {metadata['context_length']}")
|
257 |
+
print(f" State Length: {metadata['state_length']}")
|
258 |
+
print(f" Prefill Batch Size: {metadata['batch_size']}")
|
259 |
+
print(f" LUT Bits: {metadata['lut_bits']}")
|
260 |
+
print(f" Number of Chunks: {metadata['num_chunks']}")
|
261 |
+
return metadata
|
262 |
+
|
263 |
+
def load_models(args,metadata):
|
264 |
+
"""Load all required models and extract metadata."""
|
265 |
+
print("\nLoading models...")
|
266 |
+
|
267 |
+
try:
|
268 |
+
# Load embeddings model
|
269 |
+
print("\nLoading embeddings model...")
|
270 |
+
embed_path = parse_model_path(args.embed)
|
271 |
+
print(f"Loading from: {embed_path}")
|
272 |
+
embed_model = load_model(embed_path)
|
273 |
+
print("Embeddings model loaded successfully")
|
274 |
+
metadata = load_metadata(embed_model,args)
|
275 |
+
|
276 |
+
|
277 |
+
|
278 |
+
# Load LM head model
|
279 |
+
print("\nLoading LM head model...")
|
280 |
+
lmhead_path = parse_model_path(args.lmhead)
|
281 |
+
print(f"Loading from: {lmhead_path}")
|
282 |
+
lmhead_model = load_model(lmhead_path)
|
283 |
+
print("LM head model loaded successfully")
|
284 |
+
|
285 |
+
# Parse FFN path and find chunks if needed
|
286 |
+
print("\nLoading FFN+PREFILL model(s)...")
|
287 |
+
ffn_path = parse_model_path(args.ffn)
|
288 |
+
chunk_no, total_chunks = parse_ffn_filename(ffn_path)
|
289 |
+
|
290 |
+
ffn_models = []
|
291 |
+
if chunk_no and total_chunks:
|
292 |
+
print(f"\nDetected chunked FFN+PREFILL model ({total_chunks} chunks)")
|
293 |
+
# Find and load all chunks
|
294 |
+
chunk_paths = find_all_chunks(ffn_path)
|
295 |
+
if len(chunk_paths) != total_chunks:
|
296 |
+
raise ValueError(f"Found {len(chunk_paths)} chunks but filename indicates {total_chunks} chunks")
|
297 |
+
|
298 |
+
for chunk_path in chunk_paths:
|
299 |
+
print(f"\nLoading FFN+PREFILL chunk: {Path(chunk_path).name}")
|
300 |
+
try:
|
301 |
+
# For chunked models, we need both infer and prefill functions
|
302 |
+
ffn_models.append({
|
303 |
+
'infer': load_model(chunk_path, function_name='infer'),
|
304 |
+
'prefill': load_model(chunk_path, function_name='prefill')
|
305 |
+
})
|
306 |
+
print("Chunk loaded successfully")
|
307 |
+
except Exception as e:
|
308 |
+
print(f"Error loading chunk {chunk_path}: {str(e)}")
|
309 |
+
raise
|
310 |
+
metadata = load_metadata(ffn_models[0],args)
|
311 |
+
|
312 |
+
else:
|
313 |
+
print("\nLoading single FFN model...")
|
314 |
+
ffn_models.append(load_model(ffn_path))
|
315 |
+
print("FFN model loaded successfully")
|
316 |
+
|
317 |
+
return embed_model, ffn_models, lmhead_model, metadata
|
318 |
+
|
319 |
+
except Exception as e:
|
320 |
+
print(f"\nError loading models: {str(e)}")
|
321 |
+
print("\nPlease ensure all model files exist and are accessible.")
|
322 |
+
print("Expected files:")
|
323 |
+
print(f" Embeddings: {args.embed}")
|
324 |
+
print(f" LM Head: {args.lmhead}")
|
325 |
+
print(f" FFN: {args.ffn}")
|
326 |
+
raise
|
327 |
+
|
# At the top of the file, make this a default path

def initialize_tokenizer(model_path=None):
    """Initialize and configure the tokenizer."""
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            str(model_path),
            use_fast=False,
            trust_remote_code=True
        )

        print("\nTokenizer Configuration:")
        print(f"Tokenizer type: {type(tokenizer)}")
        print(f"Tokenizer name: {tokenizer.__class__.__name__}")
        print(f"Vocabulary size: {len(tokenizer)}")
        print(f"Model max length: {tokenizer.model_max_length}")

        if tokenizer.pad_token is None:
            tokenizer.pad_token = tokenizer.eos_token
            tokenizer.pad_token_id = tokenizer.eos_token_id
            print("Set PAD token to EOS token")

        tokenizer.padding_side = "left"

        print("\nSpecial Tokens:")
        print(f"PAD token: '{tokenizer.pad_token}' (ID: {tokenizer.pad_token_id})")
        print(f"EOS token: '{tokenizer.eos_token}' (ID: {tokenizer.eos_token_id})")
        print(f"BOS token: '{tokenizer.bos_token}' (ID: {tokenizer.bos_token_id})")
        print(f"UNK token: '{tokenizer.unk_token}' (ID: {tokenizer.unk_token_id})")

        return tokenizer

    except Exception as e:
        print(f"\nError: Failed to load tokenizer from {model_path}")
        print(f"Error details: {str(e)}")
        print(f"Error type: {type(e)}")
        print("\nThis code requires a Llama 3.2 model for chat template functionality.")
        print("Please provide the path to a Llama 3.2 model directory.")
        import traceback
        traceback.print_exc()
        raise


def make_causal_mask(length, start):
    """Create causal attention mask."""
    mask = np.full((1, 1, length, length), -np.inf, dtype=np.float16)
    row_indices = np.arange(length).reshape(length, 1)
    col_indices = np.arange(length).reshape(1, length)
    mask[:, :, col_indices <= (row_indices + start)] = 0
    return mask
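For intuition, `make_causal_mask` builds an additive attention mask: entries a position may attend to are 0 and everything else is -inf. A toy run with `length=4`, `start=0` (illustration only, not part of the script):

```python
import numpy as np

mask = np.full((1, 1, 4, 4), -np.inf, dtype=np.float16)
rows = np.arange(4).reshape(4, 1)
cols = np.arange(4).reshape(1, 4)
mask[:, :, cols <= rows] = 0  # position i attends to positions 0..i
print(mask[0, 0])
# [[  0. -inf -inf -inf]
#  [  0.   0. -inf -inf]
#  [  0.   0.   0. -inf]
#  [  0.   0.   0.   0.]]
```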
def run_prefill(embed_model, ffn_models, input_ids, current_pos, context_length, batch_size, state):
    """Run prefill on the input sequence."""
    #print(f"[DEBUG] Running prefill from 0 to {current_pos}")

    # Process in batches
    batch_pos = 0
    while batch_pos < current_pos:
        batch_end = min(batch_pos + batch_size, current_pos)
        current_batch_size = batch_end - batch_pos

        #print(f"[DEBUG] Prefill batch {batch_pos}-{batch_end} (size={current_batch_size})")

        # Get current batch
        batch_input = input_ids[:, batch_pos:batch_end]

        # Pad to full batch size
        batch_input = F.pad(
            batch_input,
            (0, batch_size - current_batch_size),
            value=0
        )

        # Generate position IDs for this batch
        position_ids = torch.arange(batch_pos, batch_pos + batch_size, dtype=torch.int32)

        # Create causal mask for this batch
        causal_mask = make_causal_mask(context_length, 0)  # Always start from 0 for prefill
        causal_mask = torch.tensor(causal_mask, dtype=torch.float16)
        batch_causal_mask = causal_mask[:, :, batch_pos:batch_pos + batch_size, :]

        # Run embeddings
        hidden_states = torch.from_numpy(
            embed_model.predict({'input_ids': batch_input.numpy()})['hidden_states']
        )

        # Run through FFN chunks
        for ffn_model in ffn_models:
            if isinstance(ffn_model, dict):
                inputs = {
                    'hidden_states': hidden_states.numpy(),
                    'position_ids': position_ids.numpy(),
                    'causal_mask': batch_causal_mask.numpy(),
                    'current_pos': np.array([batch_pos], dtype=np.int32)
                }
                output = ffn_model['prefill'].predict(inputs, state)
                hidden_states = torch.from_numpy(output['output_hidden_states'])

        batch_pos = batch_end

    return torch.tensor([current_pos], dtype=torch.int32)
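`run_prefill` walks the prompt in fixed-size windows so every Neural Engine call sees the same `(1, batch_size)` input shape; the final window is zero-padded. The loop bounds for a hypothetical 100-token prompt with this model's batch size of 64 (illustration only):

```python
current_pos, batch_size = 100, 64  # assumed prompt length; batch size from meta.yaml

batch_pos = 0
while batch_pos < current_pos:
    batch_end = min(batch_pos + batch_size, current_pos)
    print(f"prefill tokens [{batch_pos}:{batch_end}], padded to {batch_size}")
    batch_pos = batch_end
# prefill tokens [0:64], padded to 64
# prefill tokens [64:100], padded to 64
```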
def generate_next_token(embed_model, ffn_models, lmhead_model, input_ids, pos, context_length, state=None, temperature=0.0):
    """Generate the next token."""
    # Get current token
    current_token = input_ids[:, pos-1:pos]

    # Run embeddings
    hidden_states = torch.from_numpy(
        embed_model.predict({'input_ids': current_token.numpy()})['hidden_states']
    )

    # Create masks
    update_mask = torch.zeros((1, 1, context_length, 1), dtype=torch.float16)
    update_mask[0, 0, pos-1, 0] = 1.0
    position_ids = torch.tensor([pos-1], dtype=torch.int32)

    # Create causal mask for current position
    causal_mask = make_causal_mask(context_length, 0)  # Always start from 0 for generation
    single_causal_mask = torch.tensor(causal_mask[:, :, pos-1:pos, :], dtype=torch.float16)

    # Run through FFN chunks
    for ffn_model in ffn_models:
        if isinstance(ffn_model, dict):
            inputs = {
                'hidden_states': hidden_states.numpy(),
                'update_mask': update_mask.numpy(),
                'position_ids': position_ids.numpy(),
                'causal_mask': single_causal_mask.numpy(),
                'current_pos': position_ids.numpy()
            }
            output = ffn_model['infer'].predict(inputs, state)
            hidden_states = torch.from_numpy(output['output_hidden_states'])

    # Run LM head and get next token
    lm_output = lmhead_model.predict({'hidden_states': hidden_states.numpy()})

    if 'logits1' in lm_output:
        logits_parts = []
        for i in range(1, 9):
            key = f'logits{i}'
            if key in lm_output:
                logits_parts.append(torch.from_numpy(lm_output[key]))
        logits = torch.cat(logits_parts, dim=-1)
    else:
        logits = torch.from_numpy(lm_output['output_logits'])

    if temperature > 0:
        logits = logits / temperature
        probs = F.softmax(logits[0, -1, :], dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).item()
    else:
        next_token = torch.argmax(logits[0, -1, :]).item()

    return next_token
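`generate_next_token` defaults to greedy decoding (`temperature=0.0`, which is what `chat_loop` uses), with temperature sampling available. The same decision rule on dummy logits, as a self-contained illustration:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 1.0, 0.1]])  # dummy next-token logits
temperature = 0.0  # 0 -> greedy; > 0 -> sample

if temperature > 0:
    probs = F.softmax(logits[0] / temperature, dim=-1)
    next_token = torch.multinomial(probs, num_samples=1).item()
else:
    next_token = torch.argmax(logits[0]).item()
print(next_token)  # 0 (greedy picks the largest logit)
```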
def create_unified_state(ffn_models, context_length):
    """Create unified KV cache state for transformer."""
    if isinstance(ffn_models[0], dict):
        # Use first FFN model's prefill function to create state
        state = ffn_models[0]['prefill'].make_state()
        print(f"\nCreated unified transformer state for {len(ffn_models)} chunks")
        return state
    else:
        state = ffn_models[0].make_state()
        print("\nCreated unified transformer state")
        return state

def get_user_input():
    sys.stdout.write(f"\n{LIGHT_GREEN}You:{RESET_COLOR} ")
    sys.stdout.flush()
    line = sys.stdin.readline()
    if not line:
        raise EOFError
    return line.rstrip('\n')
def chat_loop(embed_model, ffn_models, lmhead_model, tokenizer, metadata, state, auto_prompt=None, warmup=False):
    """Interactive chat loop."""
    context_length = metadata.get('context_length')
    batch_size = metadata.get('batch_size', 64)

    if not warmup:
        print(f"\nUsing context length: {context_length}")
        print("\nStarting chat session. Press Ctrl+D to exit.")
        print("Type your message and press Enter to chat.")

    # Keep track of conversation history
    conversation = []

    try:
        while True:
            try:
                if not warmup:
                    print(f"\n{LIGHT_GREEN}You:{RESET_COLOR}", end=' ', flush=True)
                if auto_prompt is not None:
                    user_input = auto_prompt
                    if not warmup:
                        print(user_input)
                else:
                    user_input = input().strip()
            except EOFError:
                if not warmup:
                    print("\nExiting chat...")
                break

            if not user_input:
                continue

            # Add user message to conversation
            conversation.append({"role": "user", "content": user_input})

            # Format using chat template with full history
            base_input_ids = tokenizer.apply_chat_template(
                conversation,
                return_tensors="pt",
                add_generation_prompt=True
            ).to(torch.int32)

            # Check if we need to trim history
            while base_input_ids.size(1) > context_length - 100:  # Leave room for response
                # Remove oldest message pair (user + assistant)
                if len(conversation) > 2:
                    conversation = conversation[2:]  # Remove oldest pair
                    base_input_ids = tokenizer.apply_chat_template(
                        conversation,
                        return_tensors="pt",
                        add_generation_prompt=True
                    ).to(torch.int32)
                else:
                    # If only current message remains and still too long, truncate
                    base_input_ids = base_input_ids[:, -context_length//2:]
                    break

            context_pos = base_input_ids.size(1)

            # Pad sequence to context_size
            input_ids = F.pad(
                base_input_ids,
                (0, context_length - context_pos),
                value=0
            )

            if not warmup:
                print(f"\n{LIGHT_BLUE}Assistant:{RESET_COLOR}", end=' ', flush=True)

            # Initialize token printer and collect response
            token_printer = TokenPrinter(tokenizer)
            response_tokens = []
            generation_start_time = time.time()

            try:
                # Create initial causal mask
                causal_mask = make_causal_mask(context_length, 0)
                causal_mask = torch.tensor(causal_mask, dtype=torch.float16)

                # Run prefill on entire context
                current_pos = run_prefill(
                    embed_model,
                    ffn_models,
                    input_ids,
                    context_pos,
                    context_length,
                    batch_size,
                    state
                )
                #print(f"\n[DEBUG] After initial prefill - current_pos: {current_pos}")

                # Generation loop
                pos = context_pos
                tokens_generated = 0
                inference_start = time.time()  # Start inference timing

                while True:
                    # Check if we need to shift window
                    if pos >= context_length - 2:
                        # Calculate shift to maintain full batches
                        batch_size = metadata.get('batch_size', 64)
                        # Calculate max batches that fit in context
                        max_batches = context_length // batch_size
                        desired_batches = max(1, max_batches - 2)  # Leave room for new tokens
                        new_size = min(desired_batches * batch_size, context_length - batch_size)

                        # Create shifted input_ids
                        tmp = torch.zeros((1, context_length), dtype=torch.int32)
                        tmp[:, 0:new_size] = input_ids[:, pos-new_size:pos]
                        input_ids = tmp

                        # Run prefill on the shifted content, keeping the same state
                        #state = create_unified_state(ffn_models, context_length)
                        current_pos = run_prefill(
                            embed_model,
                            ffn_models,
                            input_ids,
                            new_size,  # Prefill the entire shifted content
                            context_length,
                            batch_size,
                            state
                        )

                        # Start generating from the next position
                        pos = new_size  # Don't back up, continue from where we left off

                        #print(f"\n[DEBUG] After shift - next token will be at pos {pos}")
                        #print(f"[DEBUG] Context before next token: {tokenizer.decode(input_ids[0, pos-40:pos])}")

                        window_shifted = True

                    # Generate next token
                    next_token = generate_next_token(
                        embed_model,
                        ffn_models,
                        lmhead_model,
                        input_ids,
                        pos,
                        context_length,
                        state
                    )

                    # Add token
                    input_ids[0, pos] = next_token
                    if not warmup:
                        token_printer.add_token(next_token)
                        token_printer.drain_buffer()
                    response_tokens.append(next_token)

                    pos += 1
                    tokens_generated += 1

                    # In warmup mode, limit tokens
                    if warmup and tokens_generated >= WARMUP_TOKEN_LIMIT:
                        break

                    if next_token == tokenizer.eos_token_id:
                        break

                inference_time = time.time() - inference_start  # Calculate inference time

                # Add assistant response to conversation
                response_text = token_printer.stop()
                conversation.append({"role": "assistant", "content": response_text})

                # Print stats only if not in warmup
                if not warmup:
                    total_time = time.time() - generation_start_time
                    prefill_time = total_time - inference_time
                    inference_tokens_per_sec = len(response_tokens) / inference_time if inference_time > 0 else 0
                    prefill_ms = prefill_time * 1000
                    prefill_tokens_per_sec = context_pos / prefill_time if prefill_time > 0 else 0
                    print(f"{DARK_BLUE}{inference_tokens_per_sec:.1f} t/s, "
                          f"TTFT: {prefill_ms:.1f}ms ({prefill_tokens_per_sec:.1f} t/s), "
                          f"{len(response_tokens)} tokens{RESET_COLOR}")

                if auto_prompt is not None:
                    break

            except KeyboardInterrupt:
                if not warmup:
                    print("\nGeneration interrupted")
                token_printer.stop()
                continue

    except Exception as e:
        if not warmup:
            print(f"\nError in chat loop: {str(e)}")
            import traceback
            traceback.print_exc()
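The window shift above keeps the most recent `new_size` tokens, re-runs prefill on them with the existing state, and continues generating from position `new_size`. With this model's parameters (`context_length=512`, `batch_size=64`) the arithmetic works out as follows (illustration only):

```python
context_length, batch_size = 512, 64  # values from meta.yaml

max_batches = context_length // batch_size   # 8 full prefill batches fit
desired_batches = max(1, max_batches - 2)    # 6, leaving room for new tokens
new_size = min(desired_batches * batch_size,  # 6 * 64 = 384
               context_length - batch_size)   # capped at 448
print(new_size)  # 384 -> keep the last 384 tokens, generate from pos 384 onward
```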
def main():
    parser = argparse.ArgumentParser(description='Full Chat with CoreML LLaMA with context window shifting (c) 2025 Anemll')

    # Add meta.yaml option
    parser.add_argument('--meta', type=str, help='Path to meta.yaml to load all parameters')

    # Add existing arguments
    parser.add_argument('--d', '--dir', type=str, default='.',
                        help='Directory containing model files (default: current directory)')
    parser.add_argument('--embed', type=str, required=False,
                        help='Path to embeddings model (relative to --dir)')
    parser.add_argument('--ffn', type=str, required=False,
                        help='Path to FFN model (can be chunked, relative to --dir)')
    parser.add_argument('--lmhead', type=str, required=False,
                        help='Path to LM head model (relative to --dir)')
    parser.add_argument('--tokenizer', type=str, required=False,
                        help='Path to tokenizer')

    # Add new argument for auto-generation
    parser.add_argument('--prompt', type=str,
                        help='If specified, run once with this prompt and exit')

    # Model configuration
    parser.add_argument('--context-length', type=int,
                        help='Context length for the model (default: 512); if not provided, it is detected from the model directory name (ctxNUMBER)')

    args = parser.parse_args()

    # If meta.yaml is provided, load parameters from it
    if args.meta:
        try:
            with open(args.meta, 'r') as f:
                meta = yaml.safe_load(f)
            params = meta['model_info']['parameters']

            # Set model directory to meta.yaml directory if not specified
            if not args.d or args.d == '.':
                args.d = str(Path(args.meta).parent)

            # Build model paths based on parameters
            prefix = params.get('model_prefix', 'llama')  # Default to 'llama' if not specified
            lut_ffn = f"_lut{params['lut_ffn']}" if params['lut_ffn'] != 'none' else ''
            lut_lmhead = f"_lut{params['lut_lmhead']}" if params['lut_lmhead'] != 'none' else ''
            num_chunks = int(params['num_chunks'])

            # Set model paths if not specified
            if not args.embed:
                args.embed = f'{prefix}_embeddings'
            if not args.lmhead:
                args.lmhead = f'{prefix}_lm_head{lut_lmhead}'
            if not args.ffn:
                args.ffn = f'{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}'
            if not args.tokenizer:
                args.tokenizer = args.d

            # Set other parameters
            args.context_length = int(params['context_length'])
            args.batch_size = int(params['batch_size'])
            args.num_chunks = num_chunks

            print(f"\nLoaded parameters from {args.meta}:")
            print(f"  Context Length: {args.context_length}")
            print(f"  Batch Size: {args.batch_size}")
            print(f"  Num Chunks: {args.num_chunks}")
            print(f"  Models Directory: {args.d}")
            print(f"  Embeddings: {args.embed}")
            print(f"  LM Head: {args.lmhead}")
            print(f"  FFN: {args.ffn}")

        except Exception as e:
            print(f"\nError loading meta.yaml: {str(e)}")
            sys.exit(1)

    # Convert directory to absolute path
    model_dir = Path(args.d).resolve()
    if not model_dir.exists():
        print(f"\nError: Model directory not found: {model_dir}")
        return 1

    print(f"\nUsing model directory: {model_dir}")
    print(f"Context length: {args.context_length}")

    try:
        # Update paths to be relative to model directory
        args.embed = str(model_dir / args.embed)
        args.ffn = str(model_dir / args.ffn)
        args.lmhead = str(model_dir / args.lmhead)

        # Handle tokenizer path separately since it's not relative to model_dir
        if args.tokenizer is None:
            args.tokenizer = str(model_dir)

        if not Path(args.tokenizer).exists():
            print(f"\nError: Tokenizer directory not found: {args.tokenizer}")
            return 1

        args.tokenizer = str(Path(args.tokenizer).resolve())  # Convert to absolute path
        print(f"Using tokenizer path: {args.tokenizer}")

        metadata = {}
        # Load models and extract metadata
        embed_model, ffn_models, lmhead_model, metadata = load_models(args, metadata)

        print(f"\nMetadata before applying args.context_length: {metadata}")

        # Override context length from command line if provided
        if args.context_length is not None:
            metadata['context_length'] = args.context_length
            metadata['state_length'] = args.context_length  # Also update state_length
            print(f"\nOverriding context length from command line: {args.context_length}")

        print(f"\nMetadata after load_models: {metadata}")

        # Load tokenizer with resolved path
        tokenizer = initialize_tokenizer(args.tokenizer)
        if tokenizer is None:
            raise RuntimeError("Failed to initialize tokenizer")

        # Create unified state once
        state = create_unified_state(ffn_models, metadata['context_length'])

        # Warmup runs to prevent Python GIL issues with CoreML!
        for i in range(2):
            chat_loop(
                embed_model=embed_model,
                ffn_models=ffn_models,
                lmhead_model=lmhead_model,
                tokenizer=tokenizer,
                metadata=metadata,
                state=state,  # Pass the state
                warmup=True,
                auto_prompt="who are you?"
            )

        # Main run
        chat_loop(
            embed_model=embed_model,
            ffn_models=ffn_models,
            lmhead_model=lmhead_model,
            tokenizer=tokenizer,
            metadata=metadata,
            state=state,  # Pass the state
            warmup=False,
            auto_prompt=args.prompt
        )

    except Exception as e:
        print(f"\nError: {str(e)}")
        import traceback
        traceback.print_exc()
        return 1

    return 0

if __name__ == "__main__":
    exit(main())
llama_FFN_PF_lut4_chunk_01of02.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24adfe5728757c3b77317864902c81a418e2ae9c52fdc0eeab85c3c84d05483c
size 680819791

llama_FFN_PF_lut4_chunk_02of02.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10c5edc80b71059127fdbd78273a9708c52c64eeb47a56b4dabe5cefb4c13f44
size 680975559

llama_embeddings.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c657fb17645f1ad213e1bad48d7200f92704f38d7bdfe62e6d85d4cd2ab53868
size 605473094

llama_lm_head_lut4.mlmodelc.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31dab6e09a95970f10e6981b597d69b029414b92bd1c44e95499d3b50ba1303b
size 605474513
meta.yaml ADDED
@@ -0,0 +1,20 @@
model_info:
  name: anemll-Meta-Llama-3.2-3B-ctx512
  version: 0.1.1
  description: |
    Demonstrates running Meta-Llama-3.2-3B on Apple Neural Engine
    Context length: 512
    Batch size: 64
    Chunks: 2
  license: MIT
  author: Anemll
  framework: Core ML
  language: Python
  parameters:
    context_length: 512
    batch_size: 64
    lut_embeddings: none
    lut_ffn: 4
    lut_lmhead: 4
    num_chunks: 2
    model_prefix: llama
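These parameters are what `chat_full.py --meta meta.yaml` reads to assemble the CoreML artifact names, matching the files in this upload. A small illustration of that string assembly, mirroring the logic in `main()`:

```python
params = {"model_prefix": "llama", "lut_ffn": 4, "lut_lmhead": 4, "num_chunks": 2}

prefix = params["model_prefix"]
lut_ffn = f"_lut{params['lut_ffn']}" if params["lut_ffn"] != "none" else ""
lut_lmhead = f"_lut{params['lut_lmhead']}" if params["lut_lmhead"] != "none" else ""
num_chunks = int(params["num_chunks"])

print(f"{prefix}_embeddings")                                   # llama_embeddings
print(f"{prefix}_lm_head{lut_lmhead}")                          # llama_lm_head_lut4
print(f"{prefix}_FFN_PF{lut_ffn}_chunk_01of{num_chunks:02d}")   # llama_FFN_PF_lut4_chunk_01of02
```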
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff.
tokenizer_config.json ADDED
@@ -0,0 +1,2062 @@
{
  "added_tokens_decoder": {
    "128000": {"content": "<|begin_of_text|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128001": {"content": "<|end_of_text|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128002": {"content": "<|reserved_special_token_0|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128003": {"content": "<|reserved_special_token_1|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128004": {"content": "<|finetune_right_pad_id|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128005": {"content": "<|reserved_special_token_2|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128006": {"content": "<|start_header_id|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128007": {"content": "<|end_header_id|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128008": {"content": "<|eom_id|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128009": {"content": "<|eot_id|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128010": {"content": "<|python_tag|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128011": {"content": "<|reserved_special_token_3|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128012": {"content": "<|reserved_special_token_4|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128013": {"content": "<|reserved_special_token_5|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128014": {"content": "<|reserved_special_token_6|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128015": {"content": "<|reserved_special_token_7|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128016": {"content": "<|reserved_special_token_8|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128017": {"content": "<|reserved_special_token_9|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128018": {"content": "<|reserved_special_token_10|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128019": {"content": "<|reserved_special_token_11|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128020": {"content": "<|reserved_special_token_12|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128021": {"content": "<|reserved_special_token_13|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128022": {"content": "<|reserved_special_token_14|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128023": {"content": "<|reserved_special_token_15|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128024": {"content": "<|reserved_special_token_16|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128025": {"content": "<|reserved_special_token_17|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128026": {"content": "<|reserved_special_token_18|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128027": {"content": "<|reserved_special_token_19|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128028": {"content": "<|reserved_special_token_20|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128029": {"content": "<|reserved_special_token_21|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128030": {"content": "<|reserved_special_token_22|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128031": {"content": "<|reserved_special_token_23|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128032": {"content": "<|reserved_special_token_24|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128033": {"content": "<|reserved_special_token_25|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128034": {"content": "<|reserved_special_token_26|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128035": {"content": "<|reserved_special_token_27|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128036": {"content": "<|reserved_special_token_28|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128037": {"content": "<|reserved_special_token_29|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128038": {"content": "<|reserved_special_token_30|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128039": {"content": "<|reserved_special_token_31|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128040": {"content": "<|reserved_special_token_32|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128041": {"content": "<|reserved_special_token_33|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128042": {"content": "<|reserved_special_token_34|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128043": {"content": "<|reserved_special_token_35|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128044": {"content": "<|reserved_special_token_36|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128045": {"content": "<|reserved_special_token_37|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128046": {"content": "<|reserved_special_token_38|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128047": {"content": "<|reserved_special_token_39|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128048": {"content": "<|reserved_special_token_40|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128049": {"content": "<|reserved_special_token_41|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128050": {"content": "<|reserved_special_token_42|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128051": {"content": "<|reserved_special_token_43|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128052": {"content": "<|reserved_special_token_44|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128053": {"content": "<|reserved_special_token_45|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128054": {"content": "<|reserved_special_token_46|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128055": {"content": "<|reserved_special_token_47|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128056": {"content": "<|reserved_special_token_48|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128057": {"content": "<|reserved_special_token_49|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128058": {"content": "<|reserved_special_token_50|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128059": {"content": "<|reserved_special_token_51|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128060": {"content": "<|reserved_special_token_52|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128061": {"content": "<|reserved_special_token_53|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128062": {"content": "<|reserved_special_token_54|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128063": {"content": "<|reserved_special_token_55|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128064": {"content": "<|reserved_special_token_56|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128065": {"content": "<|reserved_special_token_57|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128066": {"content": "<|reserved_special_token_58|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128067": {"content": "<|reserved_special_token_59|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128068": {"content": "<|reserved_special_token_60|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128069": {"content": "<|reserved_special_token_61|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128070": {"content": "<|reserved_special_token_62|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128071": {"content": "<|reserved_special_token_63|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128072": {"content": "<|reserved_special_token_64|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128073": {"content": "<|reserved_special_token_65|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128074": {"content": "<|reserved_special_token_66|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128075": {"content": "<|reserved_special_token_67|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128076": {"content": "<|reserved_special_token_68|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128077": {"content": "<|reserved_special_token_69|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128078": {"content": "<|reserved_special_token_70|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128079": {"content": "<|reserved_special_token_71|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128080": {"content": "<|reserved_special_token_72|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128081": {"content": "<|reserved_special_token_73|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128082": {"content": "<|reserved_special_token_74|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128083": {"content": "<|reserved_special_token_75|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128084": {"content": "<|reserved_special_token_76|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128085": {"content": "<|reserved_special_token_77|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128086": {"content": "<|reserved_special_token_78|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128087": {"content": "<|reserved_special_token_79|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128088": {"content": "<|reserved_special_token_80|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128089": {"content": "<|reserved_special_token_81|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128090": {"content": "<|reserved_special_token_82|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128091": {"content": "<|reserved_special_token_83|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128092": {"content": "<|reserved_special_token_84|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128093": {"content": "<|reserved_special_token_85|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128094": {"content": "<|reserved_special_token_86|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128095": {"content": "<|reserved_special_token_87|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128096": {"content": "<|reserved_special_token_88|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128097": {"content": "<|reserved_special_token_89|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128098": {"content": "<|reserved_special_token_90|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128099": {"content": "<|reserved_special_token_91|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128100": {"content": "<|reserved_special_token_92|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128101": {"content": "<|reserved_special_token_93|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128102": {"content": "<|reserved_special_token_94|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128103": {"content": "<|reserved_special_token_95|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128104": {"content": "<|reserved_special_token_96|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128105": {"content": "<|reserved_special_token_97|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128106": {"content": "<|reserved_special_token_98|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128107": {"content": "<|reserved_special_token_99|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128108": {"content": "<|reserved_special_token_100|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128109": {"content": "<|reserved_special_token_101|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128110": {"content": "<|reserved_special_token_102|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128111": {"content": "<|reserved_special_token_103|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128112": {"content": "<|reserved_special_token_104|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128113": {"content": "<|reserved_special_token_105|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128114": {"content": "<|reserved_special_token_106|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128115": {"content": "<|reserved_special_token_107|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128116": {"content": "<|reserved_special_token_108|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128117": {"content": "<|reserved_special_token_109|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128118": {"content": "<|reserved_special_token_110|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128119": {"content": "<|reserved_special_token_111|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128120": {"content": "<|reserved_special_token_112|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128121": {"content": "<|reserved_special_token_113|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128122": {"content": "<|reserved_special_token_114|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128123": {"content": "<|reserved_special_token_115|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128124": {"content": "<|reserved_special_token_116|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128125": {"content": "<|reserved_special_token_117|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128126": {"content": "<|reserved_special_token_118|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128127": {"content": "<|reserved_special_token_119|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128128": {"content": "<|reserved_special_token_120|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128129": {"content": "<|reserved_special_token_121|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "128130": {"content": "<|reserved_special_token_122|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false,
|
1049 |
+
"special": true
|
1050 |
+
},
|
1051 |
+
"128131": {
|
1052 |
+
"content": "<|reserved_special_token_123|>",
|
1053 |
+
"lstrip": false,
|
1054 |
+
"normalized": false,
|
1055 |
+
"rstrip": false,
|
1056 |
+
"single_word": false,
|
1057 |
+
"special": true
|
1058 |
+
},
|
1059 |
+
"128132": {
|
1060 |
+
"content": "<|reserved_special_token_124|>",
|
1061 |
+
"lstrip": false,
|
1062 |
+
"normalized": false,
|
1063 |
+
"rstrip": false,
|
1064 |
+
"single_word": false,
|
1065 |
+
"special": true
|
1066 |
+
},
|
1067 |
+
"128133": {
|
1068 |
+
"content": "<|reserved_special_token_125|>",
|
1069 |
+
"lstrip": false,
|
1070 |
+
"normalized": false,
|
1071 |
+
"rstrip": false,
|
1072 |
+
"single_word": false,
|
1073 |
+
"special": true
|
1074 |
+
},
|
1075 |
+
"128134": {
|
1076 |
+
"content": "<|reserved_special_token_126|>",
|
1077 |
+
"lstrip": false,
|
1078 |
+
"normalized": false,
|
1079 |
+
"rstrip": false,
|
1080 |
+
"single_word": false,
|
1081 |
+
"special": true
|
1082 |
+
},
|
1083 |
+
"128135": {
|
1084 |
+
"content": "<|reserved_special_token_127|>",
|
1085 |
+
"lstrip": false,
|
1086 |
+
"normalized": false,
|
1087 |
+
"rstrip": false,
|
1088 |
+
"single_word": false,
|
1089 |
+
"special": true
|
1090 |
+
},
|
1091 |
+
"128136": {
|
1092 |
+
"content": "<|reserved_special_token_128|>",
|
1093 |
+
"lstrip": false,
|
1094 |
+
"normalized": false,
|
1095 |
+
"rstrip": false,
|
1096 |
+
"single_word": false,
|
1097 |
+
"special": true
|
1098 |
+
},
|
1099 |
+
"128137": {
|
1100 |
+
"content": "<|reserved_special_token_129|>",
|
1101 |
+
"lstrip": false,
|
1102 |
+
"normalized": false,
|
1103 |
+
"rstrip": false,
|
1104 |
+
"single_word": false,
|
1105 |
+
"special": true
|
1106 |
+
},
|
1107 |
+
"128138": {
|
1108 |
+
"content": "<|reserved_special_token_130|>",
|
1109 |
+
"lstrip": false,
|
1110 |
+
"normalized": false,
|
1111 |
+
"rstrip": false,
|
1112 |
+
"single_word": false,
|
1113 |
+
"special": true
|
1114 |
+
},
|
1115 |
+
"128139": {
|
1116 |
+
"content": "<|reserved_special_token_131|>",
|
1117 |
+
"lstrip": false,
|
1118 |
+
"normalized": false,
|
1119 |
+
"rstrip": false,
|
1120 |
+
"single_word": false,
|
1121 |
+
"special": true
|
1122 |
+
},
|
1123 |
+
"128140": {
|
1124 |
+
"content": "<|reserved_special_token_132|>",
|
1125 |
+
"lstrip": false,
|
1126 |
+
"normalized": false,
|
1127 |
+
"rstrip": false,
|
1128 |
+
"single_word": false,
|
1129 |
+
"special": true
|
1130 |
+
},
|
1131 |
+
"128141": {
|
1132 |
+
"content": "<|reserved_special_token_133|>",
|
1133 |
+
"lstrip": false,
|
1134 |
+
"normalized": false,
|
1135 |
+
"rstrip": false,
|
1136 |
+
"single_word": false,
|
1137 |
+
"special": true
|
1138 |
+
},
|
1139 |
+
"128142": {
|
1140 |
+
"content": "<|reserved_special_token_134|>",
|
1141 |
+
"lstrip": false,
|
1142 |
+
"normalized": false,
|
1143 |
+
"rstrip": false,
|
1144 |
+
"single_word": false,
|
1145 |
+
"special": true
|
1146 |
+
},
|
1147 |
+
"128143": {
|
1148 |
+
"content": "<|reserved_special_token_135|>",
|
1149 |
+
"lstrip": false,
|
1150 |
+
"normalized": false,
|
1151 |
+
"rstrip": false,
|
1152 |
+
"single_word": false,
|
1153 |
+
"special": true
|
1154 |
+
},
|
1155 |
+
"128144": {
|
1156 |
+
"content": "<|reserved_special_token_136|>",
|
1157 |
+
"lstrip": false,
|
1158 |
+
"normalized": false,
|
1159 |
+
"rstrip": false,
|
1160 |
+
"single_word": false,
|
1161 |
+
"special": true
|
1162 |
+
},
|
1163 |
+
"128145": {
|
1164 |
+
"content": "<|reserved_special_token_137|>",
|
1165 |
+
"lstrip": false,
|
1166 |
+
"normalized": false,
|
1167 |
+
"rstrip": false,
|
1168 |
+
"single_word": false,
|
1169 |
+
"special": true
|
1170 |
+
},
|
1171 |
+
"128146": {
|
1172 |
+
"content": "<|reserved_special_token_138|>",
|
1173 |
+
"lstrip": false,
|
1174 |
+
"normalized": false,
|
1175 |
+
"rstrip": false,
|
1176 |
+
"single_word": false,
|
1177 |
+
"special": true
|
1178 |
+
},
|
1179 |
+
"128147": {
|
1180 |
+
"content": "<|reserved_special_token_139|>",
|
1181 |
+
"lstrip": false,
|
1182 |
+
"normalized": false,
|
1183 |
+
"rstrip": false,
|
1184 |
+
"single_word": false,
|
1185 |
+
"special": true
|
1186 |
+
},
|
1187 |
+
"128148": {
|
1188 |
+
"content": "<|reserved_special_token_140|>",
|
1189 |
+
"lstrip": false,
|
1190 |
+
"normalized": false,
|
1191 |
+
"rstrip": false,
|
1192 |
+
"single_word": false,
|
1193 |
+
"special": true
|
1194 |
+
},
|
1195 |
+
"128149": {
|
1196 |
+
"content": "<|reserved_special_token_141|>",
|
1197 |
+
"lstrip": false,
|
1198 |
+
"normalized": false,
|
1199 |
+
"rstrip": false,
|
1200 |
+
"single_word": false,
|
1201 |
+
"special": true
|
1202 |
+
},
|
1203 |
+
"128150": {
|
1204 |
+
"content": "<|reserved_special_token_142|>",
|
1205 |
+
"lstrip": false,
|
1206 |
+
"normalized": false,
|
1207 |
+
"rstrip": false,
|
1208 |
+
"single_word": false,
|
1209 |
+
"special": true
|
1210 |
+
},
|
1211 |
+
"128151": {
|
1212 |
+
"content": "<|reserved_special_token_143|>",
|
1213 |
+
"lstrip": false,
|
1214 |
+
"normalized": false,
|
1215 |
+
"rstrip": false,
|
1216 |
+
"single_word": false,
|
1217 |
+
"special": true
|
1218 |
+
},
|
1219 |
+
"128152": {
|
1220 |
+
"content": "<|reserved_special_token_144|>",
|
1221 |
+
"lstrip": false,
|
1222 |
+
"normalized": false,
|
1223 |
+
"rstrip": false,
|
1224 |
+
"single_word": false,
|
1225 |
+
"special": true
|
1226 |
+
},
|
1227 |
+
"128153": {
|
1228 |
+
"content": "<|reserved_special_token_145|>",
|
1229 |
+
"lstrip": false,
|
1230 |
+
"normalized": false,
|
1231 |
+
"rstrip": false,
|
1232 |
+
"single_word": false,
|
1233 |
+
"special": true
|
1234 |
+
},
|
1235 |
+
"128154": {
|
1236 |
+
"content": "<|reserved_special_token_146|>",
|
1237 |
+
"lstrip": false,
|
1238 |
+
"normalized": false,
|
1239 |
+
"rstrip": false,
|
1240 |
+
"single_word": false,
|
1241 |
+
"special": true
|
1242 |
+
},
|
1243 |
+
"128155": {
|
1244 |
+
"content": "<|reserved_special_token_147|>",
|
1245 |
+
"lstrip": false,
|
1246 |
+
"normalized": false,
|
1247 |
+
"rstrip": false,
|
1248 |
+
"single_word": false,
|
1249 |
+
"special": true
|
1250 |
+
},
|
1251 |
+
"128156": {
|
1252 |
+
"content": "<|reserved_special_token_148|>",
|
1253 |
+
"lstrip": false,
|
1254 |
+
"normalized": false,
|
1255 |
+
"rstrip": false,
|
1256 |
+
"single_word": false,
|
1257 |
+
"special": true
|
1258 |
+
},
|
1259 |
+
"128157": {
|
1260 |
+
"content": "<|reserved_special_token_149|>",
|
1261 |
+
"lstrip": false,
|
1262 |
+
"normalized": false,
|
1263 |
+
"rstrip": false,
|
1264 |
+
"single_word": false,
|
1265 |
+
"special": true
|
1266 |
+
},
|
1267 |
+
"128158": {
|
1268 |
+
"content": "<|reserved_special_token_150|>",
|
1269 |
+
"lstrip": false,
|
1270 |
+
"normalized": false,
|
1271 |
+
"rstrip": false,
|
1272 |
+
"single_word": false,
|
1273 |
+
"special": true
|
1274 |
+
},
|
1275 |
+
"128159": {
|
1276 |
+
"content": "<|reserved_special_token_151|>",
|
1277 |
+
"lstrip": false,
|
1278 |
+
"normalized": false,
|
1279 |
+
"rstrip": false,
|
1280 |
+
"single_word": false,
|
1281 |
+
"special": true
|
1282 |
+
},
|
1283 |
+
"128160": {
|
1284 |
+
"content": "<|reserved_special_token_152|>",
|
1285 |
+
"lstrip": false,
|
1286 |
+
"normalized": false,
|
1287 |
+
"rstrip": false,
|
1288 |
+
"single_word": false,
|
1289 |
+
"special": true
|
1290 |
+
},
|
1291 |
+
"128161": {
|
1292 |
+
"content": "<|reserved_special_token_153|>",
|
1293 |
+
"lstrip": false,
|
1294 |
+
"normalized": false,
|
1295 |
+
"rstrip": false,
|
1296 |
+
"single_word": false,
|
1297 |
+
"special": true
|
1298 |
+
},
|
1299 |
+
"128162": {
|
1300 |
+
"content": "<|reserved_special_token_154|>",
|
1301 |
+
"lstrip": false,
|
1302 |
+
"normalized": false,
|
1303 |
+
"rstrip": false,
|
1304 |
+
"single_word": false,
|
1305 |
+
"special": true
|
1306 |
+
},
|
1307 |
+
"128163": {
|
1308 |
+
"content": "<|reserved_special_token_155|>",
|
1309 |
+
"lstrip": false,
|
1310 |
+
"normalized": false,
|
1311 |
+
"rstrip": false,
|
1312 |
+
"single_word": false,
|
1313 |
+
"special": true
|
1314 |
+
},
|
1315 |
+
"128164": {
|
1316 |
+
"content": "<|reserved_special_token_156|>",
|
1317 |
+
"lstrip": false,
|
1318 |
+
"normalized": false,
|
1319 |
+
"rstrip": false,
|
1320 |
+
"single_word": false,
|
1321 |
+
"special": true
|
1322 |
+
},
|
1323 |
+
"128165": {
|
1324 |
+
"content": "<|reserved_special_token_157|>",
|
1325 |
+
"lstrip": false,
|
1326 |
+
"normalized": false,
|
1327 |
+
"rstrip": false,
|
1328 |
+
"single_word": false,
|
1329 |
+
"special": true
|
1330 |
+
},
|
1331 |
+
"128166": {
|
1332 |
+
"content": "<|reserved_special_token_158|>",
|
1333 |
+
"lstrip": false,
|
1334 |
+
"normalized": false,
|
1335 |
+
"rstrip": false,
|
1336 |
+
"single_word": false,
|
1337 |
+
"special": true
|
1338 |
+
},
|
1339 |
+
"128167": {
|
1340 |
+
"content": "<|reserved_special_token_159|>",
|
1341 |
+
"lstrip": false,
|
1342 |
+
"normalized": false,
|
1343 |
+
"rstrip": false,
|
1344 |
+
"single_word": false,
|
1345 |
+
"special": true
|
1346 |
+
},
|
1347 |
+
"128168": {
|
1348 |
+
"content": "<|reserved_special_token_160|>",
|
1349 |
+
"lstrip": false,
|
1350 |
+
"normalized": false,
|
1351 |
+
"rstrip": false,
|
1352 |
+
"single_word": false,
|
1353 |
+
"special": true
|
1354 |
+
},
|
1355 |
+
"128169": {
|
1356 |
+
"content": "<|reserved_special_token_161|>",
|
1357 |
+
"lstrip": false,
|
1358 |
+
"normalized": false,
|
1359 |
+
"rstrip": false,
|
1360 |
+
"single_word": false,
|
1361 |
+
"special": true
|
1362 |
+
},
|
1363 |
+
"128170": {
|
1364 |
+
"content": "<|reserved_special_token_162|>",
|
1365 |
+
"lstrip": false,
|
1366 |
+
"normalized": false,
|
1367 |
+
"rstrip": false,
|
1368 |
+
"single_word": false,
|
1369 |
+
"special": true
|
1370 |
+
},
|
1371 |
+
"128171": {
|
1372 |
+
"content": "<|reserved_special_token_163|>",
|
1373 |
+
"lstrip": false,
|
1374 |
+
"normalized": false,
|
1375 |
+
"rstrip": false,
|
1376 |
+
"single_word": false,
|
1377 |
+
"special": true
|
1378 |
+
},
|
1379 |
+
"128172": {
|
1380 |
+
"content": "<|reserved_special_token_164|>",
|
1381 |
+
"lstrip": false,
|
1382 |
+
"normalized": false,
|
1383 |
+
"rstrip": false,
|
1384 |
+
"single_word": false,
|
1385 |
+
"special": true
|
1386 |
+
},
|
1387 |
+
"128173": {
|
1388 |
+
"content": "<|reserved_special_token_165|>",
|
1389 |
+
"lstrip": false,
|
1390 |
+
"normalized": false,
|
1391 |
+
"rstrip": false,
|
1392 |
+
"single_word": false,
|
1393 |
+
"special": true
|
1394 |
+
},
|
1395 |
+
"128174": {
|
1396 |
+
"content": "<|reserved_special_token_166|>",
|
1397 |
+
"lstrip": false,
|
1398 |
+
"normalized": false,
|
1399 |
+
"rstrip": false,
|
1400 |
+
"single_word": false,
|
1401 |
+
"special": true
|
1402 |
+
},
|
1403 |
+
"128175": {
|
1404 |
+
"content": "<|reserved_special_token_167|>",
|
1405 |
+
"lstrip": false,
|
1406 |
+
"normalized": false,
|
1407 |
+
"rstrip": false,
|
1408 |
+
"single_word": false,
|
1409 |
+
"special": true
|
1410 |
+
},
|
1411 |
+
"128176": {
|
1412 |
+
"content": "<|reserved_special_token_168|>",
|
1413 |
+
"lstrip": false,
|
1414 |
+
"normalized": false,
|
1415 |
+
"rstrip": false,
|
1416 |
+
"single_word": false,
|
1417 |
+
"special": true
|
1418 |
+
},
|
1419 |
+
"128177": {
|
1420 |
+
"content": "<|reserved_special_token_169|>",
|
1421 |
+
"lstrip": false,
|
1422 |
+
"normalized": false,
|
1423 |
+
"rstrip": false,
|
1424 |
+
"single_word": false,
|
1425 |
+
"special": true
|
1426 |
+
},
|
1427 |
+
"128178": {
|
1428 |
+
"content": "<|reserved_special_token_170|>",
|
1429 |
+
"lstrip": false,
|
1430 |
+
"normalized": false,
|
1431 |
+
"rstrip": false,
|
1432 |
+
"single_word": false,
|
1433 |
+
"special": true
|
1434 |
+
},
|
1435 |
+
"128179": {
|
1436 |
+
"content": "<|reserved_special_token_171|>",
|
1437 |
+
"lstrip": false,
|
1438 |
+
"normalized": false,
|
1439 |
+
"rstrip": false,
|
1440 |
+
"single_word": false,
|
1441 |
+
"special": true
|
1442 |
+
},
|
1443 |
+
"128180": {
|
1444 |
+
"content": "<|reserved_special_token_172|>",
|
1445 |
+
"lstrip": false,
|
1446 |
+
"normalized": false,
|
1447 |
+
"rstrip": false,
|
1448 |
+
"single_word": false,
|
1449 |
+
"special": true
|
1450 |
+
},
|
1451 |
+
"128181": {
|
1452 |
+
"content": "<|reserved_special_token_173|>",
|
1453 |
+
"lstrip": false,
|
1454 |
+
"normalized": false,
|
1455 |
+
"rstrip": false,
|
1456 |
+
"single_word": false,
|
1457 |
+
"special": true
|
1458 |
+
},
|
1459 |
+
"128182": {
|
1460 |
+
"content": "<|reserved_special_token_174|>",
|
1461 |
+
"lstrip": false,
|
1462 |
+
"normalized": false,
|
1463 |
+
"rstrip": false,
|
1464 |
+
"single_word": false,
|
1465 |
+
"special": true
|
1466 |
+
},
|
1467 |
+
"128183": {
|
1468 |
+
"content": "<|reserved_special_token_175|>",
|
1469 |
+
"lstrip": false,
|
1470 |
+
"normalized": false,
|
1471 |
+
"rstrip": false,
|
1472 |
+
"single_word": false,
|
1473 |
+
"special": true
|
1474 |
+
},
|
1475 |
+
"128184": {
|
1476 |
+
"content": "<|reserved_special_token_176|>",
|
1477 |
+
"lstrip": false,
|
1478 |
+
"normalized": false,
|
1479 |
+
"rstrip": false,
|
1480 |
+
"single_word": false,
|
1481 |
+
"special": true
|
1482 |
+
},
|
1483 |
+
"128185": {
|
1484 |
+
"content": "<|reserved_special_token_177|>",
|
1485 |
+
"lstrip": false,
|
1486 |
+
"normalized": false,
|
1487 |
+
"rstrip": false,
|
1488 |
+
"single_word": false,
|
1489 |
+
"special": true
|
1490 |
+
},
|
1491 |
+
"128186": {
|
1492 |
+
"content": "<|reserved_special_token_178|>",
|
1493 |
+
"lstrip": false,
|
1494 |
+
"normalized": false,
|
1495 |
+
"rstrip": false,
|
1496 |
+
"single_word": false,
|
1497 |
+
"special": true
|
1498 |
+
},
|
1499 |
+
"128187": {
|
1500 |
+
"content": "<|reserved_special_token_179|>",
|
1501 |
+
"lstrip": false,
|
1502 |
+
"normalized": false,
|
1503 |
+
"rstrip": false,
|
1504 |
+
"single_word": false,
|
1505 |
+
"special": true
|
1506 |
+
},
|
1507 |
+
"128188": {
|
1508 |
+
"content": "<|reserved_special_token_180|>",
|
1509 |
+
"lstrip": false,
|
1510 |
+
"normalized": false,
|
1511 |
+
"rstrip": false,
|
1512 |
+
"single_word": false,
|
1513 |
+
"special": true
|
1514 |
+
},
|
1515 |
+
"128189": {
|
1516 |
+
"content": "<|reserved_special_token_181|>",
|
1517 |
+
"lstrip": false,
|
1518 |
+
"normalized": false,
|
1519 |
+
"rstrip": false,
|
1520 |
+
"single_word": false,
|
1521 |
+
"special": true
|
1522 |
+
},
|
1523 |
+
"128190": {
|
1524 |
+
"content": "<|reserved_special_token_182|>",
|
1525 |
+
"lstrip": false,
|
1526 |
+
"normalized": false,
|
1527 |
+
"rstrip": false,
|
1528 |
+
"single_word": false,
|
1529 |
+
"special": true
|
1530 |
+
},
|
1531 |
+
"128191": {
|
1532 |
+
"content": "<|reserved_special_token_183|>",
|
1533 |
+
"lstrip": false,
|
1534 |
+
"normalized": false,
|
1535 |
+
"rstrip": false,
|
1536 |
+
"single_word": false,
|
1537 |
+
"special": true
|
1538 |
+
},
|
1539 |
+
"128192": {
|
1540 |
+
"content": "<|reserved_special_token_184|>",
|
1541 |
+
"lstrip": false,
|
1542 |
+
"normalized": false,
|
1543 |
+
"rstrip": false,
|
1544 |
+
"single_word": false,
|
1545 |
+
"special": true
|
1546 |
+
},
|
1547 |
+
"128193": {
|
1548 |
+
"content": "<|reserved_special_token_185|>",
|
1549 |
+
"lstrip": false,
|
1550 |
+
"normalized": false,
|
1551 |
+
"rstrip": false,
|
1552 |
+
"single_word": false,
|
1553 |
+
"special": true
|
1554 |
+
},
|
1555 |
+
"128194": {
|
1556 |
+
"content": "<|reserved_special_token_186|>",
|
1557 |
+
"lstrip": false,
|
1558 |
+
"normalized": false,
|
1559 |
+
"rstrip": false,
|
1560 |
+
"single_word": false,
|
1561 |
+
"special": true
|
1562 |
+
},
|
1563 |
+
"128195": {
|
1564 |
+
"content": "<|reserved_special_token_187|>",
|
1565 |
+
"lstrip": false,
|
1566 |
+
"normalized": false,
|
1567 |
+
"rstrip": false,
|
1568 |
+
"single_word": false,
|
1569 |
+
"special": true
|
1570 |
+
},
|
1571 |
+
"128196": {
|
1572 |
+
"content": "<|reserved_special_token_188|>",
|
1573 |
+
"lstrip": false,
|
1574 |
+
"normalized": false,
|
1575 |
+
"rstrip": false,
|
1576 |
+
"single_word": false,
|
1577 |
+
"special": true
|
1578 |
+
},
|
1579 |
+
"128197": {
|
1580 |
+
"content": "<|reserved_special_token_189|>",
|
1581 |
+
"lstrip": false,
|
1582 |
+
"normalized": false,
|
1583 |
+
"rstrip": false,
|
1584 |
+
"single_word": false,
|
1585 |
+
"special": true
|
1586 |
+
},
|
1587 |
+
"128198": {
|
1588 |
+
"content": "<|reserved_special_token_190|>",
|
1589 |
+
"lstrip": false,
|
1590 |
+
"normalized": false,
|
1591 |
+
"rstrip": false,
|
1592 |
+
"single_word": false,
|
1593 |
+
"special": true
|
1594 |
+
},
|
1595 |
+
"128199": {
|
1596 |
+
"content": "<|reserved_special_token_191|>",
|
1597 |
+
"lstrip": false,
|
1598 |
+
"normalized": false,
|
1599 |
+
"rstrip": false,
|
1600 |
+
"single_word": false,
|
1601 |
+
"special": true
|
1602 |
+
},
|
1603 |
+
"128200": {
|
1604 |
+
"content": "<|reserved_special_token_192|>",
|
1605 |
+
"lstrip": false,
|
1606 |
+
"normalized": false,
|
1607 |
+
"rstrip": false,
|
1608 |
+
"single_word": false,
|
1609 |
+
"special": true
|
1610 |
+
},
|
1611 |
+
"128201": {
|
1612 |
+
"content": "<|reserved_special_token_193|>",
|
1613 |
+
"lstrip": false,
|
1614 |
+
"normalized": false,
|
1615 |
+
"rstrip": false,
|
1616 |
+
"single_word": false,
|
1617 |
+
"special": true
|
1618 |
+
},
|
1619 |
+
"128202": {
|
1620 |
+
"content": "<|reserved_special_token_194|>",
|
1621 |
+
"lstrip": false,
|
1622 |
+
"normalized": false,
|
1623 |
+
"rstrip": false,
|
1624 |
+
"single_word": false,
|
1625 |
+
"special": true
|
1626 |
+
},
|
1627 |
+
"128203": {
|
1628 |
+
"content": "<|reserved_special_token_195|>",
|
1629 |
+
"lstrip": false,
|
1630 |
+
"normalized": false,
|
1631 |
+
"rstrip": false,
|
1632 |
+
"single_word": false,
|
1633 |
+
"special": true
|
1634 |
+
},
|
1635 |
+
"128204": {
|
1636 |
+
"content": "<|reserved_special_token_196|>",
|
1637 |
+
"lstrip": false,
|
1638 |
+
"normalized": false,
|
1639 |
+
"rstrip": false,
|
1640 |
+
"single_word": false,
|
1641 |
+
"special": true
|
1642 |
+
},
|
1643 |
+
"128205": {
|
1644 |
+
"content": "<|reserved_special_token_197|>",
|
1645 |
+
"lstrip": false,
|
1646 |
+
"normalized": false,
|
1647 |
+
"rstrip": false,
|
1648 |
+
"single_word": false,
|
1649 |
+
"special": true
|
1650 |
+
},
|
1651 |
+
"128206": {
|
1652 |
+
"content": "<|reserved_special_token_198|>",
|
1653 |
+
"lstrip": false,
|
1654 |
+
"normalized": false,
|
1655 |
+
"rstrip": false,
|
1656 |
+
"single_word": false,
|
1657 |
+
"special": true
|
1658 |
+
},
|
1659 |
+
"128207": {
|
1660 |
+
"content": "<|reserved_special_token_199|>",
|
1661 |
+
"lstrip": false,
|
1662 |
+
"normalized": false,
|
1663 |
+
"rstrip": false,
|
1664 |
+
"single_word": false,
|
1665 |
+
"special": true
|
1666 |
+
},
|
1667 |
+
"128208": {
|
1668 |
+
"content": "<|reserved_special_token_200|>",
|
1669 |
+
"lstrip": false,
|
1670 |
+
"normalized": false,
|
1671 |
+
"rstrip": false,
|
1672 |
+
"single_word": false,
|
1673 |
+
"special": true
|
1674 |
+
},
|
1675 |
+
"128209": {
|
1676 |
+
"content": "<|reserved_special_token_201|>",
|
1677 |
+
"lstrip": false,
|
1678 |
+
"normalized": false,
|
1679 |
+
"rstrip": false,
|
1680 |
+
"single_word": false,
|
1681 |
+
"special": true
|
1682 |
+
},
|
1683 |
+
"128210": {
|
1684 |
+
"content": "<|reserved_special_token_202|>",
|
1685 |
+
"lstrip": false,
|
1686 |
+
"normalized": false,
|
1687 |
+
"rstrip": false,
|
1688 |
+
"single_word": false,
|
1689 |
+
"special": true
|
1690 |
+
},
|
1691 |
+
"128211": {
|
1692 |
+
"content": "<|reserved_special_token_203|>",
|
1693 |
+
"lstrip": false,
|
1694 |
+
"normalized": false,
|
1695 |
+
"rstrip": false,
|
1696 |
+
"single_word": false,
|
1697 |
+
"special": true
|
1698 |
+
},
|
1699 |
+
"128212": {
|
1700 |
+
"content": "<|reserved_special_token_204|>",
|
1701 |
+
"lstrip": false,
|
1702 |
+
"normalized": false,
|
1703 |
+
"rstrip": false,
|
1704 |
+
"single_word": false,
|
1705 |
+
"special": true
|
1706 |
+
},
|
1707 |
+
"128213": {
|
1708 |
+
"content": "<|reserved_special_token_205|>",
|
1709 |
+
"lstrip": false,
|
1710 |
+
"normalized": false,
|
1711 |
+
"rstrip": false,
|
1712 |
+
"single_word": false,
|
1713 |
+
"special": true
|
1714 |
+
},
|
1715 |
+
"128214": {
|
1716 |
+
"content": "<|reserved_special_token_206|>",
|
1717 |
+
"lstrip": false,
|
1718 |
+
"normalized": false,
|
1719 |
+
"rstrip": false,
|
1720 |
+
"single_word": false,
|
1721 |
+
"special": true
|
1722 |
+
},
|
1723 |
+
"128215": {
|
1724 |
+
"content": "<|reserved_special_token_207|>",
|
1725 |
+
"lstrip": false,
|
1726 |
+
"normalized": false,
|
1727 |
+
"rstrip": false,
|
1728 |
+
"single_word": false,
|
1729 |
+
"special": true
|
1730 |
+
},
|
1731 |
+
"128216": {
|
1732 |
+
"content": "<|reserved_special_token_208|>",
|
1733 |
+
"lstrip": false,
|
1734 |
+
"normalized": false,
|
1735 |
+
"rstrip": false,
|
1736 |
+
"single_word": false,
|
1737 |
+
"special": true
|
1738 |
+
},
|
1739 |
+
"128217": {
|
1740 |
+
"content": "<|reserved_special_token_209|>",
|
1741 |
+
"lstrip": false,
|
1742 |
+
"normalized": false,
|
1743 |
+
"rstrip": false,
|
1744 |
+
"single_word": false,
|
1745 |
+
"special": true
|
1746 |
+
},
|
1747 |
+
"128218": {
|
1748 |
+
"content": "<|reserved_special_token_210|>",
|
1749 |
+
"lstrip": false,
|
1750 |
+
"normalized": false,
|
1751 |
+
"rstrip": false,
|
1752 |
+
"single_word": false,
|
1753 |
+
"special": true
|
1754 |
+
},
|
1755 |
+
"128219": {
|
1756 |
+
"content": "<|reserved_special_token_211|>",
|
1757 |
+
"lstrip": false,
|
1758 |
+
"normalized": false,
|
1759 |
+
"rstrip": false,
|
1760 |
+
"single_word": false,
|
1761 |
+
"special": true
|
1762 |
+
},
|
1763 |
+
"128220": {
|
1764 |
+
"content": "<|reserved_special_token_212|>",
|
1765 |
+
"lstrip": false,
|
1766 |
+
"normalized": false,
|
1767 |
+
"rstrip": false,
|
1768 |
+
"single_word": false,
|
1769 |
+
"special": true
|
1770 |
+
},
|
1771 |
+
"128221": {
|
1772 |
+
"content": "<|reserved_special_token_213|>",
|
1773 |
+
"lstrip": false,
|
1774 |
+
"normalized": false,
|
1775 |
+
"rstrip": false,
|
1776 |
+
"single_word": false,
|
1777 |
+
"special": true
|
1778 |
+
},
|
1779 |
+
"128222": {
|
1780 |
+
"content": "<|reserved_special_token_214|>",
|
1781 |
+
"lstrip": false,
|
1782 |
+
"normalized": false,
|
1783 |
+
"rstrip": false,
|
1784 |
+
"single_word": false,
|
1785 |
+
"special": true
|
1786 |
+
},
|
1787 |
+
"128223": {
|
1788 |
+
"content": "<|reserved_special_token_215|>",
|
1789 |
+
"lstrip": false,
|
1790 |
+
"normalized": false,
|
1791 |
+
"rstrip": false,
|
1792 |
+
"single_word": false,
|
1793 |
+
"special": true
|
1794 |
+
},
|
1795 |
+
"128224": {
|
1796 |
+
"content": "<|reserved_special_token_216|>",
|
1797 |
+
"lstrip": false,
|
1798 |
+
"normalized": false,
|
1799 |
+
"rstrip": false,
|
1800 |
+
"single_word": false,
|
1801 |
+
"special": true
|
1802 |
+
},
|
1803 |
+
"128225": {
|
1804 |
+
"content": "<|reserved_special_token_217|>",
|
1805 |
+
"lstrip": false,
|
1806 |
+
"normalized": false,
|
1807 |
+
"rstrip": false,
|
1808 |
+
"single_word": false,
|
1809 |
+
"special": true
|
1810 |
+
},
|
1811 |
+
"128226": {
|
1812 |
+
"content": "<|reserved_special_token_218|>",
|
1813 |
+
"lstrip": false,
|
1814 |
+
"normalized": false,
|
1815 |
+
"rstrip": false,
|
1816 |
+
"single_word": false,
|
1817 |
+
"special": true
|
1818 |
+
},
|
1819 |
+
"128227": {
|
1820 |
+
"content": "<|reserved_special_token_219|>",
|
1821 |
+
"lstrip": false,
|
1822 |
+
"normalized": false,
|
1823 |
+
"rstrip": false,
|
1824 |
+
"single_word": false,
|
1825 |
+
"special": true
|
1826 |
+
},
|
1827 |
+
"128228": {
|
1828 |
+
"content": "<|reserved_special_token_220|>",
|
1829 |
+
"lstrip": false,
|
1830 |
+
"normalized": false,
|
1831 |
+
"rstrip": false,
|
1832 |
+
"single_word": false,
|
1833 |
+
"special": true
|
1834 |
+
},
|
1835 |
+
"128229": {
|
1836 |
+
"content": "<|reserved_special_token_221|>",
|
1837 |
+
"lstrip": false,
|
1838 |
+
"normalized": false,
|
1839 |
+
"rstrip": false,
|
1840 |
+
"single_word": false,
|
1841 |
+
"special": true
|
1842 |
+
},
|
1843 |
+
"128230": {
|
1844 |
+
"content": "<|reserved_special_token_222|>",
|
1845 |
+
"lstrip": false,
|
1846 |
+
"normalized": false,
|
1847 |
+
"rstrip": false,
|
1848 |
+
"single_word": false,
|
1849 |
+
"special": true
|
1850 |
+
},
|
1851 |
+
"128231": {
|
1852 |
+
"content": "<|reserved_special_token_223|>",
|
1853 |
+
"lstrip": false,
|
1854 |
+
"normalized": false,
|
1855 |
+
"rstrip": false,
|
1856 |
+
"single_word": false,
|
1857 |
+
"special": true
|
1858 |
+
},
|
1859 |
+
"128232": {
|
1860 |
+
"content": "<|reserved_special_token_224|>",
|
1861 |
+
"lstrip": false,
|
1862 |
+
"normalized": false,
|
1863 |
+
"rstrip": false,
|
1864 |
+
"single_word": false,
|
1865 |
+
"special": true
|
1866 |
+
},
|
1867 |
+
"128233": {
|
1868 |
+
"content": "<|reserved_special_token_225|>",
|
1869 |
+
"lstrip": false,
|
1870 |
+
"normalized": false,
|
1871 |
+
"rstrip": false,
|
1872 |
+
"single_word": false,
|
1873 |
+
"special": true
|
1874 |
+
},
|
1875 |
+
"128234": {
|
1876 |
+
"content": "<|reserved_special_token_226|>",
|
1877 |
+
"lstrip": false,
|
1878 |
+
"normalized": false,
|
1879 |
+
"rstrip": false,
|
1880 |
+
"single_word": false,
|
1881 |
+
"special": true
|
1882 |
+
},
|
1883 |
+
"128235": {
|
1884 |
+
"content": "<|reserved_special_token_227|>",
|
1885 |
+
"lstrip": false,
|
1886 |
+
"normalized": false,
|
1887 |
+
"rstrip": false,
|
1888 |
+
"single_word": false,
|
1889 |
+
"special": true
|
1890 |
+
},
|
1891 |
+
"128236": {
|
1892 |
+
"content": "<|reserved_special_token_228|>",
|
1893 |
+
"lstrip": false,
|
1894 |
+
"normalized": false,
|
1895 |
+
"rstrip": false,
|
1896 |
+
"single_word": false,
|
1897 |
+
"special": true
|
1898 |
+
},
|
1899 |
+
"128237": {
|
1900 |
+
"content": "<|reserved_special_token_229|>",
|
1901 |
+
"lstrip": false,
|
1902 |
+
"normalized": false,
|
1903 |
+
"rstrip": false,
|
1904 |
+
"single_word": false,
|
1905 |
+
"special": true
|
1906 |
+
},
|
1907 |
+
"128238": {
|
1908 |
+
"content": "<|reserved_special_token_230|>",
|
1909 |
+
"lstrip": false,
|
1910 |
+
"normalized": false,
|
1911 |
+
"rstrip": false,
|
1912 |
+
"single_word": false,
|
1913 |
+
"special": true
|
1914 |
+
},
|
1915 |
+
"128239": {
|
1916 |
+
"content": "<|reserved_special_token_231|>",
|
1917 |
+
"lstrip": false,
|
1918 |
+
"normalized": false,
|
1919 |
+
"rstrip": false,
|
1920 |
+
"single_word": false,
|
1921 |
+
"special": true
|
1922 |
+
},
|
1923 |
+
"128240": {
|
1924 |
+
"content": "<|reserved_special_token_232|>",
|
1925 |
+
"lstrip": false,
|
1926 |
+
"normalized": false,
|
1927 |
+
"rstrip": false,
|
1928 |
+
"single_word": false,
|
1929 |
+
"special": true
|
1930 |
+
},
|
1931 |
+
"128241": {
|
1932 |
+
"content": "<|reserved_special_token_233|>",
|
1933 |
+
"lstrip": false,
|
1934 |
+
"normalized": false,
|
1935 |
+
"rstrip": false,
|
1936 |
+
"single_word": false,
|
1937 |
+
"special": true
|
1938 |
+
},
|
1939 |
+
"128242": {
|
1940 |
+
"content": "<|reserved_special_token_234|>",
|
1941 |
+
"lstrip": false,
|
1942 |
+
"normalized": false,
|
1943 |
+
"rstrip": false,
|
1944 |
+
"single_word": false,
|
1945 |
+
"special": true
|
1946 |
+
},
|
1947 |
+
"128243": {
|
1948 |
+
"content": "<|reserved_special_token_235|>",
|
1949 |
+
"lstrip": false,
|
1950 |
+
"normalized": false,
|
1951 |
+
"rstrip": false,
|
1952 |
+
"single_word": false,
|
1953 |
+
"special": true
|
1954 |
+
},
|
1955 |
+
"128244": {
|
1956 |
+
"content": "<|reserved_special_token_236|>",
|
1957 |
+
"lstrip": false,
|
1958 |
+
"normalized": false,
|
1959 |
+
"rstrip": false,
|
1960 |
+
"single_word": false,
|
1961 |
+
"special": true
|
1962 |
+
},
|
1963 |
+
"128245": {
|
1964 |
+
"content": "<|reserved_special_token_237|>",
|
1965 |
+
"lstrip": false,
|
1966 |
+
"normalized": false,
|
1967 |
+
"rstrip": false,
|
1968 |
+
"single_word": false,
|
1969 |
+
"special": true
|
1970 |
+
},
|
1971 |
+
"128246": {
|
1972 |
+
"content": "<|reserved_special_token_238|>",
|
1973 |
+
"lstrip": false,
|
1974 |
+
"normalized": false,
|
1975 |
+
"rstrip": false,
|
1976 |
+
"single_word": false,
|
1977 |
+
"special": true
|
1978 |
+
},
|
1979 |
+
"128247": {
|
1980 |
+
"content": "<|reserved_special_token_239|>",
|
1981 |
+
"lstrip": false,
|
1982 |
+
"normalized": false,
|
1983 |
+
"rstrip": false,
|
1984 |
+
"single_word": false,
|
1985 |
+
"special": true
|
1986 |
+
},
|
1987 |
+
"128248": {
|
1988 |
+
"content": "<|reserved_special_token_240|>",
|
1989 |
+
"lstrip": false,
|
1990 |
+
"normalized": false,
|
1991 |
+
"rstrip": false,
|
1992 |
+
"single_word": false,
|
1993 |
+
"special": true
|
1994 |
+
},
|
1995 |
+
"128249": {
|
1996 |
+
"content": "<|reserved_special_token_241|>",
|
1997 |
+
"lstrip": false,
|
1998 |
+
"normalized": false,
|
1999 |
+
"rstrip": false,
|
2000 |
+
"single_word": false,
|
2001 |
+
"special": true
|
2002 |
+
},
|
2003 |
+
"128250": {
|
2004 |
+
"content": "<|reserved_special_token_242|>",
|
2005 |
+
"lstrip": false,
|
2006 |
+
"normalized": false,
|
2007 |
+
"rstrip": false,
|
2008 |
+
"single_word": false,
|
2009 |
+
"special": true
|
2010 |
+
},
|
2011 |
+
"128251": {
|
2012 |
+
"content": "<|reserved_special_token_243|>",
|
2013 |
+
"lstrip": false,
|
2014 |
+
"normalized": false,
|
2015 |
+
"rstrip": false,
|
2016 |
+
"single_word": false,
|
2017 |
+
"special": true
|
2018 |
+
},
|
2019 |
+
"128252": {
|
2020 |
+
"content": "<|reserved_special_token_244|>",
|
2021 |
+
"lstrip": false,
|
2022 |
+
"normalized": false,
|
2023 |
+
"rstrip": false,
|
2024 |
+
"single_word": false,
|
2025 |
+
"special": true
|
2026 |
+
},
|
2027 |
+
"128253": {
|
2028 |
+
"content": "<|reserved_special_token_245|>",
|
2029 |
+
"lstrip": false,
|
2030 |
+
"normalized": false,
|
2031 |
+
"rstrip": false,
|
2032 |
+
"single_word": false,
|
2033 |
+
"special": true
|
2034 |
+
},
|
2035 |
+
"128254": {
|
2036 |
+
"content": "<|reserved_special_token_246|>",
|
2037 |
+
"lstrip": false,
|
2038 |
+
"normalized": false,
|
2039 |
+
"rstrip": false,
|
2040 |
+
"single_word": false,
|
2041 |
+
"special": true
|
2042 |
+
},
|
2043 |
+
"128255": {
|
2044 |
+
"content": "<|reserved_special_token_247|>",
|
2045 |
+
"lstrip": false,
|
2046 |
+
"normalized": false,
|
2047 |
+
"rstrip": false,
|
2048 |
+
"single_word": false,
|
2049 |
+
"special": true
|
2050 |
+
}
|
2051 |
+
},
|
2052 |
+
"bos_token": "<|begin_of_text|>",
|
2053 |
+
"chat_template": "{{- bos_token }}\n{%- if custom_tools is defined %}\n {%- set tools = custom_tools %}\n{%- endif %}\n{%- if not tools_in_user_message is defined %}\n {%- set tools_in_user_message = true %}\n{%- endif %}\n{%- if not date_string is defined %}\n {%- if strftime_now is defined %}\n {%- set date_string = strftime_now(\"%d %b %Y\") %}\n {%- else %}\n {%- set date_string = \"26 Jul 2024\" %}\n {%- endif %}\n{%- endif %}\n{%- if not tools is defined %}\n {%- set tools = none %}\n{%- endif %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n{%- else %}\n {%- set system_message = \"\" %}\n{%- endif %}\n\n{#- System message #}\n{{- \"<|start_header_id|>system<|end_header_id|>\\n\\n\" }}\n{%- if tools is not none %}\n {{- \"Environment: ipython\\n\" }}\n{%- endif %}\n{{- \"Cutting Knowledge Date: December 2023\\n\" }}\n{{- \"Today Date: \" + date_string + \"\\n\\n\" }}\n{%- if tools is not none and not tools_in_user_message %}\n {{- \"You have access to the following functions. To call a function, please respond with JSON for a function call.\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' }}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n{%- endif %}\n{{- system_message }}\n{{- \"<|eot_id|>\" }}\n\n{#- Custom tools are passed in a user message with some extra guidance #}\n{%- if tools_in_user_message and not tools is none %}\n {#- Extract the first user message so we can plug it in here #}\n {%- if messages | length != 0 %}\n {%- set first_user_message = messages[0]['content']|trim %}\n {%- set messages = messages[1:] %}\n {%- else %}\n {{- raise_exception(\"Cannot put tools in the first user message when there's no first user message!\") }}\n{%- endif %}\n {{- '<|start_header_id|>user<|end_header_id|>\\n\\n' -}}\n {{- \"Given the following functions, please respond with a JSON for a function call \" }}\n {{- \"with its proper arguments that best answers the given prompt.\\n\\n\" }}\n {{- 'Respond in the format {\"name\": function name, \"parameters\": dictionary of argument name and its value}.' 
}}\n {{- \"Do not use variables.\\n\\n\" }}\n {%- for t in tools %}\n {{- t | tojson(indent=4) }}\n {{- \"\\n\\n\" }}\n {%- endfor %}\n {{- first_user_message + \"<|eot_id|>\"}}\n{%- endif %}\n\n{%- for message in messages %}\n {%- if not (message.role == 'ipython' or message.role == 'tool' or 'tool_calls' in message) %}\n {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' }}\n {%- elif 'tool_calls' in message %}\n {%- if not message.tool_calls|length == 1 %}\n {{- raise_exception(\"This model only supports single tool-calls at once!\") }}\n {%- endif %}\n {%- set tool_call = message.tool_calls[0].function %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' -}}\n {{- '{\"name\": \"' + tool_call.name + '\", ' }}\n {{- '\"parameters\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- \"}\" }}\n {{- \"<|eot_id|>\" }}\n {%- elif message.role == \"tool\" or message.role == \"ipython\" %}\n {{- \"<|start_header_id|>ipython<|end_header_id|>\\n\\n\" }}\n {%- if message.content is mapping or message.content is iterable %}\n {{- message.content | tojson }}\n {%- else %}\n {{- message.content }}\n {%- endif %}\n {{- \"<|eot_id|>\" }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}\n",
|
2054 |
+
"clean_up_tokenization_spaces": true,
|
2055 |
+
"eos_token": "<|eot_id|>",
|
2056 |
+
"model_input_names": [
|
2057 |
+
"input_ids",
|
2058 |
+
"attention_mask"
|
2059 |
+
],
|
2060 |
+
"model_max_length": 131072,
|
2061 |
+
"tokenizer_class": "PreTrainedTokenizerFast"
|
2062 |
+
}
|
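The `chat_template` field above is the Jinja template that formats conversations with the `<|begin_of_text|>`, header, and `<|eot_id|>` tokens defined in this file. As a minimal sketch of how this config is consumed (assuming the standard Hugging Face `transformers` `AutoTokenizer` API; the model directory path is illustrative):

```python
# Minimal sketch: load the tokenizer from the cloned model directory
# (which contains tokenizer.json and this tokenizer_config.json) and
# render a conversation with the chat template defined above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # illustrative: path to the cloned repo

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# apply_chat_template renders the "chat_template" field; add_generation_prompt=True
# appends the trailing <|start_header_id|>assistant<|end_header_id|> so generation
# can begin immediately after it.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # starts with the bos_token <|begin_of_text|>; each turn ends with <|eot_id|>
```

Generation stops when the model emits `<|eot_id|>`, which this config also registers as the `eos_token`.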