## Credits and Acknowledgments
TURBOPASTA is built upon the excellent work of [Fast Apply](https://github.com/kortix-ai/fast-apply) by Kortix AI. Our model leverages their dataset and builds on their pioneering approach to code merging and transformation. Key inspirations include:
- Dataset structure and generation methodology
- XML-based prompt engineering approach
- Evaluation metrics and benchmarking approaches
Special thanks to:
- The Kortix AI team for open-sourcing Fast Apply
- Their foundational work on high-speed code transformation models
- The comprehensive dataset they've made available to the community
While TURBOPASTA introduces its own innovations, the groundwork laid by Fast Apply was instrumental in making this project possible. We encourage users interested in code transformation models to also check out the original Fast Apply models:
- [FastApply-7B-v1.0](https://huggingface.co/Kortix/FastApply-7B-v1.0)
- [FastApply-1.5B-v1.0](https://huggingface.co/Kortix/FastApply-1.5B-v1.0)
- [FastApply-dataset-v1.0](https://huggingface.co/datasets/Kortix/FastApply-dataset-v1.0)
This project is licensed under Apache-2.0, consistent with Fast Apply's open-source ethos.

---
Based on a dataset inspired by https://www.kortix.ai/
# TURBOPASTA LoRA Adapter for Qwen2.5-3B
A LoRA adapter for unsloth/Qwen2.5-3B that merges code updates using chain-of-thought reasoning while strictly preserving the original code's structure, order, comments, and formatting.
## Technical Specifications
### Base Model
- Model: unsloth/Qwen2.5-3B
- LoRA Rank: 64
- Target Modules: v_proj, o_proj, down_proj, up_proj, q_proj, k_proj, gate_proj
- Task: CAUSAL_LM
- Dropout: 0
- Alpha: 32
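For local (non-vLLM) use, the adapter can be attached to the base model with `transformers` and `peft`. A minimal sketch, assuming the adapter lives at the repo path used in the deployment section below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2.5-3B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-3B")

# Attach the rank-64 LoRA adapter (path from the vLLM example below).
model = PeftModel.from_pretrained(base, "./dataset/output/turbopasta/lora_model")
```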
### Input/Output Format
Input XML structure:
```xml
<instruction>You are a coding assistant that helps merge code updates, ensuring every modification is fully integrated. Merge all changes from the snippet into the code. Preserve the code's structure, order, comments, and indentation exactly.</instruction>
<fastapply>
<code>
{original_code}
</code>
<update>
{update_snippet}
</update>
<finalcode>
{merged_result}
</finalcode>
</fastapply>
```
The model supports multiple `<fastapply>` blocks for few-shot in-context learning; set `</fastapply>` as the stop token.
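For example, a two-shot prompt prepends one fully completed `<fastapply>` block before the open-ended one. The snippet below is illustrative, not taken from the training data:
```xml
<instruction>...</instruction>
<fastapply>
<code>
def greet():
    print("hi")
</code>
<update>
def greet():
    print("hello, world")
</update>
<finalcode>
def greet():
    print("hello, world")
</finalcode>
</fastapply>
<fastapply>
<code>
{original_code}
</code>
<update>
{update_snippet}
</update>
```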
## Deployment
### VLLM Server Setup
```bash
export VLLM_ALLOW_RUNTIME_LORA_UPDATING=1
export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
vllm serve unsloth/qwen2.5-3b \
--gpu-memory-utilization=1 \
--port 6002 \
--served-model-name="turbopasta" \
--trust-remote-code \
--max-model-len 8192 \
--disable-log-requests \
--enable-lora \
--lora-modules lora=./dataset/output/turbopasta/lora_model \
--max-lora-rank 64
```
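Once the server is up, you can confirm both the base model and the adapter are registered via vLLM's standard OpenAI-compatible models endpoint (generic vLLM behavior, not specific to this adapter):
```bash
curl http://localhost:6002/v1/models
# The response should list "turbopasta" (base) and "lora" (the adapter).
```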
### Client Implementation
```python
import re
import requests

INSTRUCTION = (
    "You are a coding assistant that helps merge code updates, ensuring every "
    "modification is fully integrated. Merge all changes from the snippet into "
    "the code. Preserve the code's structure, order, comments, and indentation "
    "exactly."
)

def merge_code(
    original_code: str,
    update_snippet: str,
    vllm_url: str = "http://localhost:6002/v1/completions",
) -> dict:
    # Build the prompt; the model completes it with <finalcode>...</finalcode>
    # and generation stops at </fastapply>.
    prompt = (
        f"<instruction>{INSTRUCTION}</instruction>\n"
        "<fastapply>\n"
        "  <code>\n"
        f"{original_code}\n"
        "  </code>\n"
        "  <update>\n"
        f"{update_snippet}\n"
        "  </update>"
    )
    response = requests.post(
        vllm_url,
        json={
            "prompt": prompt,
            "max_tokens": 6000,
            "temperature": 0.1,
            "model": "lora",
            "stop": ["</fastapply>"],
        },
        timeout=300,  # requests timeouts are in seconds
    )
    response.raise_for_status()
    completion = response.json()["choices"][0]["text"]

    def extract_tag(tag: str) -> str:
        match = re.search(f"<{tag}>(.*?)</{tag}>", completion, re.DOTALL)
        return match.group(1).strip() if match else ""

    return {"merged_code": extract_tag("finalcode")}
```
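Example usage (the inputs are illustrative):
```python
result = merge_code(
    original_code='def greet():\n    print("hi")',
    update_snippet='def greet():\n    print("hello, world")',
)
print(result["merged_code"])
```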
### Batch Processing
The model ships with a request processor for parallel batch processing of code updates:
```python
from request_processor import RequestProcessor
processor = RequestProcessor(
input_file="updates.jsonl",
output_file="merged.jsonl",
num_threads=24
)
processor.process_file()
```
Input JSONL format (one JSON object per line; pretty-printed here for readability):
```json
{
"id": "update_id",
"original_code": "...",
"update_snippet": "...",
"file_path": "path/to/file"
}
```
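Since JSONL stores one object per line, input files can be produced with one `json.dumps` call per record. A minimal sketch:
```python
import json

records = [
    {
        "id": "update_id",
        "original_code": "...",
        "update_snippet": "...",
        "file_path": "path/to/file",
    }
]

with open("updates.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```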
Output JSONL format:
```json
{
"id": "update_id",
"original_code": "...",
"update_snippet": "...",
"merged_code": "...",
"file_path": "path/to/file",
"processed_at": "2024-10-24 02:52:33"
}
```
## Implementation and Performance Considerations
- Uses thread pooling for parallel processing (a minimal sketch of the pattern follows this list)
- Atomic writes with file locking, so concurrent workers never interleave output lines
- Progress tracking with tqdm
- Automatic error handling and logging
- Configurable thread count for throughput tuning
- Temperature set to 0.1 for consistent, near-deterministic merges
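A minimal sketch of the thread-pool-plus-lock pattern described above, reusing the `merge_code` client from this README rather than the actual `RequestProcessor` internals:
```python
import json
import threading
from concurrent.futures import ThreadPoolExecutor

write_lock = threading.Lock()

def process_record(record: dict, output_path: str) -> None:
    try:
        record["merged_code"] = merge_code(
            record["original_code"], record["update_snippet"]
        )["merged_code"]
    except Exception as exc:
        record["error"] = str(exc)  # errors land in the output, as shown below
    with write_lock:  # serialize appends so JSONL lines never interleave
        with open(output_path, "a") as out:
            out.write(json.dumps(record) + "\n")

with open("updates.jsonl") as f, ThreadPoolExecutor(max_workers=24) as pool:
    for line in f:
        pool.submit(process_record, json.loads(line), "merged.jsonl")
```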
## Error Handling
Errors are captured in the output JSONL:
```json
{
"error": "error message",
"processed_at": "timestamp"
}
```
Monitor errors in real-time:
```bash
tail -f merged.jsonl | grep error
```
## Model Training Details
This model was trained using Force Multiplier's autotuning pipeline with the following key characteristics:
- Base Model: unsloth/Qwen2.5-3B
- Training Type: Few-shot learning with chain-of-thought reasoning
- Special Focus: Code structure preservation and merge accuracy
- LoRA Configuration: Optimized for code understanding and generation
## Limitations
- Maximum context length of 8192 tokens (a pre-flight length check is sketched after this list)
- Best suited for single-file code changes
- May require multiple passes for complex refactoring
- Not recommended for binary file merges
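Given the 8192-token limit and the client's `max_tokens=6000` completion budget, a quick pre-flight check helps avoid truncated merges. A sketch, assuming the base model's tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-3B")

def fits_context(prompt: str, max_len: int = 8192, completion_budget: int = 6000) -> bool:
    # Prompt tokens plus the completion budget must fit in the context window.
    return len(tokenizer.encode(prompt)) + completion_budget <= max_len
```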