---
base_model: meta-llama/Llama-3.2-3B
library_name: peft
license: llama3.2
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: code-knowledge-eval
results: []
---
# Llama-3.2-3B-Code-Knowledge-Value-Eval-lora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on the [kimsan0622/code-knowledge-eval](https://huggingface.co/datasets/kimsan0622/code-knowledge-eval) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9173
- Accuracy: 0.5945
## **Model Description**
This model is trained on the **Code Knowledge Value Evaluation Dataset** to assess the educational and knowledge value of code snippets, assigning each snippet a score from 0 (lowest value) to 5 (highest value). It leverages patterns and contextual information from a large collection of open-source code sourced from the `bigcode/the-stack` repository, and evaluates how useful a given sample is for teaching coding concepts, solving problems, and improving developer education.
The model focuses on the structure, syntax, and logic of various programming languages, which lets it gauge the learning potential and technical depth of different code samples. The dataset comprises 22,786 training samples, 4,555 validation samples, and 18,232 test samples, helping the model generalize across a range of coding contexts.
## **Intended Uses & Limitations**
### **Intended Uses**:
1. **Automated Code Review**: The model can be applied in automated systems to assess the knowledge value of code during code review processes. It can help identify areas where code could be optimized for better readability, maintainability, and educational impact.
2. **Educational Feedback**: For instructors and educational platforms, the model can offer feedback on the effectiveness of code samples used in teaching, helping to improve curriculum materials and select code that best conveys core programming concepts.
3. **Curriculum Development**: The model can aid in designing coding courses or instructional materials by suggesting code examples that have higher educational value, supporting a more effective learning experience.
4. **Technical Skill Assessment**: Organizations or platforms can use the model to assess the complexity and educational value of code submissions in coding challenges or exams.
### **Limitations**:
1. **Narrow Scope in Knowledge Evaluation**: The model is specialized in evaluating code from an educational standpoint, focusing primarily on learning potential rather than production-level code quality (e.g., performance optimization or security).
2. **Language and Domain Limitations**: Since the dataset is sourced from `bigcode/the-stack`, it may not cover all programming languages or specialized domains. The model may perform less effectively in underrepresented languages or niche coding styles not well-represented in the dataset.
3. **Not Suitable for All Educational Levels**: While the model is designed to evaluate code for educational purposes, its outputs may be better suited for certain levels (e.g., beginner or intermediate coding), and its recommendations might not fully cater to advanced or highly specialized learners.
## How to use this model?
```python
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel, PeftConfig

# Define the model name or path for loading the tokenizer and the LoRA fine-tuned model
model_name_or_path = "kimsan0622/Llama-3.2-3B-Code-Knowledge-Value-Eval-lora"

# Load the PEFT (Parameter-Efficient Fine-Tuning) configuration from the pretrained adapter
config = PeftConfig.from_pretrained(model_name_or_path)

# Load the base model for sequence classification, set up for 6 possible labels (scores 0-5)
inference_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,          # Base model path
    device_map="cuda:0",                     # Use the first CUDA device for inference
    label2id={str(k): k for k in range(6)},  # Map label names ("0"-"5") to IDs
    id2label={k: str(k) for k in range(6)},  # Map label IDs to names ("0"-"5")
    num_labels=6,                            # Number of classification labels (0 to 5)
)

# Load the tokenizer for the base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Set the padding token if it is not already defined, falling back to the EOS token
if not tokenizer.pad_token_id:
    tokenizer.pad_token_id = tokenizer.eos_token_id
    inference_model.config.pad_token_id = inference_model.config.eos_token_id

# Load the PEFT model by applying the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(inference_model, model_name_or_path)

# Sample code input to evaluate (here, this usage script itself)
code = [
"""
import torch
import numpy as np
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel, PeftConfig

# Define the model name or path for loading the tokenizer and the LoRA fine-tuned model
model_name_or_path = "kimsan0622/Llama-3.2-3B-Code-Knowledge-Value-Eval-lora"

# Load the PEFT (Parameter-Efficient Fine-Tuning) configuration from the pretrained adapter
config = PeftConfig.from_pretrained(model_name_or_path)

# Load the base model for sequence classification, set up for 6 possible labels (scores 0-5)
inference_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,
    device_map="cuda:0",
    label2id={str(k): k for k in range(6)},
    id2label={k: str(k) for k in range(6)},
    num_labels=6,
)

# Load the tokenizer for the base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Set the padding token if it is not already defined, falling back to the EOS token
if not tokenizer.pad_token_id:
    tokenizer.pad_token_id = tokenizer.eos_token_id
    inference_model.config.pad_token_id = inference_model.config.eos_token_id

# Load the PEFT model by applying the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(inference_model, model_name_or_path)

# Sample code input to evaluate
code = ["code"]

# Tokenize the input code with padding and truncation up to 1,024 tokens
batch = tokenizer(code, max_length=1024, padding=True, truncation=True, return_tensors="pt")

# Perform inference without computing gradients for faster processing
with torch.no_grad():
    res = model(
        input_ids=batch["input_ids"].to("cuda:0"),
        attention_mask=batch["attention_mask"].to("cuda:0"),
    )

# Move the logits to the CPU and convert them to a numpy array
preds = res.logits.cpu().numpy()

# Get the predicted label by taking the argmax of the logits
preds = np.argmax(preds, axis=1).tolist()

# Print the predicted labels
print(preds)
"""
]

# Tokenize the input code with padding and truncation up to 1,024 tokens
batch = tokenizer(code, max_length=1024, padding=True, truncation=True, return_tensors="pt")

# Perform inference without computing gradients for faster processing
with torch.no_grad():
    # Pass the input IDs and attention mask to the model for prediction
    res = model(
        input_ids=batch["input_ids"].to("cuda:0"),
        attention_mask=batch["attention_mask"].to("cuda:0"),
    )

# Move the logits to the CPU and convert them to a numpy array
preds = res.logits.cpu().numpy()

# Get the predicted label (score 0-5) by taking the argmax of the logits
preds = np.argmax(preds, axis=1).tolist()

# Print the predicted labels
print(preds)
```
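To score many snippets at once, you can batch over the dataset itself. The sketch below reuses the `model` and `tokenizer` objects loaded above and assumes the test split of `kimsan0622/code-knowledge-eval` exposes its code snippets in a `text` column; check the dataset card and adjust the column name if it differs.

```python
from datasets import load_dataset

# Load the test split of the evaluation dataset (split name assumed to be "test")
ds = load_dataset("kimsan0622/code-knowledge-eval", split="test")

def score_batch(examples):
    # NOTE: "text" is an assumed column name for the code snippets
    batch = tokenizer(
        examples["text"], max_length=1024, padding=True, truncation=True, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(
            input_ids=batch["input_ids"].to("cuda:0"),
            attention_mask=batch["attention_mask"].to("cuda:0"),
        ).logits
    # One predicted score (0-5) per snippet in the mini-batch
    return {"pred": logits.argmax(dim=-1).cpu().tolist()}

# Score the split in mini-batches of 16 and attach the predictions as a new column
scored = ds.map(score_batch, batched=True, batch_size=16)
print(scored["pred"][:10])
```

`scored["pred"]` then holds one predicted score (0 to 5) per snippet.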
### 8-bit quantization
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import PeftModel, PeftConfig

# Define the model name or path for loading the LoRA fine-tuned adapter
model_name_or_path = "kimsan0622/Llama-3.2-3B-Code-Knowledge-Value-Eval-lora"

# Configure the model to load in 8-bit precision to reduce memory usage
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

# Load the PEFT (Parameter-Efficient Fine-Tuning) configuration from the pre-trained adapter
config = PeftConfig.from_pretrained(model_name_or_path)

# Load the base model for sequence classification with 8-bit quantization on the first CUDA device
inference_model = AutoModelForSequenceClassification.from_pretrained(
    config.base_model_name_or_path,          # Base model path from the PEFT config
    quantization_config=bnb_config,          # Apply 8-bit quantization for memory efficiency
    device_map="cuda:0",                     # Map the model to the first CUDA device
    label2id={str(k): k for k in range(6)},  # Map label names ("0"-"5") to label IDs
    id2label={k: str(k) for k in range(6)},  # Map label IDs to label names ("0"-"5")
    num_labels=6,                            # Number of classification labels
)

# Load the tokenizer associated with the base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Set the padding token if it is not defined, using the EOS token as the fallback
if not tokenizer.pad_token_id:
    tokenizer.pad_token_id = tokenizer.eos_token_id
    inference_model.config.pad_token_id = inference_model.config.eos_token_id

# Load the PEFT model by applying the LoRA (Low-Rank Adaptation) adapter on top of the base model
model = PeftModel.from_pretrained(inference_model, model_name_or_path)
```
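Apart from the quantization config, usage is identical to the full-precision example above. As the test-set results below show, 8-bit loading leaves overall accuracy essentially unchanged (0.58 in both settings), so it is a reasonable default when GPU memory is limited.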
## Training and evaluation data
The model was fine-tuned and evaluated on the [kimsan0622/code-knowledge-eval](https://huggingface.co/datasets/kimsan0622/code-knowledge-eval) dataset (22,786 train / 4,555 validation / 18,232 test samples).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
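The total train batch size of 128 is the product of the per-device batch size (2), the number of devices (8), and the gradient accumulation steps (8).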
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.9527 | 0.9993 | 178 | 0.9795 | 0.5568 |
| 0.8867 | 1.9986 | 356 | 0.9173 | 0.5945 |
| 0.7937 | 2.9979 | 534 | 0.9297 | 0.5982 |
| 0.6486 | 3.9972 | 712 | 1.0171 | 0.5932 |
| 0.5555 | 4.9965 | 890 | 1.0441 | 0.5945 |
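The reported evaluation metrics (loss 0.9173, accuracy 0.5945) correspond to the epoch-2 checkpoint, which has the lowest validation loss; training loss keeps falling in later epochs while validation loss rises, indicating overfitting.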
### Framework versions
- PEFT 0.11.1
- Transformers 4.44.2
- Pytorch 2.3.0
- Datasets 2.20.0
- Tokenizers 0.19.1
## Test set results
### Confusion matrix
| y_true |**pred_0**|**pred_1**|**pred_2**|**pred_3**|**pred_4**|**pred_5**|
|-------|-------|-------|-------|-------|-------|-------|
| 0 | 1112 | 155 | 75 | 35 | 0 | 0 |
| 1 | 413 | 327 | 274 | 226 | 3 | 1 |
| 2 | 151 | 207 | 433 | 941 | 37 | 5 |
| 3 | 71 | 79 | 298 | 3490 | 928 | 59 |
| 4 | 14 | 1 | 24 | 1735 | 3334 | 1216 |
| 5 | 1 | 0 | 2 | 77 | 591 | 1917 |
### Classification reports
| class | **precision** | **recall** | **f1-score** | **support** |
|:-------------:|:-------------:|:----------:|:------------:|:-----------:|
| 0 | 0.63 | 0.81 | 0.71 | 1377 |
| 1 | 0.43 | 0.26 | 0.32 | 1244 |
| 2 | 0.39 | 0.24 | 0.30 | 1774 |
| 3 | 0.54 | 0.71 | 0.61 | 4925 |
| 4 | 0.68 | 0.53 | 0.59 | 6324 |
| 5 | 0.60 | 0.74 | 0.66 | 2588 |
| **accuracy** | | | 0.58 | 18232 |
| **macro avg**| 0.54 | 0.55 | 0.53 | 18232 |
| **weighted avg** | 0.58 | 0.58 | 0.57 | 18232 |
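Metrics in this form can be reproduced from the model's test-set predictions with scikit-learn. A minimal sketch, assuming `y_true` holds the gold labels and `preds` the predicted labels collected as in the usage example above:

```python
from sklearn.metrics import classification_report, confusion_matrix

# y_true: gold labels from the test split; preds: predicted labels (0-5)
print(confusion_matrix(y_true, preds, labels=list(range(6))))
print(classification_report(y_true, preds, labels=list(range(6)), digits=2))
```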
## Test set results (8-bit quantization)
### Confusion matrix
| y_true |**pred_0**|**pred_1**|**pred_2**|**pred_3**|**pred_4**|**pred_5**|
|-------|-------|-------|-------|-------|-------|-------|
| 0 | 1107 | 169 | 71 | 29 | 1 | 0 |
| 1 | 396 | 359 | 273 | 213 | 2 | 1 |
| 2 | 143 | 234 | 418 | 938 | 36 | 5 |
| 3 | 66 | 97 | 301 | 3516 | 888 | 57 |
| 4 | 12 | 3 | 24 | 1799 | 3315 | 1171 |
| 5 | 1 | 0 | 2 | 81 | 610 | 1894 |
### Classification reports
| class | **precision** | **recall** | **f1-score** | **support** |
|:-------------:|:-------------:|:----------:|:------------:|:-----------:|
| 0 | 0.64 | 0.80 | 0.71 | 1377 |
| 1 | 0.42 | 0.29 | 0.34 | 1244 |
| 2 | 0.38 | 0.24 | 0.29 | 1774 |
| 3 | 0.53 | 0.71 | 0.61 | 4925 |
| 4 | 0.68 | 0.52 | 0.59 | 6324 |
| 5 | 0.61 | 0.73 | 0.66 | 2588 |
| **accuracy** | | | 0.58 | 18232 |
| **macro avg**| 0.54 | 0.55 | 0.54 | 18232 |
| **weighted avg** | 0.58 | 0.58 | 0.57 | 18232 |