Update README.md
README.md
CHANGED
@@ -2,9 +2,33 @@
-#

@@ -20,7 +44,7 @@ The model was trained on one A100 GPU with following hyperparameters:
-For this model we used 15K exmaples of Kotlin Exercices dataset. For more information about the dataset follow th link.

@@ -28,10 +52,7 @@ To evaluate we used Kotlin Humaneval (more infromation here)
-**Kotlin
-**Kotlin Humaneval: 26.89**
-**Kotlin Compleation: 0.388**
license: apache-2.0
---

# Kexer models
Kexer is a collection of open-source generative text models fine-tuned on the Kotlin Exercises dataset.
This is the repository for the fine-tuned CodeLlama-7b model in the Hugging Face Transformers format.

# Model use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load pre-trained model and tokenizer
model_name = 'JetBrains/CodeLlama-7B-Kexer' # Replace with the desired model name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).cuda()

# Encode input text
input_text = """This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {"""
input_ids = tokenizer.encode(input_text, return_tensors='pt').to('cuda')

# Generate text
output = model.generate(input_ids, max_length=150, num_return_sequences=1, no_repeat_ngram_size=2, early_stopping=True)

# Decode and print the generated text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
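If you prefer the higher-level API, the same prompt can also be run through the `pipeline` helper. This is a minimal sketch under the same assumptions as the snippet above (a CUDA device, the `JetBrains/CodeLlama-7B-Kexer` checkpoint); the decoding settings here are illustrative, not values taken from this card.

```python
from transformers import pipeline

# Minimal sketch using the high-level pipeline API (assumes a CUDA device is available).
generator = pipeline(
    'text-generation',
    model='JetBrains/CodeLlama-7B-Kexer',
    device=0,  # use device=-1 to run on CPU instead
)

prompt = """This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {"""

# max_new_tokens and greedy decoding are illustrative choices, not settings from this card.
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]['generated_text'])
```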
# Training setup
# Fine-tuning data
For this model, we used 15K examples from the Kotlin Exercises dataset {TODO: link!}. For more information about the dataset, follow the link.
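To inspect the fine-tuning data yourself, it can be loaded with the `datasets` library once the dataset link above is filled in. The Hub id `JetBrains/KExercises` used below is an assumption, not taken from this card; substitute the id from the link when it is added.

```python
from datasets import load_dataset

# The Hub id below is a placeholder/assumption; replace it with the id from the
# dataset link once it is added to this README.
kexercises = load_dataset('JetBrains/KExercises', split='train')

print(kexercises)      # shows the number of rows (~15K) and the column names
print(kexercises[0])   # inspect a single fine-tuning example
```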
# Evaluation
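The context line in the diff above notes that evaluation uses Kotlin HumanEval. As a rough intuition for the pass-rate column reported below: it is the share of benchmark problems whose generated solution passes all of that problem's tests. The helper below is a hypothetical illustration of that aggregation only, not the actual evaluation harness.

```python
# Hypothetical illustration of how a pass rate is aggregated; the real Kotlin HumanEval
# harness compiles and runs the generated Kotlin solutions against each problem's tests.
def pass_rate(per_problem_passed: list[bool]) -> float:
    return 100.0 * sum(per_problem_passed) / len(per_problem_passed)

# Toy example with four problems, three of them solved:
print(pass_rate([True, True, False, True]))  # 75.0
```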
Results for the base model and the fine-tuned model:

| **Model name** | **Kotlin HumanEval Pass Rate (%)** | **Kotlin Completion** |
|:---------------------------:|:----------------------------------------:|:----------------------------------------:|
| `base model` | 26.89 | 0.388 |
| `fine-tuned model` | 42.24 | 0.344 |