---
license: apache-2.0
datasets:
- JetBrains/KStack
results:
- task:
    type: text-generation
  dataset:
    name: MultiPL-HumanEval (Kotlin)
    type: openai_humaneval
  metrics:
  - name: pass@1
    type: pass@1
    value: 29.19
tags:
- code
---

# KStack-full models

KStack-full models are a collection of open-source generative text models fine-tuned on the KStack dataset with rule-based filtering. This is the repository for the fine-tuned CodeLlama-7B model in the Hugging Face Transformers format.

# Model use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-trained model and tokenizer
model_name = 'JetBrains/CodeLlama-7B-KStack-full'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')

# Create and encode the input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
    input_text, return_tensors='pt'
).to('cuda')

# Generate
output = model.generate(
    input_ids,
    max_length=60,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode the output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```

As with the base model, we can use fill-in-the-middle (FIM). To do this, the following format must be used:

```
'<PRE> ' + prefix + ' <SUF>' + suffix + ' <MID>'
```
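For example, here is a minimal FIM sketch that reuses the `model` and `tokenizer` loaded above; the `prefix`/`suffix` Kotlin snippet and the variable names are illustrative assumptions, not part of the training data:

```python
# Hypothetical FIM example: ask the model to infill the body of a
# simple Kotlin function. <PRE>, <SUF> and <MID> are CodeLlama's
# infilling tokens; the surrounding Kotlin code is only an illustration.
prefix = "fun sum(a: Int, b: Int): Int {\n"
suffix = "\n}"
fim_input = '<PRE> ' + prefix + ' <SUF>' + suffix + ' <MID>'

fim_ids = tokenizer.encode(fim_input, return_tensors='pt').to('cuda')
fim_output = model.generate(
    fim_ids,
    max_length=60,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(fim_output[0], skip_special_tokens=True))
```

The tokens generated after `<MID>` correspond to the infilled middle part, i.e. the function body between `prefix` and `suffix`.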
# Training setup

The model was trained on one A100 GPU with the following hyperparameters:

| **Hyperparameter**  | **Value**                  |
|:-------------------:|:--------------------------:|
| `warmup`            | 5%                         |
| `max_lr`            | 1e-6                       |
| `num_epochs`        | 1                          |
| `attention_dropout` | 0.1                        |
| `scheduler`         | cosine                     |
| `total_batch_size`  | 128 (~65K tokens per step) |

More details about fine-tuning can be found in the technical report.

# Data filtering

To increase the quality of the dataset and filter out statistical outliers such as homework assignments, we filter out dataset entries according to the following rules:

* We filter out files which belong to low-popularity repositories (the sum of stars and forks is less than 6)
* Next, we filter out files which belong to repositories with fewer than 5 Kotlin files
* Finally, we remove files which have fewer than 20 SLOC

We clean the content of the remaining dataset entries according to the following rules:

* We remove all non-ASCII entries
* We remove all package lines, such as _package kotlinx.coroutines.channels_
* We remove half of the import lines

# Evaluation

For evaluation, we used the [Kotlin HumanEval](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval) benchmark:

| **Model name**     | **Kotlin HumanEval Pass Rate** |
|:------------------:|:------------------------------:|
| `base model`       | 26.09                          |
| `fine-tuned model` | **29.19**                      |

# Ethical Considerations and Limitations

CodeLlama-7B-KStack-full and its variants are a new technology that carries risks with use. The testing conducted to date has not covered all scenarios. For these reasons, as with all LLMs, CodeLlama-7B-KStack-full's potential outputs cannot be predicted in advance, and in some instances the model may produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of CodeLlama-7B-KStack-full, developers should perform safety testing and tuning tailored to their specific applications of the model.