---
license: apache-2.0
language:
- en
datasets:
- flytech/python-codes-25k
- jinaai/code_exercises
- kye/all-huggingface-python-code
- S-Dreamer/my-distiset-3be4288b
metrics:
- codeparrot/apps_metric
- code_eval
- f1
- accuracy
- rouge
pipeline_tag: text2text-generation
library_name: transformers
tags:
- code
---
<p align="left">
<img src="PyCodeT5.png" alt="Python AI Icon" width="25%">
</p>
# Model Card for PyCodeT5
PyCodeT5 (CodeT5 Python Functions) is a variant of CodeT5 fine-tuned for generating and understanding Python functions: it turns natural language descriptions into working code and helps optimize, debug, and refactor existing Python. See the full [Model Description](#model-description) below.
---
## Table of Contents
- [Model Card for PyCodeT5](#model-card-for-pycodet5)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Glossary [optional]](#glossary-optional)
- [More Information [optional]](#more-information-optional)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
---
## Model Details
### Model Description
CodeT5 Python Functions is a specialized variant of the CodeT5 model, fine-tuned for generating and understanding Python functions. It is designed to assist in transforming natural language descriptions into functional Python code, as well as optimizing existing code by applying Pythonic conventions and best practices. This model can generate function definitions, implement logical flows, and assist with debugging and refactoring Python code. It is ideal for developers, learners, and AI-powered programming assistants.
- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** apache-2.0
- **Parent Model:** More information needed
- **Resources for more information:**
- [GitHub Repo](https://github.com/Salesforce/CodeT5)
  - [Associated Paper](https://arxiv.org/abs/2103.02720)
---
## Uses
### Direct Use
- **Generate Python Functions:** Convert natural language descriptions into functional Python code (see the sketch after this list).
- **Optimize Python Code:** Apply Pythonic conventions and best practices to improve code quality.
- **Assist with Debugging and Refactoring:** Help users identify and fix issues in Python code.
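A minimal sketch of the first use case, assuming the checkpoint name given in the Getting Started section below (verify that it resolves on the Hub); the prompt and decoding settings are illustrative:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Checkpoint name taken from the "How to Get Started" section of this card;
# substitute the actual Hub ID if it differs.
model_name = "Salesforce/CodeT5-Python-functions"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# A natural-language description of the desired function
prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt")

# Beam search tends to produce more coherent function bodies than greedy decoding
outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```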
### Downstream Use [Optional]
- **Integration with AI-powered programming assistants:** Use as a backend model for intelligent code completion or review tools.
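Used as a backend, the model can sit behind a simple `pipeline` wrapper. A minimal sketch, where `suggest_function` is a hypothetical helper name and the checkpoint is the one used elsewhere in this card:
```python
from transformers import pipeline

# Wrap the model as a text2text-generation pipeline for an assistant backend
codegen = pipeline("text2text-generation", model="Salesforce/CodeT5-Python-functions")

def suggest_function(description: str) -> str:
    """Return a candidate Python function for a natural-language request."""
    return codegen(description, max_new_tokens=128)[0]["generated_text"]

print(suggest_function("Write a function that reverses a string."))
```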
### Out-of-Scope Use
- **Non-Python Code Generation:** This model is specifically trained for Python code generation and is not suitable for other languages.
- **Sensitive Applications:** Do not rely on this model in mission-critical systems or in environments where safety or security is paramount.
---
## Bias, Risks, and Limitations
This model, like other large language models, may reflect biases present in its training data. For example, it may reproduce insecure coding patterns, outdated practices, or biased assumptions found in the source repositories.
### Recommendations
- **Careful Use in Sensitive Domains:** When applying the model in high-risk or security-critical environments, extra validation and review processes should be in place.
- **Code Review:** Always ensure that code generated by this model undergoes thorough human review, especially in sensitive or production environments.
---
## Training Details
### Training Data
The model was fine-tuned on a dataset of Python code from various open-source repositories. It has been specifically trained to understand Python function structures and best practices.
### Training Procedure
- **Preprocessing:** The training data underwent standard preprocessing steps, such as tokenization and cleaning, to ensure quality input for fine-tuning (an assumed sketch of a typical setup follows this list).
- **Speeds, Sizes, Times:** More information needed on training throughput, checkpoint size, and wall-clock training time.
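The exact pipeline is not documented here; the following is an assumed sketch of a typical seq2seq setup, pairing a natural-language description (encoder input) with its reference function (labels), illustrated with the public `Salesforce/codet5-base` tokenizer:
```python
from transformers import AutoTokenizer

# Assumed preprocessing, not the documented pipeline: tokenize the description
# as the encoder input and the reference function as the decoder labels.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")

example = {
    "description": "Return the sum of two numbers.",
    "code": "def add(a, b):\n    return a + b",
}

model_inputs = tokenizer(example["description"], max_length=256, truncation=True)
labels = tokenizer(text_target=example["code"], max_length=256, truncation=True)
model_inputs["labels"] = labels["input_ids"]
```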
---
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The testing data consists of Python code from a variety of open-source repositories and function-oriented tasks.
#### Factors
- **Task Complexity:** Evaluation includes both simple function generation and more complex refactoring tasks.
- **Code Quality:** Assessed based on the application of Pythonic principles like readability, clarity, and efficiency.
#### Metrics
- **Accuracy:** Measures the functional correctness of the generated code (see the `code_eval` sketch after this list).
- **Code Quality:** Evaluates how well the generated code follows Pythonic best practices.
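This card's metadata lists `code_eval`, which scores functional correctness as pass@k by executing candidates against unit tests. A minimal sketch with an illustrative problem:
```python
import os
import evaluate

# code_eval executes model-generated code, which is inherently risky;
# the metric requires explicit opt-in via this environment variable.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"
code_eval = evaluate.load("code_eval")

# One illustrative problem: a unit test plus a list of candidate completions
test_cases = ["assert add(2, 3) == 5"]
candidates = [["def add(a, b):\n    return a + b"]]

pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1])
print(pass_at_k)  # e.g. {'pass@1': 1.0}
```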
---
## Results
More information on the evaluation results is needed to fully assess the model’s performance.
---
## Model Examination
A detailed examination of the model's behavior, including edge cases, is needed to identify areas of improvement.
---
## Environmental Impact
- **Hardware Type:** More information needed
- **Cloud Provider:** More information needed
- **Carbon Emitted:** More information needed
---
## Technical Specifications [Optional]
### Model Architecture and Objective
The model is based on the T5 encoder-decoder Transformer architecture, fine-tuned with a sequence-to-sequence objective for Python code generation.
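For reference, the encoder-decoder layout can be inspected from the public `Salesforce/codet5-base` config (used here because the fine-tuned checkpoint's Hub ID is not confirmed in this card):
```python
from transformers import AutoConfig

# Inspect the T5 encoder-decoder layout via the public base checkpoint
config = AutoConfig.from_pretrained("Salesforce/codet5-base")
print(config.model_type)          # "t5": encoder-decoder Transformer
print(config.num_layers)          # encoder depth
print(config.num_decoder_layers)  # decoder depth
print(config.d_model)             # hidden size
```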
### Compute Infrastructure
More details about the compute resources used in training and deployment are needed.
#### Hardware
More information needed.
#### Software
More information needed.
---
## Citation
**BibTeX:**
More information needed.
**APA:**
More information needed.
---
## Glossary [Optional]
More information needed.
---
## More Information [Optional]
More information needed.
---
## Model Card Authors [Optional]
S de Jager
---
## Model Card Contact
More information needed.
---
## How to Get Started with the Model
To get started, use the code below to load and use the PyCodeT5 model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# CodeT5 is an encoder-decoder (T5) model, so it is loaded as a
# sequence-to-sequence model rather than a causal LM
model_name = 'Salesforce/CodeT5-Python-functions'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example input: a function signature to complete
input_text = "def sum(a, b):"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate code, capping the output length explicitly
outputs = model.generate(**inputs, max_new_tokens=64)
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code)
```
</details>
---