---
language:
- en
tags:
- langchain
- python
- yolov8
- vertexai
---
# Interplay-AppCoder: A Code Generation LLM
**Iterate's new top-performing Interplay-AppCoder LLM scores 2.97 on usefulness and 2.48 on functional correctness on the ICE benchmark**


The world of LLMs is growing rapidly: several new models and fine-tunes are released daily by the open-source community, startups, and enterprises to perform a wide range of novel tasks.

One of our Iterate.ai R&D Projects has been to experiment with several LLMs, then train a code generation LLM on the latest generative AI frameworks and libraries. Our goal was to generate working and updated code for generative AI projects that we build alongside our enterprise clients.

As part of the process, we have been fine-tuning CodeLlama (7B, 34B) and WizardCoder (15B, 34B). We combined that fine-tuning with our hand-coded dataset covering LangChain, YOLOv8, Vertex AI, and many other modern libraries that we use on a daily basis. The released model is fine-tuned on top of WizardCoder-15B.

The result is Interplay-AppCoder LLM, a brand-new, high-performing code generation model, which we are releasing on October 31, 2023.



## Model Details



- **Developed by:** [Iterate.ai](https://www.iterate.ai/)
- **Language(s) (NLP):** Python, LangChain, YOLOv8, Vertex AI
- **Fine-tuned from model:** [WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)

### Model Demo 


- **Demo:** [https://appcoder.interplay.iterate.ai/](https://appcoder.interplay.iterate.ai/)



## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model is optimized for code generation and cannot be used as a chat model.


## How to Get Started with the Model

Use the code below to get started with the model.
```
# Import libraries from the transformers package and set the Hugging Face repository id
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    pipeline,
    logging
)
model_repo_id ="iterateai/Interplay-AppCoder"
```
#### Load the model in FP16
```
iterate_model = AutoModelForCausalLM.from_pretrained(
    model_repo_id,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map={"": 0},
    trust_remote_code=True
)
# Note: You can quantize the model using a BitsAndBytesConfig (bnb config) parameter to load the model on a T4 GPU
```
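The note above mentions bitsandbytes quantization. As a minimal sketch only (the exact quantization settings below are our assumptions, not the configuration we shipped), 4-bit loading could look like this:

```
# Illustrative sketch: load the model in 4-bit with bitsandbytes for a T4-class GPU.
# The quant_type, compute dtype, and double-quant settings are example choices.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

iterate_model = AutoModelForCausalLM.from_pretrained(
    model_repo_id,
    quantization_config=bnb_config,
    device_map={"": 0},
    trust_remote_code=True,
)
```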
```
# Load the tokenizer and set padding
tokenizer = AutoTokenizer.from_pretrained(model_repo_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
```

```
# Inference

logging.set_verbosity(logging.CRITICAL)
#Sample prompt
prompt = "Can you provide a python script that uses the YOLOv8 model from the Ultralytics library to detect people in an image, draw green bounding boxes around them, and then save the image?"

pipe = pipeline(task="text-generation", model=iterate_model, tokenizer=tokenizer, max_length=1024)
result = pipe(f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response:",temperature=0.1,do_sample=True)
print(result[0]['generated_text'])
```
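The pipeline echoes the full prompt along with the completion. A small follow-up (assuming the prompt template above) keeps only the generated response:

```
# Keep only the text the model generated after the "### Response:" marker.
generated = result[0]["generated_text"]
code_only = generated.split("### Response:")[-1].strip()
print(code_only)
```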
## Sample demo notebook
[Colab notebook](https://colab.research.google.com/drive/1USuNLFxLex-C5tLHYET_nQfpM4ALCbc5?usp=sharing#scrollTo=lNCZTBj1nBsJ)

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

Dataset used for evaluation: [evaluation dataset](https://drive.google.com/file/d/1R6DDyBhcR6TSUYFTgUosJxrvibkR1BHC/view)


#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->
Our CodeGeneration LLM was created and fine-tuned with a new and unique knowledge base. As such, we utilized the newly published ICE score benchmark methodology for evaluating the code generated by the Interplay-AppCoder LLM.

The ICE methodology provides metrics for Usefulness and Functional Correctness as a baseline for scoring code generation.

* Usefulness: measures whether the code output from the model is clear, presented in logical order, and human-readable, and whether it covers all functionalities of the problem statement when compared with the reference code.
* Functional Correctness: an LLM with complex reasoning capabilities is used to conduct unit-test-style checks on the generated code, given the question and the reference code.


We utilized GPT-4 to measure the above metrics and provide a score from 0 to 4. This is the [test dataset](https://drive.google.com/file/d/1R6DDyBhcR6TSUYFTgUosJxrvibkR1BHC/view) and
[Jupyter notebook](https://colab.research.google.com/drive/1USuNLFxLex-C5tLHYET_nQfpM4ALCbc5?usp=sharing) we used to perform the benchmark.

You can read more about the ICE methodology in this [paper](https://openreview.net/pdf?id=RoGZaCsGUW).
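For illustration only, a minimal sketch of an LLM-judge call in the spirit of the ICE usefulness metric is shown below. The prompt wording, model choice, and score parsing are our assumptions for this sketch, not the exact rubric from the ICE paper or our benchmark notebook.

```
# Illustrative sketch of a GPT-4 judge scoring usefulness on a 0-4 scale.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_usefulness(question: str, reference_code: str, generated_code: str) -> float:
    prompt = (
        "You are evaluating generated code against a reference solution.\n"
        f"Problem: {question}\n\nReference code:\n{reference_code}\n\n"
        f"Generated code:\n{generated_code}\n\n"
        "Rate the usefulness of the generated code from 0 to 4, considering clarity, "
        "logical order, readability, and coverage of the problem statement. "
        "Reply with the number only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    match = re.search(r"[0-4](?:\.\d+)?", response.choices[0].message.content)
    return float(match.group()) if match else 0.0
```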


| Model Name | Usefulness (0 - 4) | Functional Correctness (0 - 4) |
|:-----------|:-------------------|:-------------------------------|
| Interplay-AppCoder | 2.968 | 2.476 |
| WizardCoder | 1.825 | 0.603 |



## Can you try it?

Yes, we’ve opened it up. Try it out yourself right here with prompts like these (a hand-written reference sketch for the first prompt follows the list):

* Can you provide a Python script that uses the YOLOv8 model from the Ultralytics library to detect people in an image, draw green bounding boxes around them, and then save the image?
* Write Python code using LangChain to do question answering over a blog post.
* Write Python code using the LangChain library to retrieve information from a SQL database and a vector store.
* How can I set up clients for the job service, model service, endpoint service, and prediction service using the Vertex AI client library in Python?
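For reference, here is a minimal hand-written sketch of what the first prompt targets, using the Ultralytics YOLOv8 API and OpenCV. This is not Interplay-AppCoder output, and the checkpoint and file names are example assumptions:

```
# Illustrative reference solution: detect people, draw green boxes, save the image.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pretrained COCO checkpoint (example choice)
image = cv2.imread("people.jpg")  # hypothetical input image path
results = model(image)

for box in results[0].boxes:
    if int(box.cls[0]) == 0:      # COCO class 0 is "person"
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)  # green bounding box

cv2.imwrite("people_detected.jpg", image)
```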