---
license: apache-2.0
datasets:
  - emre/llama-2-13b-code-chat
tags:
  - code
---

# 🦙💻 CodeLlama

`emre/llama-2-13b-code-chat` is a Llama 2 version of CodeAlpaca.

## 🔧 Training

This model is based on `llama-2-13b-chat-hf`, fine-tuned with QLoRA on the `mlabonne/CodeLlama-2-20k` dataset. It was trained on a Colab Pro+ instance. It is designed mainly for educational purposes rather than production inference, and it may be used exclusively within BBVA Group, GarantiBBVA, and their subsidiaries.
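For reference, the rough shape of such a QLoRA run looks like the sketch below. The exact hyperparameters, LoRA rank, and sequence length used for this model are not documented in this card, so the values here are illustrative assumptions only:

```python
# Minimal QLoRA fine-tuning sketch (illustrative values, not the actual recipe).
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-13b-chat-hf"
dataset = load_dataset("mlabonne/CodeLlama-2-20k", split="train")

# 4-bit NF4 quantization keeps the 13B base model within a single-GPU budget.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA adapters on the attention projections; rank and alpha are assumptions.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed column name in the dataset
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=500,
        fp16=True,
    ),
)
trainer.train()
```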

## 💻 Usage

```python
# pip install transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "emre/llama-2-13b-code-chat"
prompt = "Write Python code to generate an array with all the numbers from 1 to 100"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Llama 2 chat models expect the [INST] ... [/INST] prompt wrapper.
sequences = pipeline(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Output:

Here is a Python code to generate an array with all the numbers from 1 to 100:

```
numbers = []
for i in range(1,101):
    numbers.append(i)
```

This code generates an array with all the numbers from 1 to 100 in Python. It uses a loop that iterates over the range of numbers from 1 to 100, and for each number, it appends that number to the array 'numbers'. The variable 'numbers' is initialized to a list, and its length is set to 101 by using the range of numbers (0-99).
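
The `<s>[INST] ... [/INST]` wrapper in the usage snippet is the standard Llama 2 chat prompt template, which this model inherits from `llama-2-13b-chat-hf`. For multi-turn conversations, a hypothetical helper (not part of this repository) might assemble that template as follows, assuming the usual `<<SYS>>` system-prompt convention:

```python
# Hypothetical helper: formats a conversation in the Llama 2 chat template.
# A single unanswered turn reduces to "<s>[INST] {prompt} [/INST]".
def build_prompt(system_prompt, turns):
    """turns: list of (user, assistant) pairs; assistant is None for the
    final, unanswered user message."""
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(turns):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt

print(build_prompt(
    "You are a helpful coding assistant.",
    [("Write Python code to reverse a string", None)],
))
```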