---
language:
  - en
pipeline_tag: text-generation
tags:
  - esper
  - esper-2
  - valiant
  - valiant-labs
  - llama
  - llama-3.2
  - llama-3.2-instruct
  - llama-3.2-instruct-3b
  - llama-3
  - llama-3-instruct
  - llama-3-instruct-3b
  - 3b
  - code
  - code-instruct
  - python
  - dev-ops
  - terraform
  - azure
  - aws
  - gcp
  - architect
  - engineer
  - developer
  - conversational
  - chat
  - instruct
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets:
  - sequelbox/Titanium
  - sequelbox/Tachibana
  - sequelbox/Supernova
model-index:
  - name: ValiantLabs/Llama3.2-3B-Esper2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-Shot)
          type: Winogrande
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 65.27
            name: acc
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: ARC Challenge (25-Shot)
          type: arc-challenge
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 43.17
            name: normalized accuracy
model_type: llama
license: llama3.2
---


Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.2 3b.

- Expertise-driven: an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts, and more!
- Real-world problem solving and high-quality code-instruct performance within the Llama 3.2 Instruct chat format.
- Finetuned on synthetic DevOps-instruct and code-instruct data generated with Llama 3.1 405b.
- Overall chat performance supplemented with generalist chat data.

Try our code-instruct AI assistant Enigma!

## Version

This is the 2024-10-03 release of Esper 2 for Llama 3.2 3b.

Esper 2 is also available for Llama 3.1 8b!

Esper 2 will be coming to more model sizes soon :)

## Prompting Guide

Esper 2 uses the Llama 3.2 Instruct prompt format. The example script below can be used as a starting point for general chat:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.2-3B-Esper2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=2048,
)

print(outputs[0]["generated_text"][-1])
```
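If you want to see what the pipeline sends to the model, the sketch below shows the general shape of the Llama 3 Instruct chat format. The helper function is hypothetical, for illustration only; in practice, `tokenizer.apply_chat_template` is the reliable way to produce this string.

```python
# Minimal sketch (not the official implementation) of the Llama 3 Instruct
# chat format. Prefer tokenizer.apply_chat_template for real use.
def format_llama3_chat(messages):
    """Render a list of {'role', 'content'} dicts into a Llama 3 Instruct prompt."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama3_chat([
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
])
print(prompt)
```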

## The Model

Esper 2 is built on top of Llama 3.2 3b Instruct, improving performance with high-quality DevOps, code, and chat data in the Llama 3.2 Instruct prompt style.

Our current version of Esper 2 is trained on DevOps data from sequelbox/Titanium, supplemented by code-instruct data from sequelbox/Tachibana and general chat data from sequelbox/Supernova.


Esper 2 is created by Valiant Labs.

Check out our HuggingFace page for Shining Valiant 2, Enigma, and our other Build Tools models for creators!

We care about open source. For everyone to use.

We encourage others to finetune further from our models.