---
language:
  - fr
pipeline_tag: text-generation
library_name: transformers
inference: false
tags:
  - LLM
  - llama
  - llama-2
---

# Vigogne-2-7B-Chat-V2.0: A Llama-2-based French Chat LLM

Vigogne-2-7B-Chat-V2.0 is a French chat LLM, based on LLaMA-2-7B, optimized to generate helpful and coherent responses in user conversations.

Check out our blog and GitHub repository for more information.

**Usage and License Notices**: Vigogne-2-7B-Chat-V2.0 follows Llama-2's usage policy. A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4; please use it cautiously to avoid violating OpenAI's terms of use.

## Changelog

All previous versions are accessible through branches.

- **V1.0**: trained on 420K chat examples.
- **V2.0**: trained on 520K examples. Check out our blog for more details.
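
Since each release lives on its own branch, an earlier version can be pinned by passing its branch name as `revision` (assuming the V1.0 branch is named `v1.0`, following the same pattern as the `v2.0` branch; the loading calls are shown commented out since they download the full model):

```python
model_name_or_path = "bofenghuang/vigogne-2-7b-chat"
revision = "v1.0"  # assumed branch name for the V1.0 release

# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, revision=revision, use_fast=False)
# model = AutoModelForCausalLM.from_pretrained(model_name_or_path, revision=revision)
```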

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer
from vigogne.preprocess import generate_inference_chat_prompt

model_name_or_path = "bofenghuang/vigogne-2-7b-chat"
revision = "v2.0"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, revision=revision, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, revision=revision, torch_dtype=torch.float16, device_map="auto")

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)


def infer(
    utterances,
    system_message=None,
    temperature=0.1,
    top_p=1.0,
    top_k=0,
    repetition_penalty=1.1,
    max_new_tokens=1024,
    **kwargs,
):
    # Build the chat prompt from the conversation history
    prompt = generate_inference_chat_prompt(utterances, tokenizer, system_message=system_message)
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
    input_length = input_ids.shape[1]

    generated_outputs = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(
            temperature=temperature,
            do_sample=temperature > 0.0,
            top_p=top_p,
            top_k=top_k,
            repetition_penalty=repetition_penalty,
            max_new_tokens=max_new_tokens,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
            **kwargs,
        ),
        streamer=streamer,
        return_dict_in_generate=True,
    )
    # Keep only the newly generated tokens (strip the prompt)
    generated_tokens = generated_outputs.sequences[0, input_length:]
    generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
    return generated_text


user_query = "Expliquez la différence entre DoS et phishing."
infer([[user_query, ""]])
```
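
The `utterances` argument appears to be a list of `[user, assistant]` turn pairs, with an empty string in the assistant slot of the turn to be generated (this format is inferred from the call above; the exact prompt layout is handled by `generate_inference_chat_prompt`). A sketch of a multi-turn call, with the hypothetical replies shown as placeholders:

```python
# Hypothetical multi-turn conversation history: each entry is a
# [user_message, assistant_reply] pair; the final assistant slot is
# left empty so the model generates it.
history = [
    ["Expliquez la différence entre DoS et phishing.", "Un DoS rend un service indisponible, tandis que le phishing vole des informations."],
    ["Donnez un exemple de phishing.", ""],
]

# reply = infer(history)     # uses the model and infer() defined above
# history[-1][1] = reply     # store the answer before adding the next turn
```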

You can also use the Google Colab notebook below to run inference with the Vigogne chat models.

Open In Colab

## Limitations

Vigogne is still under development and has many limitations that remain to be addressed. Please note that the model may generate harmful or biased content, incorrect information, or generally unhelpful answers.