---
language:
- fr
pipeline_tag: text-generation
library_name: transformers
inference: false
tags:
- LLM
- llama
- llama-2
---

<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-2-7b-chat/resolve/v2.0/logo_v2.jpg" alt="Vigogne" style="width: 30%; min-width: 300px; display: block; margin: auto;">
</p>

# Vigogne-2-7B-Chat-V2.0: A Llama-2-based French chat LLM

Vigogne-2-7B-Chat-V2.0 is a French chat LLM based on [LLaMA-2-7B](https://ai.meta.com/llama), optimized to generate helpful and coherent responses in user conversations.

Check out our [blog](https://github.com/bofenghuang/vigogne/blob/main/blogs/2023-08-17-vigogne-chat-v2_0.md) and [GitHub repository](https://github.com/bofenghuang/vigogne) for more information.

**Usage and License Notices**: Vigogne-2-7B-Chat-V2.0 follows Llama-2's [usage policy](https://ai.meta.com/llama/use-policy). A significant portion of the training data is distilled from GPT-3.5-Turbo and GPT-4; please use the model cautiously to avoid any violation of OpenAI's [terms of use](https://openai.com/policies/terms-of-use).

## Changelog

All previous versions are accessible through branches; any earlier release can be pinned with the `revision` argument, as sketched after this list.

- **V1.0**: Trained on 420K chat examples.
- **V2.0**: Trained on 520K examples. Check out our [blog](https://github.com/bofenghuang/vigogne/blob/main/blogs/2023-08-17-vigogne-chat-v2_0.md) for more details.
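
A minimal sketch, assuming the branch names match the version tags above (e.g. `v1.0`):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load an earlier release by pinning its branch name
# (the "v1.0" branch name is assumed from the changelog above)
tokenizer = AutoTokenizer.from_pretrained("bofenghuang/vigogne-2-7b-chat", revision="v1.0")
model = AutoModelForCausalLM.from_pretrained("bofenghuang/vigogne-2-7b-chat", revision="v1.0")
```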

## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TextStreamer

# Prompt-building helper from the Vigogne repository: https://github.com/bofenghuang/vigogne
from vigogne.preprocess import generate_inference_chat_prompt

model_name_or_path = "bofenghuang/vigogne-2-7b-chat"
revision = "v2.0"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, revision=revision, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, revision=revision, torch_dtype=torch.float16, device_map="auto")

# Stream generated tokens to stdout, skipping the prompt and special tokens
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)


def infer(
    utterances,
    system_message=None,
    temperature=0.1,
    top_p=1.0,
    top_k=0,
    repetition_penalty=1.1,
    max_new_tokens=1024,
    **kwargs,
):
    # Build the chat prompt from the conversation history
    prompt = generate_inference_chat_prompt(utterances, tokenizer, system_message=system_message)
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
    input_length = input_ids.shape[1]

    generated_outputs = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(
            temperature=temperature,
            do_sample=temperature > 0.0,  # greedy decoding when temperature is 0
            top_p=top_p,
            top_k=top_k,
            repetition_penalty=repetition_penalty,
            max_new_tokens=max_new_tokens,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
            **kwargs,
        ),
        streamer=streamer,
        return_dict_in_generate=True,
    )
    # Keep only the newly generated tokens, excluding the prompt
    generated_tokens = generated_outputs.sequences[0, input_length:]
    generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
    return generated_text


user_query = "Expliquez la différence entre DoS et phishing."
infer([[user_query, ""]])
```
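
Each element of `utterances` is a `[user, assistant]` pair, with an empty string in the final assistant slot marking the turn to be generated (assuming `generate_inference_chat_prompt` treats the list as ordered conversation history). A minimal multi-turn sketch, with a purely illustrative follow-up query:

```python
# First turn: the empty assistant slot marks the turn to generate
first_answer = infer([[user_query, ""]])

# Follow-up turn: pass the previous exchange as history and leave the
# new assistant slot empty (the follow-up query is a hypothetical example)
follow_up_query = "Donnez un exemple concret de chacune de ces attaques."
infer([[user_query, first_answer], [follow_up_query, ""]])
```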

You can use the Google Colab notebook below to run inference with the Vigogne chat models.

<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_chat.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
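
If GPU memory is tight, the model can also be loaded in 4-bit. This is a minimal sketch, assuming `bitsandbytes` and a recent `transformers` release are installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with float16 compute (requires the bitsandbytes package)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "bofenghuang/vigogne-2-7b-chat",
    revision="v2.0",
    quantization_config=quantization_config,
    device_map="auto",
)
```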

## Limitations

Vigogne is still under development, and many limitations remain to be addressed. Please note that the model may generate harmful or biased content, incorrect information, or generally unhelpful answers.