Llama-3.1-Minitron-4B-Chat

This is an instruction-tuned version of nvidia/Llama-3.1-Minitron-4B-Width-Base. It underwent supervised fine-tuning on 64k instruction-response pairs from the teknium/OpenHermes-2.5 dataset, using a single A100 40GB GPU.

How to use

Chat Format

Given the nature of the training data, the Llama-3.1-Minitron-4B chat model is best suited for prompts in the following chat format. You can provide the prompt as a question using this generic template:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Question?<|im_end|>
<|im_start|>assistant

For example:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
How to explain Internet for a medieval knight?<|im_end|>
<|im_start|>assistant

where the model generates the text after <|im_start|>assistant.
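
In practice you rarely need to build this string by hand: the format above corresponds to a ChatML-style chat template, so tokenizer.apply_chat_template can construct the prompt from a list of messages. A minimal sketch, assuming the model's tokenizer ships with this chat template:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rasyosef/Llama-3.1-Minitron-4B-Chat")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant header
# so the model continues with the assistant's reply
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)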

Sample inference code

Support for this model will be added in the upcoming transformers release. In the meantime, please install the library from source:

pip install git+https://github.com/huggingface/transformers
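
To confirm the source build is active, you can check the installed version; a dev version string (for example, one ending in .dev0) indicates a source install:

python -c "import transformers; print(transformers.__version__)"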

This code snippet shows how to quickly get started running the model on a GPU:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_id = "rasyosef/Llama-3.1-Minitron-4B-Chat"

# Load the model in bfloat16 and let accelerate place it on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Multi-turn conversation; the pipeline applies the tokenizer's chat template automatically
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
    {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
    {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding (do_sample=False); return only the newly generated tokens
generation_args = {
    "max_new_tokens": 256,
    "return_full_text": False,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])

Note: If you want to use flash attention, call AutoModelForCausalLM.from_pretrained() with attn_implementation="flash_attention_2".
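
For example, a sketch of the same load call with flash attention enabled (this assumes the flash-attn package is installed and the GPU supports it):

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    # requires the flash-attn package and a compatible (Ampere or newer) GPU
    attn_implementation="flash_attention_2",
)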
