Why doesn't the example prompt include the prompt format?

#8
by fahadh4ilyas - opened

From the example here:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = 'google/datagemma-rig-27b-it'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='auto',
    torch_dtype=torch.bfloat16,
)

input_text = 'What are some interesting trends in Sunnyvale spanning gender, age, race, immigration, health conditions, economic conditions, crime and education?'
inputs = tokenizer(input_text, return_tensors='pt').to('cuda')

outputs = model.generate(**inputs, max_new_tokens=4096)
answer = tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0].strip()
print(answer)

The input_text has no chat format applied. I thought it should be

input_text = '<bos><start_of_turn>user\nWhat are some interesting trends in Sunnyvale spanning gender, age, race, immigration, health conditions, economic conditions, crime and education?<end_of_turn>\n<start_of_turn>model\n'

So, what is the right input_text?

Google org

Hi @fahadh4ilyas ,

Without a Chat Template: This approach involves tokenizing the input text directly, without adding any context about roles or conversation structure. It’s simply a plain string without any explicit interaction framework.

With a Chat Template: This approach uses a chat template to assign roles and create a structured conversational context for the input. You can ensure the correct chat template is applied by using the tokenizer.apply_chat_template method.

Here’s an example that demonstrates the usage:

[image: example using tokenizer.apply_chat_template]
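The screenshot itself is no longer available; as a minimal sketch of what it likely showed, assuming the Gemma-family chat markup (`<start_of_turn>`/`<end_of_turn>` markers, with the assistant role rendered as `model`), the string `tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)` produces would look roughly like this. The `gemma_chat_format` helper below is hypothetical, written only to illustrate the rendered prompt; the authoritative template ships inside the model's tokenizer config.

```python
def gemma_chat_format(messages, add_generation_prompt=True):
    # Hypothetical sketch of the Gemma-style chat markup, NOT the
    # template shipped with google/datagemma-rig-27b-it itself.
    out = "<bos>"
    for m in messages:
        # Gemma-family templates render the assistant role as "model".
        role = "model" if m["role"] == "assistant" else m["role"]
        out += f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n"
    if add_generation_prompt:
        # Open the model turn so generation starts in the right place.
        out += "<start_of_turn>model\n"
    return out

prompt = gemma_chat_format([
    {"role": "user", "content": "What are some interesting trends in Sunnyvale spanning gender, age, race, immigration, health conditions, economic conditions, crime and education?"}
])
print(prompt)
```

In practice you would not build this string by hand: calling `tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors='pt')` renders and tokenizes the prompt in one step, using the exact template the model was trained with.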

This ensures that the input is formatted according to the conversational structure that instruction-tuned models expect.

Thank you.

Will the response be different between input without a chat template and input with one? Which is recommended?

Google org

Hi @fahadh4ilyas ,

Yes, the responses differ. Go with the chat-template approach: it is designed for conversations and tasks where roles and multi-turn dialogues are needed, while plain tokenization is a basic approach suited to one-off instructions. Please refer to this gist.
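To illustrate why the template matters for multi-turn dialogue, here is a sketch of how a two-question conversation would be rendered under the Gemma-style markup (an assumption for illustration; the exact template lives in the model's tokenizer config, and the history contents below are made up):

```python
# Hypothetical conversation history; in Gemma-style markup the
# assistant's turns use the role name "model".
history = [
    ("user", "What are some interesting trends in Sunnyvale?"),
    ("model", "Sunnyvale's population has grown steadily ..."),
    ("user", "Which of those trends relate to education?"),
]

prompt = "<bos>"
for role, text in history:
    prompt += f"<start_of_turn>{role}\n{text}<end_of_turn>\n"
prompt += "<start_of_turn>model\n"  # generation prompt: the model speaks next

print(prompt)
```

With plain tokenization there is no way to express which turns came from whom, which is why the chat-template approach is the right fit for anything conversational.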

Thank you.
