---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

First, install the Hugging Face Hub CLI and log in with your own access token:

```python
!pip install -U "huggingface_hub[cli]"
!huggingface-cli login --token "************" --add-to-git-credential
```

Then load the model and run a chat-style generation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path = "Ebrahimaabdelghfar/Ubuntu_assistant_Gemma2B"

# Disable memory-efficient and flash scaled-dot-product attention kernels
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_flash_sdp(False)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "what sudo do?"
messages = [
    {"role": "user", "content": "what sudo do?"}
]

input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=100)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
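If you would rather see tokens printed as they are produced instead of waiting for the full completion, transformers provides a `TextStreamer` utility that decodes and prints tokens during generation. This is a minimal optional sketch, reusing the `model`, `tokenizer`, and `input_ids` from the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated;
# skip_prompt=True avoids re-printing the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    input_ids.to(model.device),
    max_new_tokens=100,
    streamer=streamer,
)
```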