---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
  - messages:
      - role: user
        content: What is your favorite condiment?
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
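The card is tagged `peft`, so the published weights may be a LoRA adapter rather than a fully merged checkpoint. If loading with `AutoModelForCausalLM` in the usage snippet below fails, a minimal sketch using the `peft` library (assuming the repo contains an `adapter_config.json`) would be:

```python
# Hedged sketch: only applies if this repo is a PEFT/LoRA adapter,
# i.e. it contains adapter_config.json. Requires: pip install peft
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_path = "Ebrahimaabdelghfar/Ubuntu_assistant_Gemma2B"

# AutoPeftModelForCausalLM loads the base model named in the adapter
# config and applies the adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
```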

# Usage

First, install the Hugging Face Hub CLI and authenticate (substitute your own access token for the masked value):

```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli login --token "************" --add-to-git-credential
```
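If you are working inside a Python session or notebook rather than a shell, the same authentication can be done programmatically with `huggingface_hub.login`, which ships in the same package:

```python
# Alternative to the CLI login, convenient inside notebooks and scripts.
from huggingface_hub import login

# Substitute your own token; avoid hard-coding real tokens in shared code.
login(token="************")
```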

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path = "Ebrahimaabdelghfar/Ubuntu_assistant_Gemma2B"

# Disable the memory-efficient and flash SDP attention kernels, which
# can misbehave with some Gemma checkpoints; fall back to the math kernel.
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_flash_sdp(False)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Example prompt: ask the assistant what sudo does.
messages = [
    {"role": "user", "content": "What does sudo do?"}
]

input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

print(response)
```
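For interactive use, printing tokens as they are generated often reads better than waiting for the full completion. Here is a minimal sketch reusing the `model` and `tokenizer` from above with transformers' `TextStreamer` (a standard transformers utility, not something specific to this model); the example prompt is just an illustration:

```python
from transformers import TextStreamer

# TextStreamer prints decoded tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "How do I list open ports on Ubuntu?"}
]
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
model.generate(input_ids.to(model.device), max_new_tokens=100, streamer=streamer)
```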