Meforgers committed on
Commit e3fefad · verified · 1 Parent(s): 53fc189

Update README.md

Files changed (1)
  1. README.md +28 -19
README.md CHANGED
@@ -1,29 +1,38 @@
 ---
+tags:
+- Generative AI
+- text-generation-inference
+- text-generation
+- peft
+library_name: transformers
 license: apache-2.0
-language:
-- tr
-- en
-- es
 ---
-Can you use that LLM with that code;
 
+# Model Trained By Meforgers
+
+This model was trained by Meforgers for space-related projects and more.
+
+# Usage
+
+```python
 from unsloth import FastLanguageModel
-model, tokenizer = FastLanguageModel.from_pretrained(
-    model_name = "Meforgers/Aixrav",
-    max_seq_length = max_seq_length,
-    dtype = dtype,
-    load_in_4bit = load_in_4bit,
-)
-FastLanguageModel.for_inference(model)
+model, tokenizer = FastLanguageModel.from_pretrained(
+    model_name = "Meforgers/Aixrav",
+    max_seq_length = max_seq_length,
+    dtype = dtype,
+    load_in_4bit = load_in_4bit,
+)
+FastLanguageModel.for_inference(model)
 
 inputs = tokenizer(
 [
-    alpaca_prompt.format(
-        "Can u text me a basic code?", # instruction
-        "", # input
-        "", # output - leave this blank for generation!
-    )
+    alpaca_prompt.format(
+        "Can u make basic python code?", # instruction side
+        "", # input
+        "", # output - leave this blank for generation!
+    )
 ], return_tensors = "pt").to("cuda")
 
-outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
-tokenizer.batch_decode(outputs)
+outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
+tokenizer.batch_decode(outputs)
+```
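Note that the README snippet references `max_seq_length`, `dtype`, `load_in_4bit`, and `alpaca_prompt` without defining them; in Unsloth's example notebooks these are set up before the `from_pretrained` call. A minimal sketch of those definitions, assuming typical Unsloth defaults (the exact values used for this model are not stated in the commit):

```python
# Hypothetical setup for the README snippet above; these values are common
# Unsloth notebook defaults and are NOT confirmed by the Meforgers/Aixrav repo.
max_seq_length = 2048   # context length used when loading the model
dtype = None            # None lets Unsloth auto-detect float16 / bfloat16
load_in_4bit = True     # 4-bit quantization to reduce VRAM usage

# Standard Alpaca-style template with slots for instruction, input, response.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

# The third slot is left empty so the model generates the response.
prompt = alpaca_prompt.format("Can u make basic python code?", "", "")
```

With these in scope, the snippet in the README runs as written on a CUDA machine with `unsloth` installed.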