Wajdi1976 committed on
Commit 0dcff32 · verified · 1 parent: 6a8bb66

Update README.md

Files changed (1):
  1. README.md +0 -16
README.md CHANGED

```diff
@@ -12,22 +12,6 @@ language:
 datasets:
 - Yasbok/Alpaca_arabic_instruct
 ---
-First, Load the Model:
-from unsloth import FastLanguageModel
-import torch
-max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
-dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
-load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
-
-
-model, tokenizer = FastLanguageModel.from_pretrained(
-    model_name = "Omartificial-Intelligence-Space/al-baka-4bit-llama3-8b",
-    max_seq_length = max_seq_length,
-    dtype = dtype,
-    load_in_4bit = load_in_4bit,
-    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
-)
-
 # Uploaded model
 
 - **Developed by:** Wajdi1976
```
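The dtype comment in the removed snippet (`None` for auto detection, float16 for Tesla T4/V100, bfloat16 for Ampere+) amounts to a check of the GPU's compute capability. A minimal sketch of that selection logic, where `pick_dtype` and its `compute_capability` argument are hypothetical helpers for illustration, not part of the unsloth API:

```python
def pick_dtype(compute_capability: tuple) -> str:
    """Choose a training/inference dtype from a GPU compute capability.

    Hypothetical helper mirroring the removed snippet's comment:
    Ampere and newer GPUs (compute capability major version >= 8)
    support bfloat16; older GPUs like Tesla T4 (7.5) or V100 (7.0)
    fall back to float16.
    """
    major, _minor = compute_capability
    return "bfloat16" if major >= 8 else "float16"


print(pick_dtype((8, 0)))  # A100 (Ampere) -> bfloat16
print(pick_dtype((7, 5)))  # Tesla T4 -> float16
```

In real code this capability tuple would come from `torch.cuda.get_device_capability()`, which is what automatic detection (`dtype = None`) relies on.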