Files changed (1)
  1. README.md +23 -8
README.md CHANGED
@@ -1,10 +1,25 @@
  ---
- title: README
- emoji: 🏆
- colorFrom: purple
- colorTo: gray
- sdk: static
- pinned: false
  ---
-
- Edit this `README.md` markdown file to author your organization card.
  ---
+ {}
  ---
+ Welcome! Join our family to learn together and to promote the advancement of machine learning in China!
+ Now, let's get started.
+ {
+ 1. If you don't have a high-performance GPU, I recommend renting one.
+ There are cheap GPUs available for students on the Internet, and some are even free.
+ 2. Some students may not have a foundation in machine learning, but there is no need to be nervous.
+ If you just want to learn how to use large models, it is still easy.
+ 3. Follow the steps below, and you will gain a basic understanding of how to use large models.<br>
+ "Tools": Python, PyTorch, CUDA, Anaconda (Miniconda), PyCharm (VS Code). I think this part is easy for you, and there are many courses on bilibili.<br>
+ "Usage":<br>
+ >>First: install the Transformers library and download the tokenizer and pretrained model; you can use the Tsinghua source (清华源) and hf-mirror to download them.<br>
+ >>Second: import -> load the tokenizer -> load the pretrained model -> feed in your sentence or image -> call model.generate.<br>
+ >>>>Example:<br>
+ >>>>from transformers import GPT2Tokenizer, GPT2LMHeadModel<br>
+ >>>>tokenizer = GPT2Tokenizer.from_pretrained('gpt2')<br>
+ >>>>model = GPT2LMHeadModel.from_pretrained('gpt2')<br>
+ >>>>text = "Replace me by any text you'd like."<br>
+ >>>>encoded_input = tokenizer(text, return_tensors='pt')<br>
+ >>>>output = model.generate(**encoded_input)<br>
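+ Put together, the example above becomes the runnable sketch below. The decoding step and the max_new_tokens setting are illustrative additions, not something this card prescribes.<br>
+ ```python
+ # Load GPT-2, encode a prompt, generate a continuation, and decode it.
+ from transformers import GPT2LMHeadModel, GPT2Tokenizer
+
+ # If you download through hf-mirror, set the HF_ENDPOINT environment variable
+ # (e.g. HF_ENDPOINT=https://hf-mirror.com) before running this script.
+ tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
+ model = GPT2LMHeadModel.from_pretrained("gpt2")  # the LM-head model provides .generate()
+
+ text = "Replace me by any text you'd like."
+ encoded_input = tokenizer(text, return_tensors="pt")
+
+ output_ids = model.generate(**encoded_input, max_new_tokens=30)
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```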
+ "Customization": Fine-tuning your own model is not an easy job, but one tip is to start with LoRA. LoRA, as a PEFT method, is friendly for students (a minimal sketch follows below). There are other ways to fine-tune a model, such as prefix-tuning, P-tuning, RLHF, etc. You can also try data mounting.
+ }
+ Nothing is difficult to the man who will try!