Welcome! Join our family to learn together and promote the advancement of machine learning in China. Now, let's start {
- If you don't have a high-performance GPU, I recommend renting one. There are cheap GPUs available for students on the Internet, and some are even free.
- Some students may not have a foundation in machine learning, but there is no need to be nervous. If you just want to know how to use large models, it's still easy.
- Follow the steps below, and you will gain a basic understanding of how to use large models.
"Tool": ||-python-||-pytorch-||-cuda-||-anaconda(miniconda)-||-pycharm(vscode)-||. I think it is easy for you, and there are many course on bilibili.
"usage":
>>first --- install the "Transformers" library, then download a tokenizer and a pretrained model; you can use the Tsinghua mirror (清华源) and hf-mirror to download them faster (see the download sketch after the example below).
>>second --- |"import"| -> |"Tokenizer"| -> |load "Pretrained-model"| -> |input your sentence or image| -> |"model.generate"|
>>>>example:
>>>>||----# GPT2LMHeadModel (not the bare GPT2Model) has the language-model head that generate() needs----||
>>>>||----from transformers import GPT2Tokenizer, GPT2LMHeadModel----||
>>>>||----tokenizer = GPT2Tokenizer.from_pretrained('gpt2')----||
>>>>||----model = GPT2LMHeadModel.from_pretrained('gpt2')----||
>>>>||----text = "Replace me by any text you'd like."----||
>>>>||----encoded_input = tokenizer(text, return_tensors='pt')----||
>>>>||----output = model.generate(**encoded_input)  # unpack the dict of tensors----||
>>>>||----print(tokenizer.decode(output[0], skip_special_tokens=True))  # turn token ids back into text----||
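For the "first" step, here is a minimal download sketch. It assumes the hf-mirror endpoint (https://hf-mirror.com) and the Tsinghua PyPI index; note that HF_ENDPOINT must be set before any Hugging Face library is imported.
>>>>||----# install the library from the Tsinghua mirror, e.g.:----||
>>>>||----#   pip install transformers -i https://pypi.tuna.tsinghua.edu.cn/simple----||
>>>>||----import os----||
>>>>||----os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # set BEFORE importing transformers----||
>>>>||----from transformers import AutoTokenizer, AutoModelForCausalLM----||
>>>>||----tokenizer = AutoTokenizer.from_pretrained("gpt2")  # downloads go through the mirror----||
>>>>||----model = AutoModelForCausalLM.from_pretrained("gpt2")  # cached in ~/.cache/huggingface----||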
"customized": It's not a easy job. But I can give a tips that you can start with Lora. Lora as PEFT is friendly for students. And there are other ways to fine-tune the model like prefix-tuning,P-tuning,RLHF,etc. Also you can try Data mounting. } Nothing is difficult to the man who will try!