---
library_name: transformers
datasets:
- LooksJuicy/ruozhiba
---

## Project overview

- GPU: T4
- Model: Llama3-8B
- Dataset: LooksJuicy/ruozhiba
- Fine-tuning method: QLoRA (see the sketch at the end of this card)

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "snowfly/Llama-3-8B-QLoRA-ruozhiba"
device = "cuda:0"

# Load the model and move it to the same device the inputs will be placed on
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prompt follows the "### Human: ... ### Assistant:" format used during fine-tuning
# ("如何写代码?" means "How do I write code?")
text = "### Human:如何写代码? ### Assistant:"

inputs = tokenizer(text, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
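
## QLoRA fine-tuning sketch

The project overview only lists the fine-tuning configuration (T4 GPU, Llama3-8B, QLoRA). Below is a minimal sketch of how such a QLoRA setup is typically wired up with `peft` and `bitsandbytes`; the base checkpoint name and the hyperparameters (rank, alpha, target modules, dropout) are illustrative assumptions, not the exact values used to train this model.

```python
# Minimal QLoRA setup sketch. Assumes peft and bitsandbytes are installed;
# all hyperparameters below are illustrative, not this model's actual values.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint

# 4-bit NF4 quantization is what lets an 8B model fit on a 16 GB T4
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # T4 does not support bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the attention projections; rank/alpha are illustrative
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

With the quantized base frozen and only the low-rank adapters updated, the trainable parameter count stays small enough for single-T4 training; the resulting adapters can then be merged or loaded alongside the base model for inference as shown in the Usage section.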