---
language:
- zh
thumbnail: >-
https://s3.amazonaws.com/moonup/production/uploads/1677459920577-63b8e3432adad59f41dc65f4.jpeg?w=200&h=200&f=face
tags:
- bloom
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
widget:
- text: "问:真昼是谁?\n答:"
---
# Bloom 7B1 LightNovel ZH_CN LoRA Finetuned
BigScience Large Open-science Open-access Multilingual Language Model (BLOOM) with 7.1 billion parameters, finetuned on Chinese translations of Japanese light novels using LoRA via the PEFT library.
## Model Details
I downloaded 50 Chinese-translated Japanese light novels and finetuned the model directly on their raw text, with no additional preprocessing.
Trained by Rorical
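
For context, here is a minimal sketch of how a LoRA adapter can be attached to BLOOM-7B1 with PEFT for this kind of finetuning. The rank, alpha, dropout, and other hyperparameters below are illustrative assumptions, not the actual training configuration:
```python
# Hypothetical training-side sketch; hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1")
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                # assumed adapter rank
    lora_alpha=32,                       # assumed scaling factor
    lora_dropout=0.05,                   # assumed dropout
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices train
# The adapter is then finetuned on the raw novel text with the usual
# causal language modeling objective and saved with model.save_pretrained().
```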
## Use
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "Rorical/bloom-7b1-lightnovel-lora"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base BLOOM-7B1 model in 8-bit to reduce GPU memory usage.
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
    cache_dir="cache",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, cache_dir="cache")

# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, peft_model_id, cache_dir="cache")

prompt = "你是谁?\n"  # "Who are you?"
batch = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=150, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```
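
The widget prompt in the metadata suggests a question-and-answer shape. Reusing the model and tokenizer loaded above, prompting in that format looks like this (the question itself is just the widget example):
```python
# Q/A-style prompt matching the widget example from the model card metadata.
qa_prompt = "问:真昼是谁?\n答:"  # "Q: Who is Mahiru?\nA:"
batch = tokenizer(qa_prompt, return_tensors="pt").to("cuda")
with torch.cuda.amp.autocast():
    output_tokens = model.generate(**batch, max_new_tokens=150, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```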