|
--- |
|
language: |
|
- de |
|
pipeline_tag: text-generation |
|
tags: |
|
- bloom |
|
- lora |
|
- LLM |
|
--- |
|
|
|
GitHub: https://github.com/abdullahalzubaer/bloom-6b4-clp-german-lora-inference
|
|
|
Dataset used to train the adapter: |
|
|
|
See this thread for more details: https://huggingface.co/asprenger/bloom-6b4-clp-german-instruct-lora/discussions/2
|
|
|
- yizhongw/self_instruct [Translated to German] |
|
- https://huggingface.co/datasets/yizhongw/self_instruct |
|
|
|
This LoRA adapter comes from https://huggingface.co/asprenger/bloom-6b4-clp-german-instruct-lora. Thanks to the author for the adapter! I did not train it.
|
|
|
I initially thought I was uploading the complete bloom-6b4-clp-german model with the adapter merged in, but after pushing I realized that only the adapter had been uploaded. I am still exploring how PEFT works with LoRA :)
|
|
|
Strict requirement for peft:
|
|
|
`peft==0.2.0` |
|
|
|
Requirements:
|
|
|
`pip install transformers accelerate bitsandbytes peft==0.2.0` |
|
|
|
The latest peft releases have breaking changes with bloom-6b4-clp-german and this LoRA adapter; the only way (I think) to get the two working together is to train the base model or the adapter
again (I am not sure yet).
|
|
|
Reference: |
|
- https://github.com/linhduongtuan/BLOOM-LORA/issues/5 |
|
- https://github.com/huggingface/peft/issues/276 |