|
---
license: cc-by-nc-4.0
datasets:
- tatsu-lab/alpaca
language:
- en
---
|
# Eluwa: A Conversational LoRA for Facebook's OPT 2.7b Architecture |
|
|
|
![logo](https://huggingface.co/BackyardLabs/Eluwa/resolve/main/ELUWA-LOGO.jpg "baaaaaaaaaaaa") |
|
|
|
Eluwa is a Low-Rank Adapter (LoRA) for Facebook's OPT 2.7b, fine-tuned on the Stanford Alpaca dataset.

The idea was that OPT 2.7b was too curt (and, frankly, a bit of an asshole) for a model of its size, and that we could fine-tune it the same way Alpaca fine-tuned LLaMA.
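In `peft` terms, that setup looks roughly like the sketch below. The rank, alpha, and target modules are illustrative assumptions, not the exact configuration Eluwa was trained with:

```python
# Rough sketch of LoRA fine-tuning on OPT 2.7b with the standard
# peft + transformers APIs. Hyperparameters are illustrative,
# not the exact ones used for Eluwa.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")

config = LoraConfig(
    r=8,                                  # adapter rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # OPT attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter trains
# ...then train on tatsu-lab/alpaca with your usual Trainer loop.
```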
|
|
|
This repository contains the Eluwa 2.7b 2-epoch model, which is a significant improvement in question-answering ability over the default OPT 2.7b model.
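To try it out, load the adapter on top of the base model with `peft`. A minimal sketch; the Alpaca-style prompt format is an assumption based on the training data:

```python
# Minimal usage sketch: load the Eluwa LoRA on top of facebook/opt-2.7b.
# Assumes a CUDA GPU with transformers, peft, and accelerate installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-2.7b", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
model = PeftModel.from_pretrained(base, "BackyardLabs/Eluwa")

# Alpaca-style prompt; an assumption based on the training data.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA is in one paragraph.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```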
|
Below are the results of Vicuna-style testing: 80 questions across various categories, with each response rated by GPT-4 (a sketch of the grading loop follows the table).
|
|
|
| Category       | OPT 2.7b base | Eluwa 2.7b 1000 iter | Eluwa 2.7b 2 epoch |
|----------------|---------------|----------------------|--------------------|
| Generic        | 22            | 44                   | 57                 |
| Knowledge      | 35            | 60                   | 72                 |
| Roleplay       | 29            | 38                   | 58                 |
| Common sense   | 20            | 48                   | 50                 |
| Fermi          | 4             | 28                   | 23                 |
| Counterfactual | 5             | 24                   | 23                 |
| Coding         | 2             | 7                    | 7                  |
| Math           | 0             | 3                    | 3                  |
| Writing        | 8             | 19                   | 19                 |
| Total          | 125           | 271                  | 312                |
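The grading loop amounts to something like this sketch; the rubric prompt and 1-10 scale are stand-ins, not the exact Vicuna evaluation prompt:

```python
# Hedged sketch of GPT-4-as-judge scoring, using the openai>=1.0 client.
# The system prompt and scale are illustrative, not the exact Vicuna rubric.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_answer(question: str, answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "Rate the assistant's answer to the question "
                           "on a 1-10 scale. Reply with the score only.",
            },
            {"role": "user", "content": f"Question: {question}\n\nAnswer: {answer}"},
        ],
    )
    return response.choices[0].message.content
```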
|
|
|
|
|
Response times are fast: on my GTX 1080 Ti + Ryzen 3600, it generates between 1.14 and 3.77 tokens/s.
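If you want to check throughput on your own hardware, a rough measurement looks like this (it counts generated tokens over wall-clock time):

```python
# Rough tokens-per-second measurement for any loaded causal LM.
import time

def tokens_per_second(model, tokenizer, prompt: str, max_new_tokens: int = 128) -> float:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    start = time.perf_counter()
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    elapsed = time.perf_counter() - start
    generated = output.shape[1] - inputs["input_ids"].shape[1]
    return generated / elapsed
```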