|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
- de |
|
- es |
|
- fr |
|
tags: |
|
- sft |
|
inference: false |
|
datasets: |
|
- OpenAssistant/oasst1 |
|
--- |
|
![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)
|
|
|
## I am still building the structure of these descriptions. |
|
|
|
Over time they will contain more and more content to help you find the best model for your purpose.
|
|
|
# falcon-40b-sft-top1-560 - GGUF |
|
- Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant) |
|
- Original model: [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560) |
|
|
|
|
|
|
|
# About GGUF format |
|
|
|
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.

A growing list of software uses it and can therefore run this model.

The core project built on the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
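
Most of these tools can load this model directly. As a minimal sketch of what that looks like in code, here is how the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings load a GGUF file; the file name is a placeholder for whichever quantized file you download from this repository:

```
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="falcon-40b-sft-top1-560.Q5_K_M.gguf",  # placeholder file name
    n_ctx=2048,  # the model was trained with a context length of 2048 tokens
)

# The required prompt format is described in the original model card below.
output = llm("<|prompter|>Hello!<|endoftext|><|assistant|>", max_tokens=64)
print(output["choices"][0]["text"])
```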
|
|
|
# Quantization variants |
|
|
|
A number of quantized files are available. Here is how to choose the best one for you:
|
|
|
# Legacy quants
|
|
|
Q4_0, Q4_1, Q5_0, Q5_1 and Q8_0 are `legacy` quantization types.

Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.

For example, Falcon 7B models cannot be quantized to K-quants.
|
|
|
# K-quants |
|
|
|
K-quants are based on the idea that quantizing different parts of the model affects quality in different ways. If you quantize some parts more aggressively and others less, you get a more capable model at the same file size, or a smaller file size and lower memory load at comparable quality.

So, if possible, use K-quants.

With a Q6_K quant you should find it really hard to detect a quality difference from the original model. In fact, asking the model the same question twice may produce bigger differences than the quantization does.
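
If you only want one specific variant, you can fetch a single file instead of cloning the whole repository. Here is a sketch using `huggingface_hub`; the `repo_id` and `filename` below are placeholders, so substitute the actual names shown in this repository's file list:

```
# Sketch: download one quantization variant with huggingface_hub
# (pip install huggingface_hub). repo_id and filename are placeholders.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="maddes8cht/OpenAssistant-falcon-40b-sft-top1-560-gguf",  # placeholder
    filename="falcon-40b-sft-top1-560.Q6_K.gguf",                     # placeholder
)
print(path)  # local path of the cached download
```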
|
|
|
|
|
|
|
# Original Model Card: |
|
# Open-Assistant Falcon 40B SFT OASST-TOP1 Model |
|
|
|
This model is a fine-tuning of TII's [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) LLM. |
|
It was trained with top-1 (high-quality) demonstrations of the OASST dataset (exported on May 6, 2023) with an effective batch size of 144 for ~7.5 epochs with LIMA-style dropout (p=0.3) and a context length of 2048 tokens.
|
|
|
## Model Details |
|
|
|
- **Finetuned from:** [tiiuae/falcon-40b](https://huggingface.co/tiiuae/falcon-40b)
|
- **Model type:** Causal decoder-only transformer language model |
|
- **Language:** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish)
|
- **Demo:** [Continuations for 250 random prompts](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Fchat-gpt%2F2023-04-11_gpt-3.5-turbo_lottery.json%0Ahttps%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-06-03_OpenAssistant_falcon-40b-sft-top1-560_sampling_noprefix2.json) |
|
- **Eval results:** [ilm-eval](https://tju01.github.io/ilm-eval/) |
|
- **Weights & Biases**: [Training log](https://wandb.ai/open-assistant/public-sft/runs/3lr77x4h) (Checkpoint: 560 steps) |
|
- **License:** Apache 2.0 |
|
- **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord) |
|
|
|
|
|
## Prompting |
|
|
|
Two special tokens are used to mark the beginning of user and assistant turns: |
|
`<|prompter|>` and `<|assistant|>`. Each turn ends with a `<|endoftext|>` token. |
|
|
|
Input prompt example: |
|
``` |
|
<|prompter|>What is a meme, and what's the history behind this word?<|endoftext|><|assistant|> |
|
``` |
|
The input ends with the `<|assistant|>` token to signal that the model should |
|
start generating the assistant reply. |
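
Since the turn template is easy to get subtly wrong, it can help to build it with a small helper. A minimal sketch; the function is ours, only the token layout comes from the description above:

```
def format_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Open-Assistant special tokens.

    The trailing <|assistant|> token signals the model to start its reply.
    """
    return f"<|prompter|>{user_message}<|endoftext|><|assistant|>"

print(format_prompt("What is a meme, and what's the history behind this word?"))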
|
|
|
## Configuration Details |
|
|
|
Model: |
|
``` |
|
falcon-40b: |
|
dtype: bf16 |
|
log_dir: "falcon_log_40b" |
|
learning_rate: 5e-6 |
|
model_name: "tiiuae/falcon-40b" |
|
deepspeed_config: configs/zero3_config_falcon.json |
|
output_dir: falcon |
|
weight_decay: 0.0 |
|
max_length: 2048 |
|
warmup_steps: 20 |
|
gradient_checkpointing: true |
|
gradient_accumulation_steps: 1 |
|
per_device_train_batch_size: 18 |
|
per_device_eval_batch_size: 10 |
|
eval_steps: 80 |
|
save_steps: 80 |
|
num_train_epochs: 8 |
|
save_total_limit: 4 |
|
use_flash_attention: false |
|
residual_dropout: 0.3 |
|
residual_dropout_lima: true |
|
sort_by_length: false |
|
save_strategy: steps |
|
``` |
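
Note that the effective batch size of 144 mentioned above is consistent with this configuration if the run used 8 GPUs (the device count is not part of the config itself): 18 sequences per device × 1 gradient-accumulation step × 8 devices = 144.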
|
|
|
Dataset: |
|
``` |
|
oasst-top1: |
|
datasets: |
|
- oasst_export: |
|
lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" # sft-8.0 |
|
input_file_path: 2023-05-06_OASST_labels.jsonl.gz |
|
val_split: 0.05 |
|
top_k: 1 |
|
```

<center>
|
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io) |
|
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911) |
|
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht) |
|
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht) |
|
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966) |
|
</center> |