datasets:
- tiiuae/falcon-refinedweb
inference: false
language:
- en
- de
- es
- fr
license: unknown
model_creator: Technology Innovation Institute
model_link: https://huggingface.co/tiiuae/falcon-180B-chat
model_name: Falcon 180B Chat
model_type: falcon
quantized_by: TheBloke
TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)
Falcon 180B Chat - GPTQ
- Model creator: Technology Innovation Institute
- Original model: Falcon 180B Chat
Description
This repo contains GPTQ model files for Technology Innovation Institute's Falcon 180B Chat.
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
EXPERIMENTAL
These are experimental first GPTQs for Falcon 180B. They have not yet been tested.
Transformers version 4.33.0 is required.
In order to make them, a small change was needed to AutoGPTQ to add support for the new model_type name falcon. You will need to merge this PR before you can attempt to load them in AutoGPTQ: https://github.com/PanQiWei/AutoGPTQ/pull/326
Once this change has been made, they should be usable just like any other GPTQ model. You can try the example Transformers Python code later in this README, or try loading them directly from AutoGPTQ.
I believe you will need 2 x 80GB GPUs (or 4 x 48GB) to load the 4-bit models, and probably the 3-bit ones as well.
Assuming the quants finish OK (and if you're reading this message, they did!) I will test them during the day on 7th September and update this notice with my findings.
SPLIT FILES
Due to the HF 50GB file limit, and the fact that GPTQ does not currently support sharding, I have had to split the model.safetensors file.
To join it:
Linux and macOS:
cat model.safetensors-split-* > model.safetensors && rm model.safetensors-split-*
Windows command line:
COPY /B model.safetensors-split-a + model.safetensors-split-b model.safetensors
del model.safetensors-split-a model.safetensors-split-b
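If you prefer, the join can also be scripted. Below is a minimal cross-platform Python sketch, assuming the parts follow the model.safetensors-split-* naming used in the Linux command above.
# Minimal sketch: concatenate the split parts into model.safetensors.
# Assumes the parts match the model.safetensors-split-* pattern shown above.
import glob
import shutil
parts = sorted(glob.glob("model.safetensors-split-*"))
with open("model.safetensors", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)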
Repositories available
- GPTQ models for GPU inference, with multiple quantisation parameter options.
- 2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference
- Technology Innovation Institute's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions
Prompt template: Falcon
User: {prompt}
Assistant:
Provided files and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the main branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
Explanation of GPTQ parameters
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as desc_act. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is the default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
---|---|---|---|---|---|---|---|---|---|
main | 4 | None | Yes | 0.1 | wikitext | 2048 | 92.74 GB | No | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
gptq-3bit--1g-actorder_True | 3 | None | Yes | 0.1 | wikitext | 2048 | 70.54 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
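For orientation only, the parameters in the table above map onto AutoGPTQ's BaseQuantizeConfig roughly as follows. This is an illustrative sketch, not the exact recipe used to produce these files.
# Illustrative mapping of the table columns onto AutoGPTQ's quantisation config.
# Not the exact command used to create the provided files.
from auto_gptq import BaseQuantizeConfig
quantize_config = BaseQuantizeConfig(
    bits=4,            # "Bits" column
    group_size=-1,     # "GS" column; -1 means no group size ("None")
    desc_act=True,     # "Act Order" column
    damp_percent=0.1,  # "Damp %" column
)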
How to download from branches
- In text-generation-webui, you can add :branch to the end of the download name, e.g. TheBloke/Falcon-180B-Chat-GPTQ:gptq-3bit--1g-actorder_True
- With Git, you can clone a branch with:
git clone --single-branch --branch gptq-3bit--1g-actorder_True https://huggingface.co/TheBloke/Falcon-180B-Chat-GPTQ
- In Python Transformers code, the branch is the revision parameter; see below.
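For example, the following untested sketch uses huggingface_hub to download a specific branch from Python; the local_dir path is only an example.
# Sketch: download a specific branch with huggingface_hub (assumes it is installed).
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id="TheBloke/Falcon-180B-Chat-GPTQ",
    revision="gptq-3bit--1g-actorder_True",  # branch name from the table above
    local_dir="Falcon-180B-Chat-GPTQ",       # example destination directory
)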
How to easily download and use this model in text-generation-webui.
Please make sure you're using the latest version of text-generation-webui.
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
- Click the Model tab.
- Under Download custom model or LoRA, enter TheBloke/Falcon-180B-Chat-GPTQ.
- To download from a specific branch, enter for example TheBloke/Falcon-180B-Chat-GPTQ:gptq-3bit--1g-actorder_True - see Provided Files above for the list of branches for each option.
- Click Download.
- The model will start downloading. Once it's finished it will say "Done".
- In the top left, click the refresh icon next to Model.
- In the Model dropdown, choose the model you just downloaded: Falcon-180B-Chat-GPTQ
- The model will automatically load, and is now ready for use!
- If you want any custom settings, set them and then click Save settings for this model followed by Reload the Model in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file quantize_config.json (a sketch for inspecting it follows this list).
- Once you're ready, click the Text Generation tab and enter a prompt to get started!
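If you want to see exactly which parameters were applied to the branch you downloaded, you can inspect that file yourself. A small sketch, run from the downloaded model directory:
# Print the quantisation parameters (bits, group size, desc_act, damp %, etc.)
# that loaders such as text-generation-webui read automatically.
import json
with open("quantize_config.json") as f:
    print(json.dumps(json.load(f), indent=2))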
How to use this GPTQ model from Python code
Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ compiled from source with a patch.
pip3 install "transformers>=4.33.0" "optimum>=1.12.0"
pip3 uninstall -y auto-gptq
git clone -b TB_Latest_Falcon https://github.com/TheBloke/AutoGPTQ
cd AutoGPTQ
pip3 install .
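As a quick sanity check that the patched stack is in place, you can print the installed versions; this is just an illustrative snippet.
# Verify the installed versions match the requirements above.
from importlib.metadata import version
print(version("transformers"))  # expect >= 4.33.0
print(version("optimum"))       # expect >= 1.12.0
print(version("auto-gptq"))     # should be the build from the patched branch above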
You then need to manually download the repo so that the split model.safetensors file can be joined locally
I recommend using my fast download script
git clone https://github.com/TheBlokeAI/AIScripts
python3 AIScripts/hub_download.py TheBloke/Falcon-180B-Chat-GPTQ Falcon-180B-Chat-GPTQ --branch main # change branch if you want to use the 3-bit model instead
Now join the files
cd Falcon-180B-Chat-GPTQ
# Windows users: see the command to use in the Description at the top of this README
cat model.safetensors-split-* > model.safetensors && rm model.safetensors-split-*
And then finally you can run the following code
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "/path/to/Falcon-180B-Chat-GPTQ" # change this to the path you downloaded the model to
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''User: {prompt}
Assistant: '''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
Compatibility
The provided files have not yet been tested. They are expected to work with AutoGPTQ, or via Transformers, as long as Transformers 4.33.0 or later is installed and AutoGPTQ is updated as described above.
Hugging Face Text Generation Inference (TGI) is compatible with all GPTQ models, but hasn't yet been tested with these files. Let me know if it works!
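If you do try TGI, a typical way to query a running server is its /generate REST endpoint. The sketch below is untested with these files and assumes a TGI server hosting this model is already running locally with GPTQ quantisation enabled.
# Untested sketch: query a locally running TGI server hosting this model.
import requests
response = requests.post(
    "http://localhost:8080/generate",  # assumed local TGI endpoint
    json={
        "inputs": "User: Tell me about AI\nAssistant: ",
        "parameters": {"max_new_tokens": 256, "temperature": 0.7},
    },
)
print(response.json()["generated_text"])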
Discord
For further support, and discussions on these models and AI in general, join us at:
Thanks, and how to contribute.
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Aemon Algiz.
Patreon special mentions: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
Original model card: Technology Innovation Institute's Falcon 180B Chat
Falcon-180B-Chat
Falcon-180B-Chat is a 180B parameters causal decoder-only model built by TII based on Falcon-180B and finetuned on a mixture of Ultrachat, Platypus and Airoboros. It is made available under the Falcon-180B TII License and Acceptable Use Policy.
Paper coming soon.
To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading this great blogpost from HF or this one from the release of the 40B!
Note that since the 180B is larger than what can easily be handled with transformers + accelerate, we recommend using Text Generation Inference.
You will need at least 400GB of memory to swiftly run inference with Falcon-180B (the 180B parameters alone take roughly 360GB in bfloat16, before activations and the KV cache).
Why use Falcon-180B-chat?
- You are looking for a ready-to-use chat/instruct model based on Falcon-180B.
- It is the best open-access model currently available, and one of the best models overall. Falcon-180B outperforms LLaMA-2, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard.
- It features an architecture optimized for inference, with multiquery (Shazeer et al., 2019).
- It is made available under a permissive license allowing for commercial use.
This is a Chat model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-180B.
Looking for a smaller, less expensive model? Falcon-7B-Instruct and Falcon-40B-Instruct are Falcon-180B-Chat's little brothers!
Falcon LLMs require PyTorch 2.0 for use with transformers!
Model Card for Falcon-180B-Chat
Model Details
Model Description
- Developed by: https://www.tii.ae;
- Model type: Causal decoder-only;
- Language(s) (NLP): English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
- License: Falcon-180B TII License and Acceptable Use Policy.
Model Source
- Paper: coming soon.
Uses
See the acceptable use policy.
Direct Use
Falcon-180B-Chat has been finetuned on a chat dataset.
Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Bias, Risks, and Limitations
Falcon-180B-Chat is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
Recommendations
We recommend that users of Falcon-180B-Chat develop guardrails and take appropriate precautions for any production use.
How to Get Started with the Model
To run inference with the model in full bfloat16 precision you need approximately 8xA100 80GB or equivalent.
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-180b-chat"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Training Details
Falcon-180B-Chat is based on Falcon-180B.
Training Data
Falcon-180B-Chat is finetuned on a mixture of Ultrachat, Platypus and Airoboros.
The data was tokenized with the Falcon tokenizer.
Evaluation
Paper coming soon.
See the OpenLLM Leaderboard for early results.
Technical Specifications
Model Architecture and Objective
Falcon-180B-Chat is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with the following differences:
- Positional embeddings: rotary (Su et al., 2021);
- Attention: multiquery (Shazeer et al., 2019) and FlashAttention (Dao et al., 2022);
- Decoder-block: parallel attention/MLP with two layer norms.
For multiquery, we are using an internal variant which uses independent key and values per tensor parallel degree.
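As a rough illustration of the parallel attention/MLP layout described above, here is a simplified sketch; this is not TII's actual implementation, and attn and mlp stand in for the real submodules.
# Simplified sketch of a parallel attention/MLP decoder block with two layer norms.
# Not TII's implementation; attn and mlp are placeholder submodules.
import torch.nn as nn

class ParallelDecoderBlock(nn.Module):
    def __init__(self, d_model, attn, mlp):
        super().__init__()
        self.ln_attn = nn.LayerNorm(d_model)  # layer norm feeding attention
        self.ln_mlp = nn.LayerNorm(d_model)   # layer norm feeding the MLP
        self.attn = attn
        self.mlp = mlp

    def forward(self, x):
        # Attention and MLP are applied in parallel to the same input and
        # summed with the residual, rather than stacked sequentially.
        return x + self.attn(self.ln_attn(x)) + self.mlp(self.ln_mlp(x))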
Hyperparameter | Value | Comment |
---|---|---|
Layers | 80 | |
d_model | 14848 | |
head_dim | 64 | Reduced to optimise for FlashAttention |
Vocabulary | 65024 | |
Sequence length | 2048 | |
Compute Infrastructure
Hardware
Falcon-180B-Chat was trained on AWS SageMaker, on up to 4,096 A100 40GB GPUs in P4d instances.
Software
Falcon-180B-Chat was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
Citation
Paper coming soon. In the meantime, you can use the following information to cite:
@article{falcon,
title={The Falcon Series of Language Models: Towards Open Frontier Models},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
To learn more about the pretraining dataset, see the RefinedWeb paper.
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}