---
datasets:
  - tiiuae/falcon-refinedweb
language:
  - en
inference: false
license: apache-2.0
---

✨ Falcon-40B-Instruct 8Bit

INFO: This model is Falcon-40B-Instruct quantized to 8-bit with bitsandbytes. Downloading the pre-quantized weights saves around 40 GB compared with downloading the full-precision model and quantizing it yourself. bitsandbytes quantization only runs on GPUs, so you will need enough GPU memory to hold the full quantized model.
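As a minimal sketch, the pre-quantized checkpoint loads like any transformers model with an 8-bit quantization config. The model id below is the upstream full-precision repository; substitute this repository's id to pull the pre-quantized weights.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumes a recent transformers with bitsandbytes and accelerate installed.
model_id = "tiiuae/falcon-40b-instruct"  # substitute this repository's id for the 8-bit weights

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",       # bitsandbytes only runs on GPU
    trust_remote_code=True,  # Falcon ships custom modelling code
)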

Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by TII, based on Falcon-40B and finetuned on a mixture of Baize data. It is made available under the Apache 2.0 license.

Paper coming soon 😊.

Why use Falcon-40B-Instruct?

💬 This is an instruct model, which may not be ideal for further finetuning. If you are interested in building your own instruct/chat model, we recommend starting from Falcon-40B.

💸 Looking for a smaller, less expensive model? Falcon-7B-Instruct is Falcon-40B-Instruct's little brother!

from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # bfloat16 halves memory relative to float32
    trust_remote_code=True,      # Falcon ships custom modelling code
    device_map="auto",           # spread the model across available GPUs
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Model Card for Falcon-40B-Instruct

Model Details

Model Description

  • Developed by: https://www.tii.ae;
  • Model type: Causal decoder-only;
  • Language(s) (NLP): English and French;
  • License: Apache 2.0;
  • Finetuned from model: Falcon-40B.

Model Source

  • Paper: coming soon.

Uses

Direct Use

Falcon-40B-Instruct has been finetuned on a chat dataset.
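No official prompt template is documented for this model; the dialogue convention below is an assumption commonly used with Falcon-Instruct models, not a specification.

# Hypothetical prompt format: no official template is documented for this model.
prompt = (
    "User: How many hours a day do giraffes sleep?\n"
    "Assistant:"
)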

Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

Bias, Risks, and Limitations

Falcon-40B-Instruct is mostly trained on English data and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

Recommendations

We recommend that users of Falcon-40B-Instruct develop guardrails and take appropriate precautions for any production use.

How to Get Started with the Model

from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-40b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # bfloat16 halves memory relative to float32
    trust_remote_code=True,      # Falcon ships custom modelling code
    device_map="auto",           # spread the model across available GPUs
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Girafatron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Training Details

Training Data

Falcon-40B-Instruct was finetuned on 150M tokens from Baize mixed with 5% of RefinedWeb data.

The data was tokenized with the Falcon-7B/40B tokenizer.
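As an illustrative check (not part of the training pipeline), the tokenizer can be loaded and inspected directly:

from transformers import AutoTokenizer

# Illustrative only: inspect the Falcon tokenizer used for the training data.
tok = AutoTokenizer.from_pretrained("tiiuae/falcon-40b-instruct")
print(tok.vocab_size)                       # 65024, matching the architecture table below
print(tok("Hello, Girafatron!").input_ids)  # token ids under the Falcon vocabulary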

Evaluation

Paper coming soon.

See the OpenLLM Leaderboard for early results.

Technical Specifications

For more information about pretraining, see Falcon-40B.

Model Architecture and Objective

Falcon-40B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
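Concretely, the causal language modeling objective scores each position against the following token. A minimal PyTorch sketch with toy shapes (illustrative only, not the actual training code):

import torch
import torch.nn.functional as F

# Toy example of the next-token prediction loss.
vocab = 65024
logits = torch.randn(1, 5, vocab)         # (batch, seq, vocab) model outputs
tokens = torch.randint(0, vocab, (1, 5))  # input token ids
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),    # predictions at positions 0..T-2
    tokens[:, 1:].reshape(-1),            # targets are the tokens at 1..T-1
)
print(loss)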

The architecture is broadly adapted from the GPT-3 paper (Brown et al., 2020), with differences including rotary positional embeddings, multiquery attention, and FlashAttention.

For multiquery, Falcon-40B uses an internal variant with independent keys and values per tensor-parallel degree.

| Hyperparameter  | Value | Comment                                |
|-----------------|-------|----------------------------------------|
| Layers          | 60    |                                        |
| d_model         | 8192  |                                        |
| head_dim        | 64    | Reduced to optimise for FlashAttention |
| Vocabulary      | 65024 |                                        |
| Sequence length | 2048  |                                        |
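For intuition, here is a minimal PyTorch sketch of standard multiquery attention (Shazeer, 2019), in which all query heads share a single key/value head. This is illustrative only and is not TII's internal tensor-parallel variant:

import torch
import torch.nn.functional as F

def multiquery_attention(x, wq, wk, wv, n_heads):
    # Toy multiquery attention: n_heads query heads share one K/V head.
    b, t, d = x.shape
    head_dim = d // n_heads
    q = (x @ wq).view(b, t, n_heads, head_dim).transpose(1, 2)  # (b, heads, t, hd)
    k = (x @ wk).view(b, t, 1, head_dim).transpose(1, 2)        # one shared K head
    v = (x @ wv).view(b, t, 1, head_dim).transpose(1, 2)        # one shared V head
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5          # broadcasts over heads
    mask = torch.triu(torch.ones(t, t, dtype=torch.bool), 1)    # causal mask
    scores = scores.masked_fill(mask, float("-inf"))
    out = F.softmax(scores, dim=-1) @ v                         # (b, heads, t, hd)
    return out.transpose(1, 2).reshape(b, t, d)

# Smoke test with random weights (illustrative shapes only).
d, heads = 64, 4
x = torch.randn(2, 8, d)
out = multiquery_attention(x, torch.randn(d, d), torch.randn(d, d // heads),
                           torch.randn(d, d // heads), heads)
print(out.shape)  # torch.Size([2, 8, 64])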

Compute Infrastructure

Hardware

Falcon-40B-Instruct was trained on AWS SageMaker, on 64 A100 40GB GPUs in P4d instances.

Software

Falcon-40B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
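As a toy illustration of how 3D parallelism factors a set of GPUs into data, pipeline, and tensor axes (the actual Gigatron configuration is not public, so the 4x4x4 split below is hypothetical):

def grid_coords(rank, dp, pp, tp):
    """Map a flat GPU rank to (data, pipeline, tensor) coordinates, tensor fastest."""
    assert 0 <= rank < dp * pp * tp
    return (rank // (pp * tp), (rank // tp) % pp, rank % tp)

# 64 GPUs as 4-way data x 4-way pipeline x 4-way tensor parallelism (hypothetical)
print(grid_coords(37, dp=4, pp=4, tp=4))  # -> (2, 1, 1)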

Citation

Paper coming soon 😊.

License

Falcon-40B-Instruct is made available under the Apache 2.0 license.

Contact

[email protected]