---
license: apache-2.0
library_name: peft
tags:
  - trl
  - sft
  - generated_from_trainer
  - falcon
base_model: tiiuae/falcon-7b
model-index:
  - name: falcon7b-linear-equations
    results: []
datasets:
  - Menouar/LinearEquations
language:
  - en
---

# falcon7b-linear-equations

This model is a fine-tuned version of tiiuae/falcon-7b on a simple dataset of linear equations.

## Model description

The objective of this model is to test Falcon-7B's ability to solve linear equations after fine-tuning. The equations take the form:

Ay + ay + b + B = Dy + dy + c + C
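
For example, with A = 4, a = 2, b = 3, B = 2, D = 1, d = 1, c = 9 and C = 8 (an illustrative instance, not necessarily one from the dataset), the equation 4y + 2y + 3 + 2 = y + y + 9 + 8 simplifies to 6y + 5 = 2y + 17, so 4y = 12 and y = 3.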

This model was trained using TRL, quantized LoRA (QLoRA), and Flash Attention.

Due to limited GPU resources, I used only 20,000 samples for training.

For more information, check my Notebook.
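
As a rough sketch of that setup (the quantization settings, LoRA rank/alpha/dropout, and other details below are illustrative assumptions, not the exact values from the notebook):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig

# Load Falcon-7B in 4-bit (QLoRA-style) with Flash Attention 2.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # requires the flash-attn package
    device_map="auto",
)

# LoRA adapter configuration; r, lora_alpha and lora_dropout are assumptions.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"],
)
```

`query_key_value`, `dense`, `dense_h_to_4h`, and `dense_4h_to_h` are the names of Falcon's attention and MLP linear layers, which is why they are the usual LoRA targets for this architecture.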

## Intended uses & limitations

The model can solve any equation of the form Ay + ay + b + B = Dy + dy + c + C with integer coefficients ranging from -10 to 10. It cannot solve linear equations with more constants than the eight in this form (A, a, b, B, D, d, c, C), nor equations whose constants are larger than 10 or smaller than -10. These limitations stem from the nature of the samples in the dataset and from the limited ability of Large Language Models (LLMs) to perform even simple arithmetic. The goal of this work is to demonstrate that fine-tuning an LLM on a specific dataset can yield excellent results on a narrow task, as is the case with this model compared to the original one.
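
A minimal sketch of how one might run the model on such an equation (the adapter repo id, the prompt format, and the example equation below are illustrative assumptions, not necessarily the exact setup from the notebook):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tiiuae/falcon-7b"
adapter_id = "Menouar/falcon7b-linear-equations"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Assumed prompt format: a plain-text instruction followed by the equation.
prompt = "Solve the linear equation: 4y + 2y + 3 + 2 = y + y + 9 + 8"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```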

## Training and evaluation data

I will complete the evaluation data later, but for now, let’s show an example of a linear equation where this model finds the correct solution, unlike other models such as ChatGPT 3.5, Bard, Llama 70B, and Mixtral:

## Training procedure

For more information, check my Notebook.

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

- learning_rate: 0.0002
- train_batch_size: 42
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 84
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
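
In `transformers`/`trl` terms, these settings map roughly onto the sketch below (`model`, `tokenizer`, and `peft_config` refer to the objects from the earlier snippet; the dataset text field, sequence length, precision, and logging settings are assumptions):

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("Menouar/LinearEquations", split="train")

training_args = TrainingArguments(
    output_dir="falcon7b-linear-equations",
    learning_rate=2e-4,
    per_device_train_batch_size=42,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total effective batch size of 84
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=42,
    bf16=True,                       # assumption: bf16 mixed precision
    logging_steps=25,                # assumption
)

trainer = SFTTrainer(
    model=model,                     # quantized Falcon-7B from the earlier snippet
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,         # LoRA config from the earlier snippet
    tokenizer=tokenizer,
    dataset_text_field="text",       # assumption: name of the dataset's text column
    max_seq_length=512,              # assumption
    packing=False,
)

trainer.train()
```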

### Training results

The training results can be found here.

### Framework versions

- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1