---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- falcon
base_model: tiiuae/falcon-7b
model-index:
- name: falcon7b-linear-equations
results: []
datasets:
- Menouar/LinearEquations
language:
- en
pipeline_tag: text-generation
widget:
- text: "Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 ."
example_title: "Solve Linear Equations"
---
# falcon7b-linear-equations
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on a simple dataset of [linear equations](https://huggingface.co/datasets/Menouar/LinearEquations).
## Model description
The objective of this model is to test Falcon-7B's ability to solve linear equations after fine-tuning. The equations are of the form:
```
Ay + ay + b + B = Dy + dy + c + C
```
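Collecting the y-terms on one side gives the closed-form solution (a short derivation added here for clarity; it matches the formula the fine-tuned model reproduces in its answer below):

```
(A + a - D - d)y = (c + C - b - B)
 => y = (c + C - b - B) / (A + a - D - d),  provided A + a ≠ D + d
```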
This model was trained using TRL, QLoRA (LoRA combined with quantization), and Flash Attention.
Due to limited GPU resources, I only considered 20,000 samples for training.
For more information, check my [Notebook](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).
## Intended uses & limitations
The model can solve any equation of the form ```Ay + ay + b + B = Dy + dy + c + C``` with integer coefficients ranging from -10 to 10. It cannot solve linear equations that contain constants beyond A, a, b, B, D, d, c, C, nor equations whose coefficients fall outside the range [-10, 10]. These limitations stem from the construction of the dataset samples and from the limited ability of Large Language Models (LLMs) to perform simple arithmetic. The goal of this work is to demonstrate that fine-tuning an LLM on a specific dataset can yield excellent results on a specific task, as this model shows when compared to the original one.
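Because the solution has this closed form, answers are easy to check exactly. Here is a minimal Python sketch (added for illustration; it is not part of the training pipeline) using the standard-library `fractions` module:

```python
from fractions import Fraction

def solve_linear(A, a, b, B, D, d, c, C):
    """Exact solution of Ay + ay + b + B = Dy + dy + c + C."""
    slope = A + a - D - d  # combined y-coefficient
    if slope == 0:
        raise ValueError("No unique solution: the y-terms cancel")
    return Fraction(c + C - b - B, slope)

# The example used later in this card: 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8
print(solve_linear(A=4, a=-9, b=10, B=5, D=8, d=-2, c=4, C=8))  # 3/11
```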
## Training and evaluation data
I will complete the evaluation data later; for now, let's look at an example of a linear equation that this model solves correctly, while models such as ChatGPT 3.5, Llama 70B, and Mixtral fail (Bard and BingChat also reach the correct value, as the outputs below show).
```
Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 .
```
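For reference, the expected answer can be verified independently; here is a quick check with SymPy (illustrative, not part of the original card):

```python
import sympy as sp

y = sp.symbols("y")
# The example equation from above: 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8
print(sp.solve(sp.Eq(10 + 4*y - 9*y + 5, 4 + 8*y - 2*y + 8), y))  # [3/11]
```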

<strong>Mixtral 8x7B</strong><br>
Let's first simplify the equation by combining like terms:<br>
10 - 5y + 5 = 4 + 6y<br>
Now, let's get all the y terms on one side and constants on the other side. Subtract 6y from both sides:<br>
10 - 5y - 6y + 5 = 4 + 6y - 6y<br>
This simplifies to:<br>
15 - 11y = 4<br>
Next, subtract 15 from both sides to isolate the y term:<br>
15 - 15 - 11y = 4 - 15<br>
This simplifies to:<br>
-11y = -11<br>
Finally, divide both sides by -11 to solve for y:<br>
y = -11 / -11<br>
So,<br>
<strong><span style="color:red">
y = 1
</span></strong>.

<strong>Llama 70B</strong><br>
To solve for y, we need to isolate the y term on one side of the equation.<br>
First, we can combine the like terms:<br>
10 + 4y - 9y + 5 = 4 + 8y - 2y + 8<br>
This simplifies to:<br>
10 - 5y + 5 = 4 + 6y<br>
Next, we can subtract 4 from both sides:<br>
6 - 5y + 5 = 6y<br>
Now, we can isolate the y term by subtracting 5 from both sides:<br>
-5y + 5 = 6y - 5<br>
This simplifies to:<br>
-5y = 11<br>
Finally, we can divide both sides by -5 to solve for y:<br>
y = -11/5<br>
Therefore, the value of y is <strong><span style="color:red">
-11/5
</span></strong>.

<strong>ChatGPT 3.5</strong><br>
To solve for y in the given equation 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8, follow these steps:<br>
Combine like terms on each side: (4y - 9y + 5) + 10 = (6y + 12 + 8y - 2y)<br>
Combine the y terms and constants separately: -5y + 15 = 12y + 12<br>
Move all terms involving y to one side and the constants to the other side: -5y - 12y = 12 - 15<br>
Combine like terms: -17y = -3<br>
Divide by -17: y = 3/17<br>
Therefore, the solution to the equation 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8 is <strong><span style="color:red">
y = 3/17
</span></strong>.

<strong>Bard</strong><br>
Sure, I'd be happy to help you solve for y. Here is the solution:<br>
10 + 4y - 9y +5 = 4 +8y - 2y + 8<br>
First, combine like terms on each side of the equation.<br>
-5y + 15 = 12 + 6y<br>
Next, subtract 6y from both sides of the equation.<br>
-11y + 15 = 12<br>
Finally, subtract 15 from both sides of the equation to isolate y.<br>
-11y = -3<br>
Divide both sides of the equation by -11 to find y.<br>
<strong><span style="color:green">
y = 0.2727
</span></strong>.

<strong>BingChat</strong><br>
Combine like terms on both sides:<br>
10 + 4y - 9y + 5 = 15 - 5y<br>
4 + 8y - 2y + 8 = 12 + 6y<br>
This gives us:<br>
15 - 5y = 12 + 6y<br>
Solving for y in one step:<br>
<strong><span style="color:green">
y = 3 / 11
</span></strong>.

<strong>Falcon7B-linear-equations</strong><br>
The equation is in the form of ay + b = dy + c where:<br>
a = 4 - 9 = -5<br>
b = 10 + 5 = 15<br>
d = 8 - 2 = 6<br>
c = 4 + 8 = 12<br>
The solution is y = (c - b)/(a - d) if a ≠ d <br>
12 - 15 = -3<br>
-5 - 6 = -11<br>
y = -3 / -11<br>
The fraction -3 / -11 = 3 / 11.<br>
The solution is <strong><span style="color:green">
y = 3 / 11
</span></strong>.
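To try the model yourself, here is a minimal inference sketch. It assumes the adapter in this repository loads with PEFT's `AutoPeftModelForCausalLM` and reuses the widget prompt above; adjust dtype and device settings for your hardware:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_id = "Menouar/falcon7b-linear-equations"
# If the tokenizer is not stored with the adapter, load it from tiiuae/falcon-7b instead.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 ."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```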
## Training procedure
For more information, check my [Notebook](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 42
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 84
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
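A hedged reconstruction of how these hyperparameters could map onto TRL's `SFTTrainer` with a LoRA setup (the LoRA ranks, dataset column name, and sampling details are assumptions, not taken from this card; the API shown matches trl releases contemporary with the framework versions listed below, and the notebook has the actual configuration):

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# 20,000 samples were used for training; the exact sampling strategy is in the notebook.
dataset = load_dataset("Menouar/LinearEquations", split="train").select(range(20_000))

peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed values, not from this card
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="falcon7b-linear-equations",
    learning_rate=2e-4,
    per_device_train_batch_size=42,
    gradient_accumulation_steps=2,  # total train batch size: 84
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=42,
)

trainer = SFTTrainer(
    model="tiiuae/falcon-7b",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # assumed column name; adjust to the dataset schema
)
trainer.train()
```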
### Training results
The training results can be found [here](https://huggingface.co/Menouar/falcon7b-linear-equations/tensorboard).
### Framework versions
- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |