Menouar committed on
Commit cd0927a
1 Parent(s): 44775b9

Update README.md

Files changed (1)
  1. README.md +31 -2
README.md CHANGED
@@ -40,13 +40,42 @@ Due to limited GPU resources, I only considered 20,000 samples for training.
 
  For more information, check my [Notebook](https://colab.research.google.com/drive/1e8t5Cj6ZDAOc-z3bweWuBxF8mQZ9IPsH?usp=sharing).
 
+ ```python
+ import torch
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer, pipeline
+
+
+ # Specify the model ID
+ peft_model_id = "Menouar/falcon7b-linear-equations"
+
+ # Load Model with PEFT adapter
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     peft_model_id,
+     device_map="auto",
+     torch_dtype=torch.float16
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
+
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
+
+ equation = "Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 ."
+
+ outputs = pipe(equation, max_new_tokens=172, do_sample=True, temperature=0.1, top_k=50, top_p=0.1,
+                eos_token_id=pipe.tokenizer.eos_token_id, pad_token_id=pipe.tokenizer.pad_token_id)
+
+ for seq in outputs:
+     print(f"{seq['generated_text']}")
+ ```
+
 
  ## Intended uses & limitations
  The model can solve any equation of the form ```Ay + ay + b + B = Dy + dy + c + C``` with integer coefficients ranging from -10 to 10. It cannot solve linear equations with more constants than A, a, b, B, D, d, c, C, nor equations whose constants are larger than 10 or smaller than -10. These limitations are due to the nature of the samples within the dataset and to the limited ability of Large Language Models (LLMs) to perform simple computations between numbers. The goal of this work is to demonstrate that fine-tuning an LLM on a specific dataset can yield excellent results for a specific task, as is the case with our new model compared to the original one.
 
- ## Training and evaluation data
+ ## Evaluation
 
- I will compile the evaluation data at a later time. For the moment, I’d like to present an example of a linear equation. In this example, this model, Bard, and BingChat are able to find the correct solution. However, other models, including ChatGPT3.5, Llama 70B, Mixtral, and Falcon-7b-instruct, do not arrive at the correct solution.
+ I will compile the evaluation section at a later time. For the moment, I’d like to present an example of a linear equation. On this example, this model, Bard, and BingChat find the correct solution, while other models, including ChatGPT 3.5, Llama 70B, Mixtral, and Falcon-7b-instruct, do not (a hand-worked check appears after the diff).
  ```
  Solve for y: 10 + 4y -9y +5 = 4 +8y - 2y + 8 .
  ```
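
As a quick sanity check of the equation form described in the README above (```Ay + ay + b + B = Dy + dy + c + C```), here is a minimal sketch of an exact solver. It is not part of the commit or the model card; the helper name `solve_linear` and the use of `fractions.Fraction` are illustrative assumptions, and it presumes the `y` coefficients differ between the two sides.

```python
from fractions import Fraction

def solve_linear(A: int, a: int, b: int, B: int,
                 D: int, d: int, c: int, C: int) -> Fraction:
    """Solve A*y + a*y + b + B = D*y + d*y + c + C exactly.

    Rearranging gives (A + a - D - d) * y = c + C - b - B,
    so y = (c + C - b - B) / (A + a - D - d), assuming A + a != D + d.
    """
    return Fraction(c + C - b - B, A + a - D - d)

# Sample equation from the Evaluation section:
# 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8
print(solve_linear(A=4, a=-9, b=10, B=5, D=8, d=-2, c=4, C=8))  # 3/11
```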
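Working the sample equation by hand agrees with this: 10 + 4y - 9y + 5 = 4 + 8y - 2y + 8 simplifies to 15 - 5y = 12 + 6y, hence 11y = 3 and y = 3/11, so a model that answers 3/11 has solved the example correctly.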