---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
license: other
---

# LoRA Model: Fine-Tuned for Argument to Propositional Logic Structure

## Model Description

This model is a LoRA (Low-Rank Adaptation) fine-tuned version of the LLaMA 3.1 8B Instruct model. It has been specifically trained to convert natural language arguments into structured logical forms (propositional logic), including premises, conclusions, and formal symbolic proofs.

Dataset: The model was trained on a custom dataset designed to map natural language arguments to logical structures. For details, see https://huggingface.co/datasets/ergotts/propositional-logic.
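
For quick inspection, the dataset can be loaded directly from the Hub with the `datasets` library (a standard loading sketch; split and column names are whatever the dataset repository defines):

```python
# Load the training dataset from the Hugging Face Hub.
# Split and column names depend on the dataset repository's layout.
from datasets import load_dataset

dataset = load_dataset("ergotts/propositional-logic")
print(dataset)
```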

## Training Details

- Base Model: LLaMA 3.1 8B Instruct
- LoRA Rank: 16
- LoRA Alpha: 32
- Epochs: 15
- Batch Size: 2
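
For reference, here is a minimal sketch of how these hyperparameters map onto a `peft` `LoraConfig`. This is illustrative only; the exact AutoTrain configuration (target modules, learning rate, and so on) is not reproduced in this card.

```python
# Illustrative sketch: the hyperparameters above expressed as a LoraConfig.
# The exact AutoTrain settings (target modules, learning rate, etc.) are
# not part of this card, so treat this as an approximation.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                   # LoRA rank
    lora_alpha=32,          # LoRA scaling factor
    task_type="CAUSAL_LM",  # causal language modeling
)
```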

# Example Usage

## For the following natural language argument:

"Eating a balanced diet and exercising regularly are key to maintaining good health. If you maintain good health, you'll have more energy throughout the day. John has been eating a balanced diet and exercising regularly for the past few months. Therefore, it's likely that John now has more energy throughout his day."

The fine-tuned model accurately broke the argument down into its premises and conclusion and produced a complete propositional logic proof, citing line numbers and inference rules.
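
To reproduce this, the argument is passed as a single user message (assuming the model and tokenizer are loaded as shown in the Usage section at the end of this card):

```python
# Build the chat input for the argument above; the loading code is in the
# Usage section below.
argument = (
    "Eating a balanced diet and exercising regularly are key to maintaining "
    "good health. If you maintain good health, you'll have more energy "
    "throughout the day. John has been eating a balanced diet and exercising "
    "regularly for the past few months. Therefore, it's likely that John now "
    "has more energy throughout his day."
)
messages = [{"role": "user", "content": argument}]
```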

## This model's output:

Premise 1: If someone eats a balanced diet and exercises regularly, they will maintain good health.
Premise 2: John eats a balanced diet and exercises regularly.
Conclusion: Therefore, John has more energy throughout his day.
 
Natural language proof: If a person eats a balanced diet and exercises regularly, they will maintain good health. We know that John eats a balanced diet and exercises regularly. Therefore, we can conclude that John maintains good health. From this, we can further deduce that John has more energy throughout his day. Thus, we can conclude that John has more energy throughout his day. 
I will use the following mapping for the premises: 
The letter 'a' represents the premise 'A person eats a balanced diet'.
The letter 'b' represents the premise 'A person exercises regularly'.
The letter 'c' represents the premise 'A person maintains good health'.
The letter 'd' represents the premise 'A person has more energy throughout his/her day'.
 
Propositional logic proof: 
1. Original Premise: (a ∧ b) → c
2. Original Premise: a ∧ b
3. Derived Step: c (From 1 and 2, Modus Ponens)
4. Original Premise: c → d
5. Conclusion: d (From 3 and 4, Modus Ponens)
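
As a sanity check (not part of the model's output), the validity of this proof can be confirmed semantically by enumerating all truth assignments and verifying that whenever the premises hold, the conclusion holds:

```python
# Truth-table check: do (a ∧ b) → c, a ∧ b, and c → d entail d?
from itertools import product

premises = [
    lambda a, b, c, d: not (a and b) or c,  # (a ∧ b) → c
    lambda a, b, c, d: a and b,             # a ∧ b
    lambda a, b, c, d: not c or d,          # c → d
]
conclusion = lambda a, b, c, d: d           # d

valid = all(
    conclusion(*vals)
    for vals in product([False, True], repeat=4)
    if all(p(*vals) for p in premises)
)
print(valid)  # True: the proof is semantically valid
```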


## For the same prompt, here is how Llama 3.1 8B Instruct performed:

The argument can be broken down as follows:

Premise 1: EBD ∧ ER → G (If you eat a balanced diet and exercise regularly, you'll maintain good health.)
Premise 2: G → ME (If you maintain good health, you'll have more energy throughout the day.)
Premise 3: EBD ∧ ER (John has been eating a balanced diet and exercising regularly for the past few months.)

Conclusion: ∴ MEL (It's likely that John now has more energy throughout his day.)

Here is the full proof in propositional logic form:

1. EBD ∧ ER → G
2. G → ME
3. EBD ∧ ER
4. ⊢ (EBD ∧ ER) ∧ (G → ME)
5. ⊢ ((EBD ∧ ER) → G) ∧ (G → ME)
6. ⊢ (EBD ∧ ER) → (G → ME)
7. ⊢ (EBD ∧ ER) → ((EBD ∧ ER) → ME)
8. ⊢ ((EBD ∧ ER) → ME)

The final answer is ∴ MEL, or "It's likely that John now has more energy throughout his day."

## And here is how Claude 3.5 Sonnet performed:

Premises:
P1: Eating a balanced diet and exercising regularly are key to maintaining good health.
P2: If you maintain good health, you'll have more energy throughout the day.
P3: John has been eating a balanced diet and exercising regularly for the past few months.
Conclusion:
C: It's likely that John now has more energy throughout his day.
Mapping of premises to letters for propositional logic:
A: Eating a balanced diet and exercising regularly
B: Maintaining good health
C: Having more energy throughout the day
J: John has been eating a balanced diet and exercising regularly
Natural Language Proof:

Eating a balanced diet and exercising regularly are key to maintaining good health. (Premise 1)
If you maintain good health, you'll have more energy throughout the day. (Premise 2)
John has been eating a balanced diet and exercising regularly for the past few months. (Premise 3)
Since John has been eating a balanced diet and exercising regularly, he is likely maintaining good health. (From 1 and 3)
Since John is likely maintaining good health, it's likely he has more energy throughout the day. (From 2 and 4)
Therefore, it's likely that John now has more energy throughout his day. (Conclusion)

Propositional Logic Proof:

A → B (Premise 1)
B → C (Premise 2)
J (Premise 3)
J → A (Definition of J)
J ∧ (J → A) (Conjunction of 3 and 4)
A (Modus Ponens on 4 and 5)
A ∧ (A → B) (Conjunction of 1 and 6)
B (Modus Ponens on 1 and 7)
B ∧ (B → C) (Conjunction of 2 and 8)
C (Modus Ponens on 2 and 9)

Therefore, C (John likely has more energy throughout his day)

## Conclusion

This fine-tuned model performs best at extracting premises and conclusions and producing a correct proof. Llama 3.1 8B Instruct extracts the premises but fails at the proof itself: its derivation cites no inference rules and never actually reaches the conclusion. Claude 3.5 Sonnet struggles with the nuance of connecting the specific individual, "John", to the general premises, and introduces extraneous lines into its proof.

# Usage

```python

# Load the base model, attach the LoRA adapter, and run a quick chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_model = "ergotts/llama_3.1_8b_prop_logic_ft"

# Load the base weights, then wrap them with the fine-tuned LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

model = model.to("cuda")
model.eval()

messages = [
    {"role": "user", "content": "Hi!"},
]

# Format the conversation with the model's chat template and generate.
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)
output_ids = model.generate(input_ids.to("cuda"), max_new_tokens=1000)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)

```
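
Optionally, the adapter can be merged into the base weights for adapter-free serving. `merge_and_unload()` is a standard `PeftModel` method; the output directory below is just an example name.

```python
# Optional: fold the LoRA weights into the base model so inference runs
# without the peft wrapper. The output path is illustrative.
merged = model.merge_and_unload()
merged.save_pretrained("llama-3.1-8b-prop-logic-merged")
tokenizer.save_pretrained("llama-3.1-8b-prop-logic-merged")
```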

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).