---
license: apache-2.0
---

# LogicLLaMA Model Card

## Model details

LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules.
It is trained by fine-tuning the LLaMA2-7B model on the [MALLS-v0.1](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.

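For illustration, this is the kind of NL-FOL pair the model produces (a constructed example in the style of the task, not taken from the dataset):

```
NL:  All dogs are mammals.
FOL: ∀x (Dog(x) → Mammal(x))
```
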
**Model type:**
This repo contains the LoRA delta weights for direct-translation LogicLLaMA, which translates an NL statement into a FOL rule in a single pass.
We also provide delta weights for other modes:
- [naive correction LogicLLaMA](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0)

**License:**
Apache License 2.0

## Using the model

Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA
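
For a quick start, the snippet below is a minimal sketch of one common way to apply LoRA delta weights on top of a base model with the `transformers` and `peft` libraries. The base-checkpoint id, the repo-id placeholder, the prompt format, and the generation settings are assumptions made here for illustration, not the official pipeline; see the project page for the real usage.

```python
# Minimal sketch (not the official pipeline; see the project page above).
# Assumptions: a LLaMA2-7B base checkpoint on the Hub, and that these LoRA
# delta weights load via peft's standard PeftModel interface.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint
LORA_WEIGHTS = "<this-repo-id>"          # placeholder: substitute this repo's id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
# Apply the LoRA delta weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, LORA_WEIGHTS)
model.eval()

# Hypothetical prompt; the actual template is defined in the project repo.
prompt = (
    "Translate the following natural-language statement into a "
    "first-order logic rule:\nAll dogs are mammals.\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```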

**Primary intended uses:**
LogicLLaMA is intended to be used for research.

## Citation

```
@article{yang2023harnessing,
  title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
  author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
  journal={arXiv preprint arXiv:2305.15541},
  year={2023}
}
```