Commit f047df5 by evanlohn (parent: 65e48a9): README.md +11 -0
---
license: apache-2.0
---
First install [Lean 4](https://leanprover-community.github.io/get_started.html), then clone this repo. The outer `LeanSrc` folder is a [Lean project](https://leanprover-community.github.io/install/project.html); you can open that folder directly in VSCode and check that the proofs in `LeanSrc/Sorts.lean` type-check.
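
To give a flavor of what these proofs look like, here is a minimal sketch of a property-style theorem in Lean 4; the theorem and names are illustrative, not taken from `Sorts.lean`.

```lean
-- Illustrative only: a small executable-code property of the kind such a
-- benchmark contains, here proving that list reversal preserves length.
theorem rev_length (xs : List α) : xs.reverse.length = xs.length := by
  simp
```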

The main CodeProps-Bench folder handles extracting the benchmark and calculating baselines.

After cloning the repo, you will need to install [Lean REPL](https://github.com/leanprover-community/repl). By default, our scripts expect the `repl` folder to be directly inside the CodeProps-Bench folder. Run `lake build` from within the `repl` folder.

The `extract.py` script is used only to create the JSON-formatted benchmark.

The `baseline.py` script contains the code we used to get our baseline results. It shows how to interact with Lean REPL programmatically, although some interactions are still somewhat buggy: the REPL occasionally sends, e.g., an extra newline or an oddly formatted message, which forces our script to restart the REPL.

We ran our baselines using [LLMStep](https://github.com/wellecks/llmstep). If you would like to use our setup, note that our code also includes a natural place to plug in your own function for generating tactics given the goal and file context (see `get_tactics_llmstep` in `baseline.py`). We modified the LLMStep server to return the average log-probability of each suggestion so that we could implement best-first search; we will publish our fork of that soon as well.
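
The best-first search described above can be sketched as follows. `suggest` and `apply_tactic` are hypothetical stand-ins (not the actual `baseline.py` API): `suggest` returns `(tactic, avg_logprob)` pairs as our modified LLMStep server would, and `apply_tactic` returns the resulting goal, `None` when no goals remain, or `"error"` on failure.

```python
import heapq

def best_first_search(goal, suggest, apply_tactic, max_steps=100):
    """Expand the search node with the best cumulative score first,
    scoring each tactic by its average suggestion log-probability."""
    counter = 0  # tie-breaker so heapq never compares goal objects
    frontier = [(0.0, counter, goal, [])]  # (neg. score, tie, goal, tactic trace)
    while frontier and max_steps > 0:
        max_steps -= 1
        neg_score, _, g, trace = heapq.heappop(frontier)
        if g is None:
            return trace  # no remaining goals: proof found
        for tactic, avg_logprob in suggest(g):
            new_goal = apply_tactic(g, tactic)
            if new_goal == "error":
                continue  # tactic failed; skip this branch
            counter += 1
            # Higher log-probability => smaller key => popped earlier.
            heapq.heappush(
                frontier,
                (neg_score - avg_logprob, counter, new_goal, trace + [tactic]),
            )
    return None  # search budget exhausted or frontier empty
```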