---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- en
tags:
- code
size_categories:
- n<1K
---

# DafnyBench: A Benchmark for Formal Software Verification

Dataset & code for our paper [DafnyBench: A Benchmark for Formal Software Verification]()
<br>

## Overview 📊

DafnyBench is the largest benchmark of its kind for training and evaluating machine learning systems for formal software verification, with 782 Dafny programs.
<br><br>


## Usage 💻

- <b>Dataset</b>: The DafnyBench dataset (782 programs) can be found in the `DafnyBench` directory, which contains the `ground_truth` set & the `hints_removed` set (with verification hints, i.e. annotations, removed).
- <b>Evaluation</b>: Evaluate LLMs on DafnyBench by asking models to fill in the missing hints in a test file from the `hints_removed` set and checking whether the reconstructed program can be verified by Dafny (see the sketch below). Please refer to the `eval` directory.
<br>
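Not part of the official scripts, but for quick orientation, here is a minimal Python sketch of how the two sets pair up. It assumes `DafnyBench/ground_truth` and `DafnyBench/hints_removed` subdirectories as described above, and the `*_no_hints.dfy` naming visible in the example command further down; adjust names and paths to the actual dataset layout.

```python
from pathlib import Path

root = Path("DafnyBench")  # or Path(os.environ["DAFNYBENCH_ROOT"]) / "DafnyBench"

# Assumed convention: hints_removed/<name>_no_hints.dfy pairs with ground_truth/<name>.dfy
for masked in sorted((root / "hints_removed").glob("*.dfy")):
    original = root / "ground_truth" / masked.name.replace("_no_hints", "")
    print(f"{masked.name} <-> {original.name} (exists={original.exists()})")
```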


<p align="center">
  <img src="assets/task_overview.jpg" width="600px"/>
</p>
<br><br>



## Set Up for Evaluation 🔧

1. Install Dafny on your machine by following [this tutorial](https://dafny.org/dafny/Installation)
2. Clone & `cd` into this repository
3. Set up the environment by running the following lines:
```
python -m venv stats
source stats/bin/activate
pip install -r requirements.txt
cd eval
```
4. Set the environment variable for the root directory of this repository:
```
export DAFNYBENCH_ROOT=
```
5. Set the environment variable for the path to the Dafny executable on your machine (for example, `/opt/homebrew/bin/Dafny`):
```
export DAFNY_PATH=
```
6. If you're evaluating an LLM through API access, set up the corresponding API key. For example:
```
export OPENAI_API_KEY=
```
7. You can evaluate an LLM on a single test program, for example:
```
python fill_hints.py --model "gpt-4o" --test_file "Clover_abs_no_hints.dfy" --feedback_turn 3 --dafny_path "$DAFNY_PATH"
```
or evaluate it on the entire dataset (a sketch of the underlying verification check follows these steps):
```
export model_to_eval='gpt-4o'
./run_eval.sh
```
<br>
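The per-file success criterion is simply whether Dafny accepts the reconstructed program. As a rough illustration (not the actual `fill_hints.py` logic), the check might look like the sketch below; it assumes `DAFNY_PATH` points at a Dafny 4.x binary that supports the `verify` subcommand (older releases use different flags).

```python
import os
import subprocess

def dafny_verifies(dfy_file: str, timeout_s: int = 60) -> bool:
    """Return True if Dafny verifies the given .dfy file (sketch, not the official script)."""
    dafny = os.environ["DAFNY_PATH"]  # e.g. /opt/homebrew/bin/Dafny
    try:
        result = subprocess.run(
            [dafny, "verify", dfy_file],  # Dafny 4.x CLI syntax (assumed here)
            capture_output=True, text=True, timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False
    # Dafny exits non-zero when verification fails or the file does not parse/resolve.
    return result.returncode == 0

# Example (hypothetical path to an LLM-reconstructed file):
# print(dafny_verifies("results/reconstructed_files/Clover_abs_no_hints.dfy"))
```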


## Contents 📁

- `DafnyBench`
  - A collection of 782 Dafny programs. Each program has a `ground_truth` version that is fully verified with Dafny & a `hints_removed` version that has hints (i.e. annotations) removed
- `eval`
  - Contains scripts to evaluate LLMs on DafnyBench
- `results`
  - `results_summary` - Dataframes that summarize LLMs' success on every test program
  - `reconstructed_files` - LLM outputs with hints filled back in
  - `analysis` - Contains a notebook for analyzing the results
<br>