---
license: apache-2.0
task_categories:
  - text-generation
  - text2text-generation
language:
  - en
tags:
  - code
size_categories:
  - n<1K
---

# DafnyBench: A Benchmark for Formal Software Verification

Dataset & code for our paper *DafnyBench: A Benchmark for Formal Software Verification*.

## Overview 📊

DafnyBench is the largest benchmark of its kind for training and evaluating machine learning systems for formal software verification, comprising 782 Dafny programs.
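Since this card lives on the Hugging Face Hub, the programs may also be browsable with the `datasets` library. A minimal sketch, with a placeholder repo id (substitute this repository's actual id):

```python
from datasets import load_dataset

# "<hf-user>/DafnyBench" is a placeholder for this repo's actual id
ds = load_dataset("<hf-user>/DafnyBench")
print(ds)  # lists the available splits and their sizes
```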

## Usage 💻

- **Dataset**: The 782 programs can be found in the `DafnyBench` directory, which contains the `ground_truth` set & the `hints_removed` set (with compiler hints, i.e. annotations such as loop invariants and assertions, removed).
- **Evaluation**: Evaluate LLMs on DafnyBench by asking a model to fill in the missing hints in a test file from the `hints_removed` set and checking whether the reconstructed program can be verified by Dafny (see the sketch after this list). Please refer to the `eval` directory.
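For concreteness, the pass/fail check amounts to running the Dafny verifier on the reconstructed file and inspecting the outcome. A minimal sketch only: the real logic lives in `eval/fill_hints.py`, the helper name is hypothetical, and a Dafny 4.x-style `dafny verify` CLI is assumed.

```python
import subprocess

def verifies(dafny_path: str, dfy_file: str, timeout_s: int = 60) -> bool:
    """Return True if Dafny fully verifies the program.

    Hypothetical helper for illustration; the actual check lives in
    eval/fill_hints.py. Assumes a Dafny 4.x CLI with a `verify` command.
    """
    try:
        result = subprocess.run(
            [dafny_path, "verify", dfy_file],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False
    # Dafny exits with status 0 when every proof obligation is discharged
    return result.returncode == 0
```

Reconstructed programs that fail this check (or time out) count as failures.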



## Set Up for Evaluation 🔧

1. Install Dafny on your machine by following this tutorial
2. Clone & `cd` into this repository
3. Set up the environment by running the following lines:
   ```bash
   python -m venv stats
   source stats/bin/activate
   pip install -r requirements.txt
   cd eval
   ```
4. Set up the environment variable for the root directory:
   ```bash
   export DAFNYBENCH_ROOT=
   ```
5. Set up the environment variable for the path to the Dafny executable on your machine (for example, `/opt/homebrew/bin/Dafny`):
   ```bash
   export DAFNY_PATH=
   ```
6. If you're evaluating an LLM through API access, set up the API key. For example:
   ```bash
   export OPENAI_API_KEY=
   ```
7. You can choose to evaluate an LLM on a single test program, such as:
   ```bash
   python fill_hints.py --model "gpt-4o" --test_file "Clover_abs_no_hints.dfy" --feedback_turn 3 --dafny_path "$DAFNY_PATH"
   ```

or evaluate on the entire dataset:

```bash
export model_to_eval='gpt-4o'
./run_eval.sh
```
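For orientation, the whole-dataset run conceptually reduces to the loop below. This is a sketch only: `run_eval.sh` and `fill_hints.py` hold the real logic, `reconstruct_hints` is a hypothetical stand-in for the model call, and the directory layout is an assumption.

```python
import os
from pathlib import Path

def reconstruct_hints(source: str) -> str:
    """Hypothetical stand-in for the LLM call made by fill_hints.py."""
    return source  # a real implementation would re-insert the missing hints

root = Path(os.environ["DAFNYBENCH_ROOT"])
tests = sorted((root / "DafnyBench" / "hints_removed").glob("*.dfy"))

passed = 0
for test in tests:
    candidate = reconstruct_hints(test.read_text())
    out_file = root / "reconstructed" / test.name
    out_file.parent.mkdir(parents=True, exist_ok=True)
    out_file.write_text(candidate)
    # verifies() is the sketch from the Usage section above
    passed += verifies(os.environ["DAFNY_PATH"], str(out_file))

print(f"{passed}/{len(tests)} reconstructed programs verified")
```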

## Contents 📁

- `DafnyBench`
  - A collection of 782 Dafny programs. Each program has a `ground_truth` version that is fully verified with Dafny & a `hints_removed` version that has hints (i.e. annotations) removed
- `eval`
  - Contains scripts to evaluate LLMs on DafnyBench
- `results`
  - `results_summary` - Dataframes that summarize LLMs' success on every test program
  - `reconstructed_files` - LLM outputs with hints filled back in
  - `analysis` - Contains a notebook for analyzing the results
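As an illustration, if a summary dataframe is stored as a CSV with one row per test program and a boolean success column (the filename and column name below are hypothetical), a model's overall success rate could be computed like so:

```python
import pandas as pd

# Hypothetical path and column name; check results/results_summary
# for the actual files and schema.
df = pd.read_csv("results/results_summary/gpt-4o_results.csv")
print(f"success rate: {df['verified'].mean():.1%}")
```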