loubnabnl committed on
Commit
093dcff
1 Parent(s): 8280fac

update readme

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -21,12 +21,12 @@ This metric is used to evaluate code generation on the [APPS benchmark](https://
  You can load the metric and use it with the following commands:
  ```
  from evaluate import load
- glue_metric = load('loubnabnl/apps_metric')
+ apps_metric = load('loubnabnl/apps_metric')
  results = apps_metric.compute(predictions=generations)
  ```
 
  ### Inputs
- **generations** (list(str)): List of code generations, each sub-list corresponds to the generation for a problem in APPS dataset, the order of the samples in the dataset must be kept.
+ **generations** list(list(str)): List of code generations, each sub-list corresponds to the generations for a problem in APPS dataset, the order of the samples in the dataset must be kept (with respect to the difficulty level).
 
  ### Output Values
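
For context, a minimal sketch of how the corrected snippet reads after this commit. The contents of `generations` below are hypothetical placeholders (one sub-list of candidate solutions per APPS problem, in dataset order), and running `compute` assumes the APPS dataset and its test cases can be downloaded.

```
from evaluate import load

# Load the APPS metric from the Hub, matching the updated README
apps_metric = load('loubnabnl/apps_metric')

# Hypothetical generations: one sub-list of candidate solutions per problem,
# kept in the same order as the APPS dataset (grouped by difficulty level)
generations = [
    ["s = input()\nprint(s[::-1])"],                      # candidates for problem 0
    ["a, b = map(int, input().split())\nprint(a + b)"],   # candidates for problem 1
]

results = apps_metric.compute(predictions=generations)
print(results)
```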