---
title: Secure Llm Leaderboard
emoji: 🥇
colorFrom: green
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: true
license: apache-2.0
short_description: Security Performance Leaderboard
---

Start the configuration

Most of the variables you need to change for a default leaderboard are in src/env.py (replace the paths with your leaderboard's) and src/about.py (for the tasks).

Results files should have the following format and be stored as JSON files:

{
    "config": {
        "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
        "model_name": "path of the model on the hub: org/model",
        "model_sha": "revision on the hub",
    },
    "results": {
        "task_name": {
            "metric_name": score,
        },
        "task_name2": {
            "metric_name": score,
        }
    }
}
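As a sketch, a results file in this format could be assembled and written like so (the model name, revision, and scores here are hypothetical placeholders):

```python
import json

# Hypothetical example: build a results file matching the expected format.
results = {
    "config": {
        "model_dtype": "torch.float16",   # or torch.bfloat16, 8bit, 4bit
        "model_name": "my-org/my-model",  # path of the model on the hub
        "model_sha": "abc123",            # revision on the hub
    },
    "results": {
        "task_name": {"metric_name": 0.75},
        "task_name2": {"metric_name": 0.61},
    },
}

with open("results_my-org_my-model.json", "w") as f:
    json.dump(results, f, indent=2)
```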

Request files are created automatically by this tool.

If you encounter a problem on the Space, don't hesitate to restart it to remove the created eval-queue, eval-queue-bk, eval-results, and eval-results-bk folders.

Code logic for more complex edits

You'll find

  • the main table's column names and properties in src/display/utils.py
  • the logic to read all results and request files, then convert them into dataframe lines, in src/leaderboard/read_evals.py and src/populate.py
  • the logic to allow or filter submissions in src/submission/submit.py and src/submission/check_validity.py
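To illustrate the read-and-populate step, a minimal version of turning result files into dataframe rows might look like this (the real logic lives in src/leaderboard/read_evals.py and src/populate.py; the file layout and column choices below are assumptions for illustration):

```python
import glob
import json

def results_to_rows(results_dir: str) -> list[dict]:
    """Flatten each results JSON file into one leaderboard row (illustrative)."""
    rows = []
    for path in glob.glob(f"{results_dir}/**/*.json", recursive=True):
        with open(path) as f:
            data = json.load(f)
        row = {"model": data["config"]["model_name"]}
        # One column per task: average that task's metric scores.
        for task, metrics in data["results"].items():
            row[task] = sum(metrics.values()) / len(metrics)
        rows.append(row)
    return rows
```

Such rows can then be handed to pandas (e.g. `pd.DataFrame(rows)`) to build the main table.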

Configuration

The project now uses a YAML configuration file (config.yaml) for easier management of settings. Here's an explanation of the configuration values:

API and Token configurations

  • api_token: Your API token for authentication. Replace YOUR_API_TOKEN_HERE with your actual API token.
  • queue_repo: The repository used for the evaluation queue. Replace YOUR_QUEUE_REPO_HERE with the actual repository name.

File paths

  • eval_requests_path: The path where evaluation requests are stored. Default is ./eval-queue.
  • eval_results_path: The path where evaluation results are stored. Default is ./eval-results.

These paths are relative to the root of the project. The default values should work for most setups. If you need to use different directories, make sure to update these paths in your config.yaml file and ensure the directories exist.

Important: After changing these paths, make sure to create the corresponding directories if they don't exist already.
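A minimal loader for these values might look like this (a sketch: only the keys documented above are real; the function name and fallback behaviour are assumptions):

```python
import os

# Defaults documented for the file paths above.
DEFAULTS = {
    "eval_requests_path": "./eval-queue",
    "eval_results_path": "./eval-results",
}

def load_config(path: str = "config.yaml") -> dict:
    """Read config.yaml if present and fill in the documented defaults."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        import yaml  # PyYAML; only needed when a config file is present
        with open(path) as f:
            config.update(yaml.safe_load(f) or {})
    # Create the directories if they don't exist yet.
    os.makedirs(config["eval_requests_path"], exist_ok=True)
    os.makedirs(config["eval_results_path"], exist_ok=True)
    return config
```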

The project uses a flexible configuration system that allows for both local development and deployment:

  1. For local development:

    • Copy the config.yaml.example file to config.yaml.
    • Replace the placeholder values in config.yaml with your actual configuration.
    • The application will automatically read these values from the config.yaml file.
  2. For deployment (e.g., to Hugging Face):

    • The config.yaml file is not required and should not be included in the repository.
    • Set the HF_TOKEN environment variable with your Hugging Face API token.
    • Other configuration values will use sensible defaults if not specified.
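The deployment case can be sketched as follows (the fallback order is an assumption; only the HF_TOKEN environment variable is documented above, and the helper name is hypothetical):

```python
import os

def resolve_token(config=None):
    """Prefer the HF_TOKEN environment variable; fall back to api_token from config.yaml."""
    token = os.environ.get("HF_TOKEN")
    if token:
        return token
    if config:
        return config.get("api_token")
    return None
```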

Note: Make sure not to commit your config.yaml file with sensitive information to version control. It is already added to the .gitignore file to prevent accidental commits.