sarahyurick commited on
Commit
45f2ef5
·
verified ·
1 Parent(s): 996fdd4

Update name from "FineTune-Guard" to "instruction-data-guard"

Browse files
Files changed (1) hide show
  1. README.md +3 -3
README.md CHANGED
@@ -8,9 +8,9 @@ license: other
8
  # Model Overview
9
 
10
  ## Description:
11
- FineTune-Guard is a deep-learning classification model that helps identify LLM poisoning attacks in datasets.
12
  It is trained on an instruction:response dataset and LLM poisoning attacks of such data.
13
- Note that optimal use for FineTune-Guard is for instruction:response datasets.
14
 
15
  ### License/Terms of Use:
16
  [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)
@@ -54,7 +54,7 @@ v1.0 <br>
54
  The data used to train this model contained synthetically-generated LLM poisoning attacks. <br>
55
 
56
  ## Evaluation Benchmarks:
57
- FineTune-Guard is evaluated based on two overarching criteria: <br>
58
  * Success on identifying LLM poisoning attacks, after the model was trained on examples of the attacks. <br>
59
  * Success on identifying LLM poisoning attacks, but without training on examples of those attacks, at all. <br>
60
 
 
8
  # Model Overview
9
 
10
  ## Description:
11
+ Instruction-Data-Guard is a deep-learning classification model that helps identify LLM poisoning attacks in datasets.
12
  It is trained on an instruction:response dataset and LLM poisoning attacks of such data.
13
+ Note that optimal use for Instruction-Data-Guard is for instruction:response datasets.
14
 
15
  ### License/Terms of Use:
16
  [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)
 
54
  The data used to train this model contained synthetically-generated LLM poisoning attacks. <br>
55
 
56
  ## Evaluation Benchmarks:
57
+ Instruction-Data-Guard is evaluated based on two overarching criteria: <br>
58
  * Success on identifying LLM poisoning attacks, after the model was trained on examples of the attacks. <br>
59
  * Success on identifying LLM poisoning attacks, but without training on examples of those attacks, at all. <br>
60