license: other
---

# Model Overview

## Description:
FineTune-Guard is a deep-learning classification model that helps identify LLM poisoning attacks in datasets.
It was trained on an instruction:response dataset along with LLM poisoning attacks on such data.
Note that FineTune-Guard is best suited to instruction:response datasets.

### License/Terms of Use:
[NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

## Reference:
The Internal State of an LLM Knows When It's Lying: https://arxiv.org/pdf/2304.13734 <br>

## Model Architecture:
**Architecture Type:** Feedforward MLP <br>
**Network Architecture:** 4-layer MLP <br>

## Input:
**Input Type(s):** Text Embeddings <br>
**Input Format(s):** Numerical Vectors <br>
**Input Parameters:** 1D Vectors <br>
**Other Properties Related to Input:** The text embeddings are generated from the [Aegis Defensive Model](https://huggingface.co/nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0). The length of the vectors is 4096. <br>

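For orientation, the sketch below shows one plausible way to obtain such a 4096-dimensional embedding with Hugging Face `transformers` (4096 is the hidden size of the Llama Guard backbone). The direct `AutoModelForCausalLM` load of the adapter and the last-token pooling are illustrative assumptions; the exact extraction procedure used in NeMo Curator may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the Aegis adapter loads directly via transformers' PEFT
# integration; the actual loading path and pooling may differ in practice.
model_id = "nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, output_hidden_states=True
)

record = "instruction: Summarize the article.\nresponse: The article says ..."
inputs = tokenizer(record, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last layer, last token: a 1D vector of length 4096, matching the input
# spec above. The pooling choice here is an assumption, not documented.
embedding = outputs.hidden_states[-1][0, -1]
print(embedding.shape)  # torch.Size([4096])
```
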
## Output:
**Output Type(s):** Classification Scores <br>
**Output Format:** Array of shape 1 <br>
**Output Parameters:** 1D <br>
**Other Properties Related to Output:** Each classification score represents the model's confidence that the input data is poisoned. <br>

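To make the input/output contract above concrete, here is a minimal PyTorch sketch of a 4-layer MLP that maps a 4096-dimensional embedding to a single confidence score. The class name `PoisonClassifierMLP`, the hidden-layer sizes, and the activation are illustrative assumptions, not the released model's exact configuration.

```python
import torch
import torch.nn as nn

class PoisonClassifierMLP(nn.Module):
    """Hypothetical 4-layer MLP matching the I/O spec above: a 4096-dim
    embedding in, one confidence score out. Hidden sizes and activation
    are illustrative assumptions."""

    def __init__(self, input_dim: int = 4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 1024),  # layer 1
            nn.ReLU(),
            nn.Linear(1024, 256),        # layer 2
            nn.ReLU(),
            nn.Linear(256, 64),          # layer 3
            nn.ReLU(),
            nn.Linear(64, 1),            # layer 4: a single logit
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        # Sigmoid turns the logit into a confidence score in [0, 1].
        return torch.sigmoid(self.net(embedding))

model = PoisonClassifierMLP()
dummy_embedding = torch.randn(1, 4096)  # stand-in for an Aegis embedding
score = model(dummy_embedding)          # tensor of shape (1, 1)
print(f"Poisoning confidence: {score.item():.3f}")
```
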
## Software Integration:
**Runtime Engine(s):**
* NeMo Curator: https://github.com/NVIDIA/NeMo-Curator <br>
* Aegis: https://huggingface.co/nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0 <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA GPU <br>
* Volta™ or higher ([compute capability 7.0+](https://developer.nvidia.com/cuda-gpus)) <br>
* CUDA 12 (or above)

**Preferred Operating System(s):** Ubuntu 22.04/20.04 <br>

## Model Version(s):
v1.0 <br>

# Training, Testing, and Evaluation Datasets:

The data used to train this model contained synthetically generated LLM poisoning attacks. <br>

## Evaluation Benchmarks:
FineTune-Guard is evaluated against two overarching criteria: <br>
* Success at identifying LLM poisoning attacks after the model was trained on examples of those attacks. <br>
* Success at identifying LLM poisoning attacks without the model having been trained on any examples of those attacks. <br>

Success is defined as an acceptable catch rate (the recall score for each attack) at a high specificity (e.g., 95%). Catch rates need to be high enough to identify at least several poisoned records in an attack; a sketch of this metric follows below. <br>

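To make the success criterion concrete, here is a minimal sketch of computing a catch rate at a fixed specificity from labeled scores. The thresholding logic and the toy data are illustrative assumptions, not NVIDIA's actual evaluation harness.

```python
import numpy as np

def catch_rate_at_specificity(scores, labels, specificity=0.95):
    """Recall on poisoned records (labels == 1) at the score threshold
    that yields the target specificity on clean records (labels == 0).
    Illustrative sketch only."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    clean_scores = scores[labels == 0]
    # Pick the threshold below which `specificity` of clean records fall,
    # so flagging scores above it keeps the false-positive rate at ~5%.
    threshold = np.quantile(clean_scores, specificity)
    flagged = scores > threshold
    return flagged[labels == 1].mean(), threshold

# Toy example: clean records tend to score low, poisoned records high.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.beta(2, 8, 900), rng.beta(8, 2, 100)])
labels = np.concatenate([np.zeros(900), np.ones(100)])
recall, thr = catch_rate_at_specificity(scores, labels)
print(f"Catch rate at 95% specificity: {recall:.2%} (threshold={thr:.3f})")
```
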
## Inference:
**Engine:** NeMo Curator and Aegis <br>
**Test Hardware:** <br>
* A100 80GB GPU <br>

## How to Use in NeMo Curator:
The inference code is available on [NeMo Curator's GitHub repository](https://github.com/NVIDIA/NeMo-Curator). <br>
Check out [this example notebook](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/distributed_data_classification) to get started.

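For orientation, the snippet below follows the general read-classify-write pattern of NeMo Curator's distributed data classifiers shown in the linked tutorial. `FineTuneGuardClassifier` is a hypothetical stand-in for the actual class name; consult the repository and notebook for the real API and parameters.

```python
from nemo_curator.datasets import DocumentDataset
# Hypothetical import: the actual FineTune-Guard classifier name may differ.
from nemo_curator.classifiers import FineTuneGuardClassifier

# Read instruction:response records (one JSON object per line) onto the GPU.
dataset = DocumentDataset.read_json("instruction_data/*.jsonl", backend="cudf")

# Score every record; poisoning-confidence scores are added as a new column.
classifier = FineTuneGuardClassifier()
scored = classifier(dataset)

# Persist the scored dataset for filtering or inspection.
scored.to_json("scored_instruction_data/")
```
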
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).