Update README.md
README.md
CHANGED
@@ -110,6 +110,50 @@ With GenAI ORT->DML backend, we got below mentioned accuracy numbers on a desktop
## Inference:

We used the GenAI ORT->DML backend for inference. The instructions to use this backend are given in the readme.txt file available under the Files section.
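Exact setup steps live in readme.txt; the snippet below is only a rough sketch of what inference through the GenAI ORT->DML backend typically looks like. It assumes the `onnxruntime-genai` Python package (DirectML build) and a placeholder local model folder, and the generator API differs slightly between package versions.

```python
# Minimal, non-authoritative sketch of GenAI ORT->DML inference.
# Follow readme.txt under the Files section for the supported instructions.
import onnxruntime_genai as og

model = og.Model("./model_dml")  # placeholder: folder containing the ONNX model and genai_config.json
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)  # example value only

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Introduce yourself to the player."))
while not generator.is_done():
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```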
## Bias

Field | Response
:--- | :---
Participation considerations from adversely impacted groups ([protected classes](https://www.senate.ca.gov/content/protected-classes)) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None

## Explainability

Field | Response
:--- | :---
Intended Application & Domain: | Game NPC Development
Model Type: | Generative Pre-Trained Transformer (GPT)
Intended User: | Enterprise developers building game NPCs.
Output: | Text String(s)
Describe how the model works: | Generates a response using the input text and context such as NPC background information (illustrated in the sketch after this table).
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Not Applicable
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Accuracy, Latency, and Throughput
Potential Known Risks: | The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself does not include anything explicitly offensive. This issue could be exacerbated without the use of the recommended prompt template. If you are going to use this model in an agentic workflow, validate that the imported packages are from a trusted source to ensure end-to-end security.
Technical Limitations: | The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself does not include anything explicitly offensive.
Licensing: | [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

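A purely illustrative sketch of the flow described in the "Describe how the model works" row: the player's input text is combined with NPC background context into a single prompt. The names and format below are made up and are not the recommended prompt template referenced in the risks row.

```python
# Illustrative only: combine NPC background context with the player's input text.
# The actual recommended prompt template ships with the model files and may differ.
npc_background = "Mira is a retired blacksmith who distrusts strangers."  # hypothetical context
player_input = "Can you repair my sword?"                                 # hypothetical input text

prompt = f"NPC background: {npc_background}\nPlayer: {player_input}\nNPC:"
# `prompt` would then be tokenized and passed to a generator as in the Inference sketch above.
```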
## Privacy

Field | Response
:--- | :---
Generatable or reverse engineerable personal data? | None
Was consent obtained for any personal data used? | Not Applicable
Protected class data used to create this model? | Datasets used for fine-tuning did not introduce any personal data that did not exist in the base model.
How often is dataset reviewed? | Before Release
Is a mechanism in place to honor data subject right of access or deletion of personal data? | Not Applicable
If personal data collected for the development of the model, was it collected directly by NVIDIA? | Not Applicable
If personal data collected for the development of the model by NVIDIA, do you maintain or have access to disclosures made to data subjects? | Not Applicable
If personal data collected for the development of this AI model, was it minimized to only what was required? | Not Applicable
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? | Not Applicable

## Safety

Field | Response
:--- | :---
Model Application(s): | NPC Conversation
Describe the life-critical impact (if present). | None Known
Use Case Restrictions: | Abide by the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)
Model and dataset restrictions: | The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to.

## Ethical Considerations: