sanjaymk committed
Commit aff3167 • 1 Parent(s): 1698285

Update README.md

Files changed (1):
  1. README.md +57 -49
README.md CHANGED
@@ -1,17 +1,14 @@
- Here is your model card formatted according to your specified template:

  ---
-
- library_name: transformers
  tags: []
-
  ---

  # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->

- This is a custom 🤗 transformers model fine-tuned for cybersecurity-related tasks, particularly for generating or analyzing Metasploit payloads.

  ## Model Details

@@ -19,33 +16,35 @@ This is a custom 🤗 transformers model fine-tuned for cybersecurity-related ta

  <!-- Provide a longer summary of what this model is. -->

- This model has been fine-tuned on a dataset focused on cybersecurity, specifically on Metasploit payloads. It is based on the LLaMA2 7B architecture and has been further adapted using QLoRA for more efficient parameterization and training. The model is designed to assist in analyzing and generating cybersecurity-related content.

- - **Developed by:** Sanjay
- - **Funded by [optional]:** N/A
- - **Shared by [optional]:** N/A
- - **Model type:** LLaMA2 7B QLoRA
  - **Language(s) (NLP):** English
- - **License:** Open-source (specify license)
- - **Finetuned from model [optional]:** georgesung/open_llama_7b_qlora_uncensored

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- This model can be directly used for tasks like generating or analyzing payloads, threat hunting, or cybersecurity data analysis.

  ### Downstream Use [optional]

@@ -61,11 +60,9 @@ This model should not be used for malicious purposes, including generating harmf

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- - **Bias:** The model may generate biased or incorrect results depending on the training data and use case.
- - **Risks:** There is a risk of misuse in cybersecurity operations or unauthorized generation of harmful payloads.
- - **Limitations:** Not suitable for general-purpose NLP tasks, focused mainly on cybersecurity-related content.

  ### Recommendations

@@ -81,11 +78,24 @@ Use the code below to get started with the model.

  ## Training Details

  ### Training Data

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- The training dataset consists of payload-related content from Metasploit. Documentation on data pre-processing and filtering is still needed.

  ### Training Procedure

@@ -95,14 +105,10 @@ The training dataset consists of payload-related content from Metasploit. Docume

  [More Information Needed]

  #### Training Hyperparameters

- - **Training regime:** 4-bit precision (QLoRA), fp16 mixed precision. The model used the following key hyperparameters:
- - LoRA attention dimension: 64
- - LoRA alpha: 16
- - Initial learning rate: 2e-4
- - Training batch size per GPU: 4
- - Gradient accumulation steps: 1

  #### Speeds, Sizes, Times [optional]

@@ -120,19 +126,19 @@ The training dataset consists of payload-related content from Metasploit. Docume

  <!-- This should link to a Dataset Card if possible. -->

- The evaluation data consists of unseen payloads and Metasploit-related content.

  #### Factors

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- Performance was evaluated based on cybersecurity relevance and accuracy.

  #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- Evaluation metrics include perplexity, domain-specific accuracy, and payload generation quality.

  ### Results

@@ -140,7 +146,7 @@ Evaluation metrics include perplexity, domain-specific accuracy, and payload gen

  #### Summary

- [More Information Needed]

  ## Model Examination [optional]

@@ -151,30 +157,36 @@
  ## Environmental Impact

  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** NVIDIA A100
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

  ## Technical Specifications [optional]

  ### Model Architecture and Objective

- Based on the LLaMA2 7B architecture, fine-tuned using QLoRA for enhanced cybersecurity capabilities.

  ### Compute Infrastructure

  #### Hardware

- NVIDIA A100 GPUs were used for training.

  #### Software

- Training was conducted using PyTorch and Hugging Face's 🤗 Transformers library.

  ## Citation [optional]

@@ -200,12 +212,8 @@ Training was conducted using PyTorch and Hugging Face's 🤗 Transformers librar

  ## Model Card Authors [optional]

- - **Author:** Sanjay

  ## Model Card Contact

- [More Information Needed]
-
- ---
-
- You can further customize the card by adding any additional information or links that are relevant to your project.

  ---
+ library_name: transformers
  tags: []
  ---

  # Model Card for Model ID

  <!-- Provide a quick summary of what the model is/does. -->

+

  ## Model Details

  <!-- Provide a longer summary of what this model is. -->

+ This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

+ - **Developed by:** Sanjay Kotabagi
+ - **Funded by [optional]:** Sanjay Kotabagi
+ - **Model type:** LLaMA2
  - **Language(s) (NLP):** English
+ - **License:** None
+ - **Finetuned from model [optional]:** Llama2

  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->

+ - **Repository:** https://github.com/SanjayKotabagi/Offensive-Llama2
+ - **Paper [optional]:** https://github.com/SanjayKotabagi/Offensive-Llama2/blob/main/Project_Report_Dark_side_of_AI.pdf
+ - **Demo [optional]:** https://colab.research.google.com/drive/1id90gPMAzYD15ApNqXDOh2mAU53dRo4x?usp=sharing

  ## Uses

+ Content Generation and Analysis:
+
+ - Harmful Content Assessment: The research will evaluate the types and accuracy of harmful content the fine-tuned LLaMA model can produce. This includes analyzing the generation of malicious software code, phishing schemes, and other cyber-attack methodologies.
+ - Experimental Simulations: Controlled experiments will be conducted to query the model, simulating real-world scenarios where malicious actors might exploit the LLM to create destructive tools or spread harmful information.

  ### Direct Use

  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ [More Information Needed]

  ### Downstream Use [optional]

  ## Bias, Risks, and Limitations

+ - Bias: The model may generate biased or incorrect results depending on the training data and use case.
+ - Risks: There is a risk of misuse in cybersecurity operations or unauthorized generation of harmful payloads.
+ - Limitations: Not suitable for general-purpose NLP tasks, focused mainly on cybersecurity-related content.

  ### Recommendations

  ## Training Details

+ ### Training Procedure
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+ #### Preprocessing [optional]
+ [More Information Needed]
+
+ #### Training Hyperparameters
+ - Training regime: 4-bit precision (QLoRA), fp16 mixed precision. The model used the following key hyperparameters:
+ - LoRA attention dimension: 64
+ - LoRA alpha: 16
+ - Initial learning rate: 2e-4
+ - Training batch size per GPU: 4
+ - Gradient accumulation steps: 1
+
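The regime in these added lines maps directly onto a QLoRA configuration. Below is a minimal sketch using the 🤗 `transformers` and `peft` APIs; the NF4 quantization type, the LoRA dropout of 0.1, and the `output_dir` of `./results` are illustrative assumptions that the card does not specify.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit precision (QLoRA) with fp16 compute, as stated in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumption: quant type not given
    bnb_4bit_compute_dtype=torch.float16,  # fp16 mixed precision
)

peft_config = LoraConfig(
    r=64,              # LoRA attention dimension: 64
    lora_alpha=16,     # LoRA alpha: 16
    lora_dropout=0.1,  # assumption: dropout not given in the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./results",           # assumption: path not given
    per_device_train_batch_size=4,    # training batch size per GPU: 4
    gradient_accumulation_steps=1,
    learning_rate=2e-4,               # initial learning rate: 2e-4
    fp16=True,
)
```

These objects would then be passed to a trainer such as `trl`'s `SFTTrainer` together with the base model; the card does not say which trainer was used.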
  ### Training Data

  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ [More Information Needed]

  ### Training Procedure

  [More Information Needed]

+
  #### Training Hyperparameters

+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

  #### Speeds, Sizes, Times [optional]

  <!-- This should link to a Dataset Card if possible. -->

+ [More Information Needed]

  #### Factors

  <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ [More Information Needed]

  #### Metrics

  <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ [More Information Needed]

  ### Results

  #### Summary

+

  ## Model Examination [optional]

  ## Environmental Impact

  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+ Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

+ - Hardware Type: NVIDIA A100
+ - Hours used: 8-12 Hours
+ - Cloud Provider: Google Colab
+ - Compute Region: Asia
+ - Carbon Emitted: NA
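The "NA" carbon figure in these added lines can be roughed out from the reported hardware and hours using the same power × time × grid-intensity arithmetic the Machine Learning Impact calculator applies. A back-of-the-envelope sketch; the ~400 W average A100 draw and the 0.7 kg CO2eq/kWh grid intensity are illustrative assumptions, not values from the card:

```python
# Rough CO2 estimate in the spirit of the ML Impact calculator.
# The power draw and grid intensity below are illustrative assumptions.
GPU_POWER_KW = 0.4        # assumed average draw of one NVIDIA A100, in kW
HOURS = 10                # midpoint of the "8-12 Hours" reported above
GRID_KGCO2_PER_KWH = 0.7  # assumed carbon intensity for the compute region

energy_kwh = GPU_POWER_KW * HOURS
emissions_kg = energy_kwh * GRID_KGCO2_PER_KWH

print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.1f} kg CO2eq")
```

Under these assumptions the run lands in the low single-digit kg of CO2eq; the calculator linked in the card gives a provider- and region-specific figure.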
 
 

  ## Technical Specifications [optional]

  ### Model Architecture and Objective

+ [More Information Needed]

  ### Compute Infrastructure

+ #### Hardware
+ NVIDIA A100 GPUs were used for training.
+
+ #### Software
+ Training was conducted using PyTorch and Hugging Face's 🤗 Transformers library.
+
  #### Hardware

+ [More Information Needed]

  #### Software

+ [More Information Needed]

  ## Citation [optional]

  ## Model Card Authors [optional]

+ [More Information Needed]

  ## Model Card Contact

+ [More Information Needed]