rootxhacker committed on
Commit
a644f69
1 Parent(s): 2039166

Update README.md

Files changed (1)
  1. README.md +34 -126
README.md CHANGED
@@ -1,6 +1,10 @@
  ---
  base_model: mistralai/Mistral-7B-Instruct-v0.2
  library_name: peft
  ---

  # Model Card for Model ID
@@ -34,25 +38,7 @@ This model is built on Mistral-7B; it takes an attack scenario as input and
  - **Paper [optional]:** [More Information Needed]
  - **Demo [optional]:** [More Information Needed]

- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

  [More Information Needed]

@@ -70,104 +56,53 @@ Users (both direct and downstream) should be made aware of the risks, biases and

  ## How to Get Started with the Model

- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
  [More Information Needed]

- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
  [More Information Needed]

- #### Software
-
- [More Information Needed]

  ## Citation [optional]

@@ -182,31 +117,4 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  }


- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
- ### Framework versions
-
- - PEFT 0.11.2.dev0
 
  ---
  base_model: mistralai/Mistral-7B-Instruct-v0.2
  library_name: peft
+ datasets:
+ - tumeteor/Security-TTP-Mapping
+ language:
+ - en
  ---

  # Model Card for Model ID
 
  - **Paper [optional]:** [More Information Needed]
  - **Demo [optional]:** [More Information Needed]

  [More Information Needed]

  ## How to Get Started with the Model

+ ```python
+ import torch
+ from peft import PeftModel, PeftConfig
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ peft_model_id = "rootxhacker/mistralai-7B-attack2ttp"
+ config = PeftConfig.from_pretrained(peft_model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     config.base_model_name_or_path,
+     return_dict=True,
+     load_in_4bit=True,
+     device_map='auto',
+ )
+ tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+
+ # Load the LoRA adapter on top of the base model
+ model = PeftModel.from_pretrained(model, peft_model_id)
+
+ def get_completion(query: str, model, tokenizer) -> str:
+     device = "cuda:0"
+
+     # Prompt format used during fine-tuning (kept verbatim)
+     prompt_template = """
+ here is intruction you need to map Attack scenario with TTPs
+ ### Question:
+ {query}
+
+ ### Answer:
+ """
+     prompt = prompt_template.format(query=query)
+
+     encodeds = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
+     model_inputs = encodeds.to(device)
+
+     generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.eos_token_id)
+     decoded = tokenizer.batch_decode(generated_ids)
+     return decoded[0]
+ ```

  [More Information Needed]
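The prompt layout and output parsing in the snippet above can be exercised without a GPU or the model weights. A minimal sketch, assuming the template text mirrors the one in `get_completion`; `build_prompt` and `extract_answer` are hypothetical helpers, not part of the released code:

```python
# The fine-tuning prompt template from the model card, kept verbatim.
prompt_template = """
here is intruction you need to map Attack scenario with TTPs
### Question:
{query}

### Answer:
"""

def build_prompt(query: str) -> str:
    # Same formatting step get_completion performs before tokenizing.
    return prompt_template.format(query=query)

def extract_answer(decoded: str) -> str:
    # get_completion returns the full decoded text, prompt included;
    # keep only what the model generated after the answer marker.
    return decoded.split("### Answer:")[-1].strip()

prompt = build_prompt("An actor sends a spearphishing email with a malicious attachment.")
print(extract_answer(prompt + "T1566.001 - Spearphishing Attachment"))
```

In practice the string returned by `get_completion` (full decoded generation) would be passed to `extract_answer` in place of the synthetic string above.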

+ ## Training Details

+ ### Training Data

+ https://huggingface.co/datasets/tumeteor/Security-TTP-Mapping

  [More Information Needed]

  ## Citation [optional]

  }

+ **BibTeX:**