# Model Card for Offensive-Llama2

## Model Details

### Model Description
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- Developed by: Sanjay Kotabagi
- Funded by [optional]: Sanjay Kotabagi
- Model type: LLaMA 2 (fine-tuned causal language model)
- Language(s) (NLP): English
- License: None
- Finetuned from model [optional]: Llama 2
### Model Sources [optional]
- Repository: https://github.com/SanjayKotabagi/Offensive-Llama2
- Paper [optional]: https://github.com/SanjayKotabagi/Offensive-Llama2/blob/main/Project_Report_Dark_side_of_AI.pdf
- Demo [optional]: https://colab.research.google.com/drive/1id90gPMAzYD15ApNqXDOh2mAU53dRo4x?usp=sharing
## Uses
Content Generation and Analysis:
- Harmful Content Assessment: The research will evaluate the types and accuracy of harmful content the fine-tuned LLaMA model can produce. This includes analyzing the generation of malicious software code, phishing schemes, and other cyber-attack methodologies.
- Experimental Simulations: Controlled experiments will be conducted to query the model, simulating real-world scenarios where malicious actors might exploit the LLM to create destructive tools or spread harmful information.
### Direct Use

[More Information Needed]

### Downstream Use [optional]
It can be integrated into cybersecurity analysis tools or extended for specific threat detection tasks.
### Out-of-Scope Use
This model should not be used for malicious purposes, including generating harmful payloads or facilitating illegal activities.
## Bias, Risks, and Limitations
- Bias: The model may generate biased or incorrect results depending on the training data and use case.
- Risks: There is a risk of misuse in cybersecurity operations or unauthorized generation of harmful payloads.
- Limitations: Not suitable for general-purpose NLP tasks; the model focuses mainly on cybersecurity-related content.
### Recommendations
Users should exercise caution in handling the generated results, especially in sensitive cybersecurity environments. Proper vetting of model output is recommended.
## How to Get Started with the Model
Use the code below to get started with the model.
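The card does not include official usage code. Below is a minimal loading sketch, assuming the fine-tuned weights are available as a standard 🤗 Transformers checkpoint on the Hub; the model ID shown is hypothetical, since the card does not state one.

```python
# Minimal loading sketch. "SanjayKotabagi/Offensive-Llama2" is a hypothetical
# Hub model ID -- replace it with the actual repository name of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SanjayKotabagi/Offensive-Llama2"  # hypothetical ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the fp16 regime noted in Training Details
    device_map="auto",
)

prompt = "Explain what a phishing attack is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```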
## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- Training regime: 4-bit precision (QLoRA) with fp16 mixed precision
- LoRA attention dimension (r): 64
- LoRA alpha: 16
- Initial learning rate: 2e-4
- Training batch size per GPU: 4
- Gradient accumulation steps: 1
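The regime above maps onto a standard PEFT + bitsandbytes QLoRA configuration. The sketch below restates those values in code; it is not the author's exact training script, and the base checkpoint ID, LoRA dropout, and output directory are assumptions not stated in the card.

```python
# Sketch of a QLoRA setup matching the hyperparameters above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit precision (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # fp16 compute
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",            # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

peft_config = LoraConfig(
    r=64,              # LoRA attention dimension
    lora_alpha=16,     # LoRA alpha
    lora_dropout=0.1,  # assumption: dropout is not stated in the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./results",                # assumed
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    learning_rate=2e-4,
    fp16=True,                             # fp16 mixed precision
)
```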
#### Speeds, Sizes, Times [optional]
[More Information Needed]
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

[More Information Needed]

## Model Examination [optional]

[More Information Needed]
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- Hardware Type: NVIDIA A100
- Hours used: 8-12 hours
- Cloud Provider: Google Colab
- Compute Region: Asia
- Carbon Emitted: N/A
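Since no measured figure is given, a rough estimate can be derived in the spirit of Lacoste et al. (2019): energy used (kWh) times the grid's carbon intensity. The power draw and intensity values below are assumptions for illustration, not measured values.

```python
# Back-of-envelope CO2 estimate following the Lacoste et al. (2019) approach.
gpu_power_kw = 0.4      # assumed ~400 W draw for one NVIDIA A100
hours = 10              # midpoint of the 8-12 hour range above
carbon_intensity = 0.7  # assumed kgCO2eq per kWh for the compute region

energy_kwh = gpu_power_kw * hours
emissions_kg = energy_kwh * carbon_intensity
print(f"~{energy_kwh:.1f} kWh, ~{emissions_kg:.1f} kgCO2eq")  # ~4.0 kWh, ~2.8 kgCO2eq
```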
## Technical Specifications [optional]

### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure

#### Hardware

NVIDIA A100 GPUs were used for training.

#### Software

Training was conducted using PyTorch and the Hugging Face 🤗 Transformers library.
## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]