Model Card for Canstralian/CyberAttackDetection
This model card provides details for the Canstralian/CyberAttackDetection model, fine-tuned from WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B. The model is released under the MIT license and is designed to detect and analyze potential cyberattacks, primarily in network security contexts.
Model Details
Model Description
The Canstralian/CyberAttackDetection model is a machine learning-based cybersecurity tool developed for identifying and analyzing cyberattacks in real time. Fine-tuned on datasets containing CVE (Common Vulnerabilities and Exposures) data and other OSINT resources, the model uses natural language processing to support threat intelligence and detection.
- Developed by: Canstralian
- Funded by: Self-funded
- Shared by: Canstralian
- Model type: NLP-based Cyberattack Detection
- Language(s) (NLP): English
- License: MIT License
- Finetuned from model: WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-70B
Model Sources
- Repository: Canstralian/CyberAttackDetection
- Demo: [More Information Needed]
Uses
Direct Use
The model can be used to:
- Identify and analyze network logs for potential cyberattacks.
- Enhance penetration testing efforts by detecting vulnerabilities in real time.
- Support SOC (Security Operations Center) teams in threat detection and mitigation.
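As an illustration of the use cases above, the sketch below batch-triages a handful of log lines. The prompt wording, the example log entries, and the generation settings are illustrative assumptions, not part of the released model.

# Batch-triage sketch for SOC-style use; prompts and log samples are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("canstralian/CyberAttackDetection")
model = AutoModelForCausalLM.from_pretrained("canstralian/CyberAttackDetection", device_map="auto")

# Causal LMs need left padding for batched generation; Llama tokenizers often ship without a pad token.
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token

logs = [
    "Failed password for root from 198.51.100.23 port 51512 ssh2",
    "GET /index.php?id=1' OR '1'='1 HTTP/1.1 200",
]
prompts = [f"Classify this network log entry as benign or suspicious and explain why:\n{line}" for line in logs]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=96)
for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(text)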
Downstream Use
The model can be fine-tuned further for:
- Specific industries or domains requiring custom threat analysis.
- Integration into SIEM (Security Information and Event Management) tools.
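One practical way to further fine-tune a model of this size for a specific domain is parameter-efficient tuning with LoRA. The sketch below is an assumption about such a workflow, not the released training recipe; the dataset file, the "text" column, and the LoRA/optimizer settings are placeholders.

# LoRA fine-tuning sketch; dataset path, column name, and settings are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "canstralian/CyberAttackDetection"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Hypothetical domain dataset: a JSONL file with a single "text" column of annotated log excerpts.
dataset = load_dataset("json", data_files="domain_soc_logs.jsonl")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cyberattack-finetune", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, learning_rate=2e-5,
                           num_train_epochs=1, fp16=True),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()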
Out-of-Scope Use
The model is not suitable for:
- Malicious use or exploitation.
- Real-time applications requiring sub-millisecond inference speeds without optimization.
Bias, Risks, and Limitations
While the model is trained on a broad range of CVE and OSINT data, it may exhibit:
- Bias toward the attack patterns that dominate the training data, and blind spots for patterns it has not seen.
- False positives/negatives in detection, especially with ambiguous or novel attack methods.
- Reduced performance on non-English network logs or cybersecurity data.
Recommendations
Users should:
- Regularly update and fine-tune the model with new datasets to address emerging threats.
- Employ complementary tools for holistic cybersecurity measures.
How to Get Started with the Model
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model; a 70B checkpoint generally needs multiple GPUs or quantization.
tokenizer = AutoTokenizer.from_pretrained("canstralian/CyberAttackDetection")
model = AutoModelForCausalLM.from_pretrained("canstralian/CyberAttackDetection", device_map="auto")

# Prompt the model with a log excerpt and decode the generated analysis.
input_text = "Analyze network log: [Sample Log Data]"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
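Note that this is a 70B-parameter model, so loading it as shown assumes multi-GPU hardware or an offloading/quantization setup; device_map="auto" lets Accelerate place or offload the weights automatically.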
Training Details
Training Data
The model is fine-tuned on:
- CVE datasets (e.g., known vulnerabilities and exploits).
- OSINT datasets focused on cybersecurity.
- Synthetic data generated to simulate diverse attack scenarios.
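The synthetic portion of the data can be produced with a small generator script. The sketch below is purely illustrative (one SQL-injection-style web-log pattern); the payloads, file name, and label scheme are assumptions, not the actual data pipeline.

# Illustrative synthetic-log generator (not the actual pipeline used for training).
import json
import random

PAYLOADS = ["' OR '1'='1", "UNION SELECT username, password FROM users", "; DROP TABLE accounts;--"]

def synthetic_sqli_log() -> dict:
    # Emit a fake web-server access-log line labelled as a SQL injection attempt.
    ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
    line = f'{ip} - - "GET /search?q={random.choice(PAYLOADS)} HTTP/1.1" 200'
    return {"text": line, "label": "sql_injection"}

with open("synthetic_logs.jsonl", "w") as f:
    for _ in range(1000):
        f.write(json.dumps(synthetic_sqli_log()) + "\n")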
Training Procedure
Preprocessing
Data preprocessing involved:
- Normalizing logs to remove PII (Personally Identifiable Information).
- Filtering out redundant or irrelevant entries.
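A normalization step of this kind might look like the following sketch; the regex patterns and placeholder tokens are assumptions about what was scrubbed, not the exact preprocessing code.

# Sketch of log normalization: mask common PII-like fields and drop duplicate entries.
import re

IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def normalize_log(line: str) -> str:
    # Replace IP addresses and email addresses with placeholder tokens.
    line = IP_RE.sub("<IP>", line)
    line = EMAIL_RE.sub("<EMAIL>", line)
    return line.strip()

def deduplicate(lines):
    # Drop exact duplicates while preserving order (the "redundant entries" filter).
    seen = set()
    return [l for l in lines if not (l in seen or seen.add(l))]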
Training Hyperparameters
- Training regime: Mixed precision (fp16)
- Learning rate: 2e-5
- Batch size: 16
- Epochs: 5
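For reference, these hyperparameters map onto a Hugging Face TrainingArguments configuration roughly as follows; the output path and the per-device/accumulation split across the 4 GPUs are assumptions.

# Sketch: the listed hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cyberattack-detection",
    fp16=True,                       # mixed-precision training regime
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # assumed 4 GPUs x 4 = global batch size 16
    num_train_epochs=5,
    logging_steps=50,
    save_strategy="epoch",
)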
Speeds, Sizes, Times
- Training time: ~72 hours on 4 A100 GPUs
- Model size: 70B parameters
- Checkpoint size: ~60 GB
Evaluation
Testing Data, Factors & Metrics
Testing Data
The model was tested on:
- A subset of CVE datasets held out during training.
- Logs from simulated penetration testing environments.
Factors
- Attack types (e.g., DDoS, phishing, SQL injection).
- Domains (e.g., financial, healthcare).
Metrics
- Precision: 92%
- Recall: 89%
- F1 Score: 90.5%
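The reported F1 is consistent with the precision and recall above (2 x 0.92 x 0.89 / (0.92 + 0.89) ≈ 0.905). A minimal way to reproduce these metrics on a new labelled evaluation set, assuming binary attack/benign labels, is sketched below; the label arrays are hypothetical.

# Sketch: computing the reported metrics on a labelled evaluation set.
from sklearn.metrics import precision_recall_fscore_support

# y_true / y_pred are hypothetical binary labels (1 = attack, 0 = benign).
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"precision={precision:.2%} recall={recall:.2%} f1={f1:.2%}")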
Results
The model demonstrated robust performance across multiple attack scenarios, with minimal false positives in controlled environments.
Summary
The Canstralian/CyberAttackDetection model is effective for real-time threat detection in network security contexts, though further tuning may be required for specific use cases.
Environmental Impact
Carbon emissions for training were estimated using the Machine Learning Impact calculator:
- Hardware Type: A100 GPUs
- Hours used: 72
- Cloud Provider: AWS
- Compute Region: us-west-2
- Carbon Emitted: ~50 kg CO2eq
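As a rough sanity check (per-GPU power draw and grid carbon intensity are assumptions, not reported values): 4 GPUs x ~0.4 kW x 72 h ≈ 115 kWh, which at ~0.4 kg CO2eq per kWh comes to roughly 46 kg CO2eq, on the order of the figure above.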
Technical Specifications
Model Architecture and Objective
The model uses the Llama 3.1 decoder-only transformer architecture (70B parameters) with a causal language modeling objective, fine-tuned for cybersecurity threat analysis.
Compute Infrastructure
Hardware
- GPUs: NVIDIA A100 (4 GPUs)
- RAM: 512 GB
Software
- Transformers library by Hugging Face
- PyTorch
- Python 3.10
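A minimal environment matching the stack above can be installed as follows; versions are not pinned in this card, so none are given here.

pip install torch transformers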
Citation
BibTeX:
@misc{canstralian2025cyberattackdetection,
  author    = {Canstralian},
  title     = {CyberAttackDetection},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/canstralian/CyberAttackDetection}
}
Glossary
- CVE: Common Vulnerabilities and Exposures
- OSINT: Open Source Intelligence
- SOC: Security Operations Center
- SIEM: Security Information and Event Management
Model Card Contact
For questions, please contact Canstralian.