AlicanKiraz0 committed on commit 6111c93 · verified · 1 Parent(s): d41f315

Update README.md

Files changed (1):
  1. README.md +37 -3
README.md CHANGED
@@ -2,13 +2,47 @@
 license: mit
 language:
 - en
-base_model: AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity
+base_model:
+- Qwen/Qwen2.5-Coder-7B-Instruct
 pipeline_tag: text-classification
 tags:
-- llama-cpp
 - gguf-my-repo
+- pentest
+- cybersecurity
+- ethicalhacking
+- informationsecurity
 ---
 
+<img src="https://huggingface.co/AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q4_K_M-GGUF/resolve/main/SenecaLLMxqwen2.5-7B.webp" width="1000" />
+
+Curated and trained by Alican Kiraz
+
+[![Linkedin](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://tr.linkedin.com/in/alican-kiraz)
+![X (formerly Twitter) URL](https://img.shields.io/twitter/url?url=https%3A%2F%2Fx.com%2FAlicanKiraz0)
+![YouTube Channel Subscribers](https://img.shields.io/youtube/channel/subscribers/UCEAiUT9FMFemDtcKo9G9nUQ)
+
+Links:
+- Medium: https://alican-kiraz1.medium.com/
+- Linkedin: https://tr.linkedin.com/in/alican-kiraz
+- X: https://x.com/AlicanKiraz0
+- YouTube: https://youtube.com/@alicankiraz0
+
+SenecaLLM was trained and fine-tuned over nearly one month (around 100 hours in total) on various systems, including 1x RTX 4090, 8x RTX 4090, and 3x H100, focusing on the cybersecurity topics listed below. Its goal is to think like a cybersecurity expert and assist with your questions. It has also been fine-tuned to counteract malicious use.
+
+**It does not pursue any profit.**
+
+Over time, it will specialize in the following areas:
+
+- Incident Response
+- Threat Hunting
+- Code Analysis
+- Exploit Development
+- Reverse Engineering
+- Malware Analysis
+
+"Those who shed light on others do not remain in darkness..."
+
+
 # AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF
 This model was converted to GGUF format from [`AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity`](https://huggingface.co/AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity) for more details on the model.
@@ -51,4 +85,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo AlicanKiraz0/SenecaLLM_x_Qwen2.5-7B-CyberSecurity-Q8_0-GGUF --hf-file senecallm_x_qwen2.5-7b-cybersecurity-q8_0.gguf -c 2048
-```
+```