Add model card and metadata
This PR adds a model card with relevant metadata, including the `pipeline_tag`, `library_name`, and license information. It also includes links to the paper and the GitHub repository. Note that this model is used for generating adversarial attacks against image classification models, rather than performing image classification directly.
README.md
CHANGED
@@ -1 +1,13 @@
-
+---
+pipeline_tag: image-classification
+license: mit
+library_name: pytorch
+---
+
+# Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients (CVPR 2025)
+
+This repository contains model checkpoints for generating adversarial attacks on spiking neural networks (SNNs), as described in the paper [Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients](https://arxiv.org/abs/2503.03272). This model is *not* a standard image classification model but rather a tool for generating adversarial examples to evaluate the robustness of SNNs.
+
+**Code:** [https://github.com/ryime/PDSG-SDA](https://github.com/ryime/PDSG-SDA)
+
+**Checkpoints:** Provided in this repository.
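
A minimal sketch of how the released checkpoints might be inspected with PyTorch, since `library_name: pytorch` is declared above. The file name here is hypothetical; see the GitHub repository for the actual checkpoint paths and the attack pipeline itself:

```python
import torch

# Load a released checkpoint on CPU. The file name is hypothetical;
# see https://github.com/ryime/PDSG-SDA for the actual paths.
ckpt = torch.load("snn_checkpoint.pth", map_location="cpu")

# Inspect the contents (typically a state_dict of SNN weights,
# possibly nested inside a dict with training metadata).
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
else:
    print(type(ckpt))
```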