---
pipeline_tag: image-classification
license: mit
library_name: pytorch
---
# Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients (CVPR 2025)
This repository contains model checkpoints for generating adversarial attacks on spiking neural networks (SNNs), as described in the paper *Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients*. Note that these are not standard image-classification models: they serve as tools for generating adversarial examples to evaluate the robustness of SNNs.
- **Code:** https://github.com/ryime/PDSG-SDA
- **Checkpoints:** provided in this repository.
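For background, SNNs are typically trained with surrogate gradients because the spike function itself is non-differentiable, and gradient-based attacks must go through those surrogates. The sketch below shows a generic surrogate-gradient spiking activation in PyTorch; the class name and the rectangular surrogate shape are illustrative assumptions, not the PDSG method from the paper:

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    (rectangular window) in the backward pass. Illustrative only --
    not the paper's PDSG surrogate."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold=1.0):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        # Spike (1.0) wherever the membrane potential crosses threshold.
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: gradient passes only near the threshold.
        surrogate = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_output * surrogate, None

v = torch.tensor([0.3, 0.9, 1.4], requires_grad=True)
spikes = SpikeSurrogate.apply(v)
spikes.sum().backward()
print(spikes)  # spikes only where v >= 1.0
print(v.grad)  # nonzero only within 0.5 of the threshold
```

Because the true gradient of the spike function is zero almost everywhere, the surrogate in `backward` is what any white-box attack actually differentiates through; the paper's contribution concerns how such attacks handle these (otherwise invisible) surrogate gradients.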