Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients (CVPR 2025)

This repository contains model checkpoints for generating adversarial attacks on spiking neural networks (SNNs), as described in the paper *Towards Effective and Sparse Adversarial Attack on Spiking Neural Networks via Breaking Invisible Surrogate Gradients*. These checkpoints do not constitute a standard image-classification model; they are a tool for generating adversarial examples to evaluate the robustness of SNNs.
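
For context, spiking neurons fire through a non-differentiable Heaviside step, so gradient-based training and attacks rely on a smooth surrogate derivative in the backward pass; these surrogate gradients are what the paper's title refers to. Below is a minimal, generic PyTorch sketch of that standard pattern, using a rectangular surrogate rather than the paper's PDSG method:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass, rectangular surrogate
    gradient in the backward pass (standard SNN practice; not the
    paper's PDSG surrogate)."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Fire a spike wherever the potential crosses the threshold (0 here).
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pretend the step has slope 1 within 0.5 of the threshold.
        return grad_output * (v.abs() < 0.5).float()

# Usage: spikes = SurrogateSpike.apply(membrane_potential)
```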

Code: https://github.com/ryime/PDSG-SDA

Checkpoints: Provided in this repository.
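
As a rough usage sketch, a checkpoint from this repository could be plugged into a generic single-step gradient attack such as FGSM. The model constructor and checkpoint filename below are placeholders, and the repository's own PDSG-SDA attack differs; see the code link above for the actual method:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Generic single-step gradient attack (FGSM); a stand-in for
    illustration, not the repository's PDSG-SDA attack."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb each pixel by epsilon in the direction of the loss gradient.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# model = ...  # hypothetical: build the SNN, then load a checkpoint, e.g.
# model.load_state_dict(torch.load("checkpoint.pth", map_location="cpu"))
# adv_images = fgsm_attack(model.eval(), images, labels)
```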
