Adversarially Robust Spiking Neural Networks with Sparse Connectivity

Research output: Contribution to journal › Conference article › peer-review

Abstract

Deployment of deep neural networks in resource-constrained embedded systems requires innovative algorithmic solutions to facilitate their energy and memory efficiency. To further ensure the reliability of these systems against malicious actors, recent works have extensively studied the adversarial robustness of existing architectures. Our work focuses on the intersection of adversarial robustness with memory and energy efficiency in neural networks. We introduce a neural network conversion algorithm designed to produce sparse and adversarially robust spiking neural networks (SNNs) by leveraging the sparse connectivity and weights of a robustly pretrained artificial neural network (ANN). Our approach combines the energy-efficient architecture of SNNs with a novel conversion algorithm, leading to state-of-the-art performance with enhanced energy and memory efficiency through sparse connectivity and activations. Our models achieve up to a 100× reduction in the number of weights to be stored in memory, with an estimated 8.6× increase in energy efficiency compared to dense SNNs, while maintaining high performance and robustness against adversarial threats.
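The core idea of inheriting sparse connectivity from a pretrained ANN can be illustrated with a minimal magnitude-pruning sketch. This is not the paper's conversion algorithm; it is a generic, hypothetical example (the `magnitude_prune` helper and the 1% keep fraction are assumptions for illustration) showing how a sparse mask and the surviving weights of an ANN layer could be carried over to an SNN layer, yielding the kind of ~100× weight-storage reduction the abstract reports.

```python
import numpy as np

def magnitude_prune(weights, keep_fraction):
    """Keep only the largest-magnitude weights; zero out the rest.

    Returns the pruned weight matrix and the boolean connectivity mask.
    """
    k = max(1, int(round(keep_fraction * weights.size)))
    # Threshold at the k-th largest absolute value.
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
# Stand-in for one layer of a robustly pretrained ANN.
ann_weights = rng.standard_normal((128, 128))

# Transfer: the SNN layer inherits both the sparse mask (connectivity)
# and the surviving weight values.
snn_weights, mask = magnitude_prune(ann_weights, keep_fraction=0.01)

# Only ~1% of the weights remain to be stored, a ~100x reduction.
print(np.count_nonzero(snn_weights), "of", snn_weights.size, "weights kept")
```

In practice the retained weights would initialize the synaptic connections of the converted SNN, and only the nonzero entries (plus their indices) need to be stored in memory.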

Original language: English
Pages (from-to): 865-883
Number of pages: 19
Journal: Proceedings of Machine Learning Research
Volume: 280
Publication status: Published - 2025
Event: 2nd Conference on Parsimony and Learning, CPAL 2025 - Stanford, United States
Duration: 24 Mar 2025 - 27 Mar 2025

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
