By Srijon Mandal

Spiking Neural Networks: Advancing AI Architecture




Spiking Neural Networks (SNNs) represent a major shift in artificial intelligence, drawing inspiration from the biological processes of the human brain. Unlike traditional artificial neural networks (ANNs), which rely on continuous signals and static activation functions (e.g., sigmoid, ReLU), SNNs employ discrete, event-driven spikes to transmit information. They are the next frontier in AI in terms of efficiency, functionality, and adaptability.

What are ANNs?

Before delving into SNNs, it is necessary to understand the basics of how traditional ANNs function. ANNs are computational models inspired by the structure of the brain and are at the core of modern artificial intelligence. They consist of layers of interconnected nodes (neurons) that process and transmit information. Each connection carries a weight, which is adjusted during training to improve the network's performance on tasks such as image recognition and natural language processing. A key feature of ANNs is that they use continuous signals for processing, where each neuron's activation is determined by a function of its inputs. They therefore excel at pattern-recognition tasks, but they are also energy-intensive, since every neuron computes on every forward pass.
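As a point of reference, here is a minimal sketch of a single ANN neuron (the inputs and weights are illustrative): a weighted sum of all inputs passed through a continuous activation function on every forward pass.

```python
import numpy as np

def ann_neuron(x, w, b):
    """A conventional ANN neuron: a weighted sum of all inputs passed
    through a continuous activation function (ReLU here)."""
    return np.maximum(w @ x + b, 0.0)

x = np.array([0.5, -1.2, 3.0])  # inputs
w = np.array([0.4, 0.1, 0.2])   # learned weights
print(ann_neuron(x, w, b=0.1))  # prints a continuous value, 0.78
```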

What are SNNs?

Spiking Neural Networks are models that stay much closer to how the human brain actually processes information. As mentioned before, ANNs use mathematical functions to compute outputs, whereas SNNs simulate the firing of neurons through electrical spikes. These spikes are binary events (on/off) that occur when a neuron's membrane potential crosses a threshold, mimicking the firing process of biological neurons.

Towards Biological Networks: The Three S’s

  1. Spikes: Biological neurons interact via single-bit spikes — brief bursts of electrical activity. This allows them to transmit information in discrete events rather than continuous signals.

  2. Sparsity: Most biological neurons remain at rest for extended periods, with only a small fraction firing at any moment. This results in sparse activations that conserve energy.

  3. Static Suppression (Event-driven Processing): SNNs mimic biological sensory systems, which process information only when triggered by external events. This optimizes resource use and lets systems focus only on relevant stimuli (see the sketch after this list).
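A minimal sketch of the event-driven idea (sizes and indices are illustrative): postsynaptic currents are updated only for the presynaptic neurons that actually fired, so the work scales with the number of spike events rather than the size of the network.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 1000, 100
w = rng.normal(0.0, 0.1, size=(n_pre, n_post))  # synaptic weight matrix

spike_events = [3, 42, 917]  # indices of the few presynaptic neurons that fired
current = np.zeros(n_post)
for pre in spike_events:
    current += w[pre]        # silent neurons trigger no computation at all
```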


The Leaky Integrate-and-Fire Model

The Leaky Integrate-and-Fire (LIF) model forms the foundation of most SNNs. It models the behavior of biological neurons using principles from electrical circuits:

  • Capacitance: The lipid-bilayer membrane of a neuron acts like a capacitor, storing charge.

  • Resistance: Ion leakage and transfer across the membrane are resistive, similar to electrical resistors in circuits.

  • Spike Generation: When the accumulated charge (voltage) crosses a threshold, the neuron “fires” a spike and resets.

The model can be described as a first-order low-pass filter (an RC circuit) governed by Kirchhoff's Current Law (KCL). The neuron's membrane potential integrates incoming current over time, just as a capacitor in an RC circuit accumulates charge. The potential gradually decays due to leakage, filtering out high-frequency inputs and responding primarily to slow, sustained stimuli. This simple model captures the core dynamics of biological neurons and underpins most spiking network implementations.
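The standard LIF dynamics are tau * dU/dt = -(U - U_rest) + R * I(t), with a spike and reset whenever U crosses the threshold; discretizing with a small time step gives the update below. This is a minimal simulation sketch, and all parameter values are illustrative.

```python
import numpy as np

tau, R = 10.0, 5.0                   # membrane time constant (ms), resistance
u_rest, threshold, u_reset = 0.0, 1.0, 0.0
dt, T = 1.0, 100                     # time step (ms), number of steps

current = np.zeros(T)
current[20:80] = 0.3                 # a sustained input current pulse

u, spike_times = u_rest, []
for t in range(T):
    # Leaky integration: decay toward rest plus input-driven charge-up
    u += (dt / tau) * (-(u - u_rest) + R * current[t])
    if u >= threshold:               # fire when the potential crosses threshold
        spike_times.append(t)
        u = u_reset                  # reset after the spike

print(spike_times)                   # the neuron spikes periodically during the pulse
```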


Training Spiking Neural Networks

Training SNNs is more complex than training ANNs because of the discrete nature of spikes and the temporal encoding of information. Three key methods are employed to train SNNs:

  1. Spike-Timing-Dependent Plasticity (STDP):

STDP is a local learning rule that adjusts synaptic weights (connection strengths) based on the relative timing of pre- and postsynaptic spikes. If a presynaptic neuron fires slightly before the postsynaptic neuron, the connection is strengthened; if it fires after, the connection weakens. The rule aligns with Hebb's principle: "Neurons that fire together, wire together." The size of each adjustment depends on the temporal difference, with closer firing times causing larger weight changes.
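A minimal sketch of a pair-based STDP update with an exponential time window (the constants are illustrative):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP with an exponential window.
    dt > 0: pre fired before post -> potentiate (LTP).
    dt <= 0: pre fired after post -> depress (LTD).
    Smaller |dt| produces a larger weight change."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))   # keep the weight in a bounded range

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=12.0)  # pre leads post: w increases
w = stdp_update(w, t_pre=30.0, t_post=25.0)  # pre lags post:  w decreases
```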

  2. Shadow Training (ANN-to-SNN Conversion):

Converting a pre-trained ANN into an SNN sidesteps the dead neuron problem that arises when training spiking networks directly: neurons that never fire produce no gradient signal during backpropagation and therefore never learn. This approach leverages the mature, efficient training pipeline of ANNs while still producing a power-efficient SNN, and it retains the high-performing ANN for reference. However, the converted SNN only approximates the ANN, which places an upper bound on its performance, and it cannot exploit the temporal dynamics that make SNNs distinctive.
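A minimal sketch of rate-based conversion for a single fully connected layer (assuming activations are normalized to roughly [0, 1]): the firing rate of integrate-and-fire neurons driven by a constant current approximates the ANN's ReLU output.

```python
import numpy as np

def relu_layer(x, w):
    return np.maximum(w @ x, 0.0)

def converted_snn_layer(x, w, T=1000, threshold=1.0):
    """The ANN pre-activation is injected as a constant current; the
    firing rate (spike count / T) approaches relu(w @ x)."""
    current = w @ x               # constant input current per neuron
    u = np.zeros(w.shape[0])      # membrane potentials
    counts = np.zeros(w.shape[0])
    for _ in range(T):
        u += current              # integrate
        fired = u >= threshold
        counts += fired
        u[fired] -= threshold     # reset-by-subtraction reduces conversion error
    return counts / T             # estimated firing rates

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=4)
w = rng.uniform(-0.25, 0.25, size=(3, 4))
print(relu_layer(x, w))           # ANN output
print(converted_snn_layer(x, w))  # SNN rates: close, but only an approximation
```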


  3. Backpropagation Through Time (BPTT Using Spikes):

BPTT adapts traditional gradient-based learning to SNNs, so the network is no longer bounded by the performance ceiling of a converted ANN. By unrolling the network in time and applying standard backpropagation, it remains compatible with automatic differentiation and modern deep learning techniques. However, the training is computationally intensive on non-neuromorphic hardware, and it is less biologically plausible because of its reliance on global temporal gradients.
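Because the spike threshold is non-differentiable, BPTT-style training commonly substitutes a smooth surrogate gradient on the backward pass. A minimal PyTorch sketch using a fast-sigmoid surrogate (one common choice among several):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step on the forward pass, smooth surrogate on the backward pass."""

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()          # spike if the membrane potential exceeds 0

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (1 + |u|)^2
        return grad_output / (1.0 + u.abs()) ** 2

spike_fn = SurrogateSpike.apply
u = torch.randn(5, requires_grad=True)  # membrane potentials (threshold folded in)
spike_fn(u).sum().backward()            # gradients flow through the surrogate
print(u.grad)
```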

Why Use SNNs?

  • Energy Efficiency: SNNs compute only when spikes occur, drastically reducing power consumption. This makes them ideal for applications in IoT devices and other scenarios where energy is a major limiting factor.

  • Temporal Processing: SNNs are well suited to time-dependent data, which makes them useful for audio and speech recognition, robotics, and event-based vision systems.

  • Neuromorphic Compatibility: They are tailored for neuromorphic chips, specialized processors designed to emulate neural circuits. These chips let SNNs achieve high speed by leveraging sparse computations and event-driven processing.

  • Biological Plausibility: SNNs' resemblance to real neural systems makes them valuable for studying brain function and disorders. They give researchers a tool to better understand complex neural dynamics and to simulate neurological conditions.

Applications

  • Brain-Computer Interfaces (BCIs): SNNs are used to model brain activity and improve communication between the brain and external devices, such as prosthetics or assistive technologies.

  • Neuromorphic Computing: SNNs are implemented in neuromorphic hardware to mimic the brain's energy-efficient processing, enabling real-time, low-power computations in devices like robots or smart sensors.

  • Robotics and Autonomous Systems: SNNs help in real-time control systems for robotic applications, particularly in tasks like object recognition and navigation based on sensory inputs.

  • Event-based Vision: SNNs are used to process data from event-based cameras such as dynamic vision sensors, which capture changes in a scene rather than full frames, offering high temporal resolution for fast-moving objects.

  • Sensorimotor Integration: SNNs are used to simulate and control the interaction between sensory inputs and motor outputs, for example in modeling reflexes or coordinating movement in robots.

Challenges and Future Work

The discrete nature of spike-based signals significantly complicates learning in SNNs. Designing efficient training algorithms that balance biological plausibility with computational feasibility on specialized hardware remains an open challenge. Moreover, SNNs reach their full potential only on neuromorphic hardware, which limits their accessibility. Because the field is still emerging, standardized datasets and benchmarks are scarce, complicating comparisons between models. Finally, most current research focuses on image datasets, leaving other domains largely unexplored.

Interested in knowing more? Please contact srijon@cogniz.org






