In recent years, AI models like ChatGPT and other large deep learning systems have become incredibly powerful, but they also come with a major downside: high energy consumption. Training and running these models takes a lot of computing power, which raises concerns about sustainability and environmental impact. Because of this, researchers are looking for new ways to build efficient AI systems that don’t require as much energy.
One exciting option is Spiking Neural Networks (SNNs). These models are inspired by how the human brain works, using short bursts of electrical activity (called spikes) to send information between neurons. This spike-based communication allows SNNs to work in an event-driven way, meaning they only compute when needed. Instead of constantly processing data, neurons in an SNN accumulate input over time; once a certain threshold is reached, the neuron “fires” a spike (see Figure 1). This approach cuts down on unnecessary calculations and mimics how real neurons behave, which makes SNNs much more energy-efficient, especially when run on neuromorphic hardware (hardware designed to mimic the brain). Thanks to this efficiency, SNNs are a great fit for power-limited environments like Edge Computing and the Internet of Things (IoT).
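The accumulate-and-fire behavior described above can be sketched with a toy leaky integrate-and-fire (LIF) neuron. The decay factor and threshold values below are illustrative, not taken from any particular SNN implementation:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron.
# Threshold and decay are illustrative values, not tuned parameters.

def lif_neuron(inputs, threshold=1.0, decay=0.9):
    """Accumulate input over time; emit a spike (1) when the
    membrane potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = decay * potential + x  # leak, then integrate input
        if potential >= threshold:
            spikes.append(1)               # fire a spike
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak input fires rarely; stronger input fires more often,
# which is the event-driven sparsity that saves energy.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.3]))  # [0, 0, 0, 1, 0]
print(lif_neuron([0.8, 0.8, 0.8, 0.8, 0.8]))  # [0, 1, 0, 1, 0]
```

Because the neuron stays silent until its potential crosses the threshold, most time steps produce no output at all, and downstream computation only happens when a spike arrives.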

Even though SNNs can reach similar accuracy levels to traditional Deep Neural Networks (DNNs), they’re still not widely used. One big reason is that they’re harder to design and require tuning a lot of specific parameters (like when and how neurons fire) that don’t exist in regular DNNs.
Overcoming SNN complexities with HTEC’s GenericSNN framework
To make it easier to design, train, and deploy Spiking Neural Networks (SNNs), several tools like Nengo and Rockpool have been developed in recent years. However, while these frameworks offer useful examples, they often come with steep learning curves. They tend to lack consistent naming conventions and don’t provide beginner-friendly starting points, which can make it tough for new users to dive in and start building real-world applications.
To solve this, HTEC created an open-source framework called GenericSNN. It’s built to be user-friendly, especially for people used to working with traditional Artificial Neural Networks (ANNs). GenericSNN follows familiar design patterns and uses naming conventions from popular libraries like TensorFlow, making it easier to learn and integrate into existing machine learning workflows. Figure 2 illustrates the pipeline used when working with this framework.
The first step is converting the input data into a temporal sequence, since SNN neurons need time to accumulate electrical signals, generate spikes, and settle on a stable output. Like other data preprocessing steps, this one is left to the user, because the right encoding depends on the specific application being researched. Once this step is complete, GenericSNN provides a streamlined, user-friendly workflow for designing, training, and running the SNN. One example is its set of ready-made templates for common tasks like classification and regression, which let beginners get started quickly while learning how to fine-tune SNN-specific settings over time.

The benefits of using HTEC’s GenericSNN
HTEC’s GenericSNN offers several key advantages for deploying and using Spiking Neural Network (SNN) models. The most notable benefits include:
- User-friendly model definition, simplifying and accelerating the development process.
- Guided parameter configuration, providing helpful support for new users and streamlining adoption.
- Open-source framework with proven interoperability with widely used AI libraries such as scikit-learn, allowing easy integration.
- Adaptability for different use cases such as classification and regression.
You can find more information about this framework in this paper.
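To illustrate the kind of scikit-learn interoperability listed above, here is a hypothetical sketch of the fit/predict contract that makes such integration possible. `ToySNNClassifier` is an illustrative stand-in, not GenericSNN's actual API:

```python
# Hypothetical sketch of the scikit-learn estimator contract.
# ToySNNClassifier is a stand-in, NOT GenericSNN's real interface.

class ToySNNClassifier:
    """Stand-in estimator exposing the fit/predict convention."""

    def fit(self, X, y):
        # Trivial "training" for illustration: remember the majority label.
        self.majority_ = max(set(y), key=y.count)
        return self

    def predict(self, X):
        return [self.majority_ for _ in X]

# Any object exposing this interface can be dropped into standard
# scikit-learn tooling such as pipelines and cross-validation helpers.
clf = ToySNNClassifier().fit([[0], [1], [1]], [0, 1, 1])
print(clf.predict([[2], [3]]))  # [1, 1]
```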
Making a real impact with SNNs
While relatively new compared to traditional neural networks, spiking neural network (SNN) technology shows strong potential in advancing computational systems. By mimicking how biological neurons communicate through discrete spikes, SNNs can outperform conventional models in energy efficiency, low-latency processing, and event-driven tasks, making them especially valuable for edge computing, robotics, and neuromorphic hardware. Below, we’ll explore a real-life use case that highlights the practical advantages of this emerging approach.
Non-invasive accuracy: Redefining people counting with SNNs and Infineon radars
Privacy concerns and energy efficiency are increasingly becoming key considerations for indoor people-counting solutions. This use case demonstrates a cutting-edge approach that leverages radar technology for accurate people detection, ensuring complete privacy, while using Spiking Neural Networks (SNNs) to optimize power consumption and processing efficiency.
To achieve this, the HTEC team placed four Infineon BGT60TR13C radars in different corners of a small office room, as illustrated in Figure 3. This diverse sensor placement guarantees that the dataset captures a variety of perspectives, enhancing the robustness and reliability of the model. Each sensor was operated independently, rather than combining data into a single view, making the model adaptable to different sensor configurations and improving its generalization across various environments.
This radar has a maximum detection range of 15 meters and a minimum range resolution of 6 cm, offering exceptional performance for indoor environments. These specifications are more than sufficient for accurate and reliable people-counting, ensuring the system’s effectiveness in privacy-conscious applications.
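The quoted resolution can be sanity-checked with the standard FMCW relation ΔR = c / (2B), where B is the chirp sweep bandwidth. The bandwidth value below is an assumption for illustration; the actual figure depends on how the radar's chirp is configured:

```python
# Sanity check of FMCW range resolution: dR = c / (2 * B).
# The 2.5 GHz bandwidth here is an illustrative assumption.

C = 299_792_458  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """FMCW range resolution in meters for a given chirp bandwidth."""
    return C / (2 * bandwidth_hz)

# A ~2.5 GHz sweep yields roughly the quoted 6 cm resolution.
print(round(range_resolution(2.5e9) * 100, 1), "cm")
```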

With each sensor covering a broad area, the system ensures full environmental coverage. The dataset includes a wide range of scenarios, from an empty room to varying occupancy levels (one to five people), recorded on multiple days to capture natural variations in behavior. This data diversity ensures the model is trained to perform under real-world conditions.
Results
To demonstrate the power of the proposed SNN, we compared its performance against several well-established neural networks, including VGG16, ResNet50, and MobileNet. All models were optimized using the Hyperband algorithm, ensuring that each model was tuned for peak performance. The results, displayed in Table 1, highlight the SNN’s impressive accuracy and F1-score.
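For intuition, here is a simplified, pure-Python sketch of the successive-halving idea at the heart of Hyperband: train many configurations briefly, keep the best fraction, and repeat with a larger training budget. The toy scoring function is a stand-in, not the tuning setup used in our experiments:

```python
# Simplified sketch of successive halving, the core of Hyperband.
# The scoring function below is a toy stand-in for "train and evaluate
# this configuration under the given budget".

def successive_halving(configs, score, budget=1, eta=3):
    """Repeatedly keep the top 1/eta configurations, growing the
    budget by a factor of eta each round, until one survives."""
    while len(configs) > 1:
        ranked = sorted(configs, key=lambda c: score(c, budget),
                        reverse=True)
        configs = ranked[:max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy objective: the configuration's value IS its score.
best = successive_halving([0.001, 0.01, 0.1, 0.2, 0.5, 0.9],
                          score=lambda c, b: c)
print(best)  # 0.9
```

Hyperband runs several such brackets with different starting budgets, which is what lets it tune very different architectures on a roughly equal footing.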

The comparison clearly shows that the SNN outperforms all reference models, with the exception of VGG16, whose performance is comparable to that of the SNN.
However, performance alone is not enough to evaluate the practical applicability of these models—complexity plays a crucial role. We assessed model complexity using two important metrics: the number of parameters and FLOPs (Floating Point Operations). These are summarized in Table 2.
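As a back-of-the-envelope illustration of these two metrics, the snippet below counts parameters and approximate forward-pass FLOPs for a small fully connected network. The layer sizes are assumptions for illustration, not the architectures evaluated in Table 2:

```python
# Back-of-the-envelope parameter and FLOP counts for a fully
# connected network. Layer sizes below are illustrative assumptions.

def dense_params(sizes):
    """Parameters of a fully connected net: weights plus biases
    for each consecutive pair of layer sizes."""
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

def dense_flops(sizes):
    """Approximate forward-pass FLOPs: one multiply and one add
    per weight (biases and activations ignored)."""
    return sum(2 * i * o for i, o in zip(sizes, sizes[1:]))

sizes = [1024, 256, 64, 6]  # e.g., radar features down to 6 classes
print(dense_params(sizes), dense_flops(sizes))
```

Even this tiny example shows how quickly wide layers dominate both counts, which is why large convolutional backbones like VGG16 are so much costlier than compact models.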

While VGG16 demonstrates strong performance, it comes at a significant computational cost, requiring far more parameters and FLOPs than the SNN. In contrast, the SNN achieves comparable accuracy with significantly lower computational resources, making it a more efficient choice for real-world deployment, especially in energy-sensitive applications.
A more detailed overview of this application can be found in our paper “Spiking Neural Networks for People Counting based on FMCW radar”.
Why spiking neural networks are deep learning’s next frontier
The results clearly demonstrate the advantages of the proposed SNN architecture. Not only does it achieve state-of-the-art performance, surpassing or matching well-established models like VGG16, but it also does so with a fraction of the computational cost. With dramatically fewer parameters and FLOPs, the SNN proves to be an efficient and scalable solution, particularly well-suited for deployment in environments where resources are limited or energy efficiency is critical. This balance of accuracy and efficiency positions the SNN as a compelling alternative to traditional deep learning models for practical, real-world applications.
To support easier adoption of SNNs, HTEC developed GenericSNN, an open-source framework that simplifies the design, training, and deployment process. Built with familiar patterns from libraries like TensorFlow, it helps users, especially those coming from traditional ANN backgrounds, quickly get started with SNNs using ready-made templates and a user-friendly interface.
The promise of SNNs is real and growing. Connect with our experts and learn how you can turn that potential into real-world impact.