The expansion of solar energy has accelerated greatly in the last decade, with a record-breaking 456 gigawatts (GW) of photovoltaic capacity installed worldwide in 2023. This recent increase brings the total operational capacity to over 1.6 terawatts (TW), which amounts to 8.3% of global electricity demand.
Despite its rapid adoption and incredible potential, many organizations and industries remain hesitant to fully commit to solar power. One of the key reasons? Inconsistent energy output and efficiency limitations.
Solar panel performance is affected by fluctuating weather conditions, varying sunlight intensity, and how well the system manages power delivery. If the energy generated isn’t properly regulated, it can lead to waste, inefficiencies, or unreliable power supply—concerns that businesses can’t afford when relying on a stable energy source.
In this climate, fine-tuning the duty cycle, the fraction of each switching period during which the system's power converter is conducting, is crucial to extracting the maximum energy from a solar panel system. In this blog, we’ll explore how an AI-driven solution developed by eesy-innovation, an HTEC company, and integrated into Infineon’s PSOC™ Edge platform helps optimize the duty cycle for maximum power utilization while improving the system’s predictability and maintenance.
Optimizing the duty cycle
Duty cycle optimization involves adjusting the on-off switching pattern of a power converter to ensure a solar system operates at the most efficient point. Fluctuating environmental conditions such as temperature and irradiance affect the system’s Maximum Power Point (MPP), or the point at which a panel generates its highest possible power. The system needs to adjust the duty cycle to track and stay as close to the MPP as possible — it shouldn’t produce excessive power that surpasses the load’s capacity or too little power that fails to meet the load’s demands.
Traditionally, duty cycle optimization has been handled using perturb and observe (P&O) algorithms. This method iteratively perturbs the duty cycle and measures the resulting power output to home in on the optimal setting. The downsides of P&O include the many iterations it can take to stabilize; during that time, energy may be lost to suboptimal configurations. In addition, many implementations adjust voltage and current in discrete increments. These “steps” are often a fixed size, and as the system approaches the MPP, a fixed step can become too coarse, causing the system to overshoot or undershoot the exact MPP and oscillate around it.
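To make the P&O behavior concrete, here is a minimal Python sketch of a fixed-step P&O loop. The `read_panel` and `set_duty` callbacks are hypothetical stand-ins for the hardware interface, and the fixed step size is exactly what produces the oscillation around the MPP described above.

```python
def perturb_and_observe(read_panel, set_duty, d0=0.5, step=0.01, iterations=100):
    """Classic fixed-step P&O MPPT loop (illustrative sketch).

    read_panel() is a hypothetical callback returning (voltage, current)
    at the present duty cycle; set_duty(d) applies a new duty cycle.
    """
    duty = d0
    set_duty(duty)
    v, i = read_panel()
    power_prev = v * i
    direction = 1  # +1 increases the duty cycle, -1 decreases it
    for _ in range(iterations):
        # Perturb, clamped to a valid duty cycle in [0, 1]
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty(duty)
        v, i = read_panel()
        power = v * i
        if power < power_prev:
            direction = -direction  # last perturbation hurt: reverse course
        power_prev = power
    return duty
```

Because the step never shrinks, the loop keeps hopping back and forth across the maximum once it gets there, which is the source of the steady-state oscillation P&O is known for.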
Overcoming unpredictability with AI neural networks
To overcome the limitations of P&O algorithms in duty cycle optimization, eesy-innovation developed a cutting-edge solution that uses an artificial neural network. This AI model uses real-time data, including ambient temperature, voltage, and current produced by the solar panel, to predict the optimal duty cycle. After testing the model using a publicly available dataset, eesy-innovation found that it could achieve up to 99% accuracy in a single iteration.
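To picture how a one-shot prediction differs from an iterative search, here is a toy forward pass through a small network mapping (temperature, voltage, current) to a duty cycle. The layer sizes, weights, and normalization constants are hypothetical placeholders, since the actual model's architecture is not described here; the point is that a single inference replaces many P&O iterations.

```python
import math

def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, W, b):
    # Fully connected layer: y_j = sum_i x_i * W[i][j] + b[j]
    return [sum(xi * W[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]

def predict_duty(temp_c, voltage, current, params):
    """Single forward pass of a small MLP: (T, V, I) -> duty cycle.

    Layer sizes, weights, and input scaling are illustrative assumptions.
    """
    x = [temp_c / 100.0, voltage / 50.0, current / 10.0]  # crude normalization
    h = relu(dense(x, params["W1"], params["b1"]))
    out = dense(h, params["W2"], params["b2"])
    # Squash to (0, 1) so the output is always a valid duty cycle
    return 1.0 / (1.0 + math.exp(-out[0]))
```

In the real deployment the trained weights encode the panel's measured behavior, so one such pass lands at (or very near) the optimal duty cycle instead of stepping toward it.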
Model integration to improve AI operations
To further improve the solar panel system’s efficiency, eesy-innovation’s team of experts integrated the AI algorithm into Infineon’s PSOC™ Edge platform, one of the latest additions to the PSOC™ MCU portfolio. The platform features a dedicated neural processing unit (NPU) that accelerates AI operations in hardware. This integration reduced the system latency to under one millisecond, compared to the one-second latency reported for P&O algorithms. Additionally, the platform and optimized algorithm use only 2.59 microjoules (µJ) per inference, as measured in a series of internal tests by eesy-innovation experts. Lower energy consumption per inference can reduce operational costs and increase sustainability. When the experts executed the same algorithm on an Arm Cortex-M4 platform, energy per inference increased to 109.03 µJ and inference took 7 milliseconds.
Reducing model size to improve memory efficiency and computational load
Next, eesy-innovation optimized the solution by shrinking the AI model to improve memory efficiency and reduce computational load. The team of experts used model quantization techniques to cut the memory footprint of the algorithm’s neural network from 1,681 KB to 218 KB while preserving its accuracy. The smaller footprint also reduced the number of processing steps required for execution, ensuring fast and efficient performance.
To achieve this, the team deployed the AI algorithm onto the PSOC™ Edge using Infineon’s ModusToolbox™. The framework accepts models in two forms: an 8-bit quantized model in the well-known TensorFlow Lite (TFLite) format, for users who want to control the quantization process themselves, or a floating-point Keras model, in which case the framework handles quantization and optimization automatically with minimal user intervention. Either path ensures that the AI algorithm maintains high accuracy while optimizing performance. By converting the model into a format suitable for C-based environments and storing its weights and parameters as uint8 values, the framework enables faster execution on the AI hardware accelerator. This allows users to integrate their AI algorithms into Infineon’s embedded devices seamlessly, with both precision and efficiency, and without manual conversion from Python to C code.
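To give a feel for what storing weights as uint8 buys, here is a simplified Python sketch of affine 8-bit quantization: each float weight is mapped to an integer in 0–255 plus a shared scale and zero point, cutting storage to roughly a quarter of float32. This is only the underlying idea; the TFLite/ModusToolbox™ toolchain applies a more sophisticated per-tensor or per-channel scheme.

```python
def quantize_uint8(weights):
    """Affine (asymmetric) 8-bit quantization of a list of float weights.

    Returns the uint8 values plus the shared scale and zero point
    needed to approximately recover the originals.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant weights
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_uint8(q, scale, zero_point):
    """Recover approximate float weights from the quantized form."""
    return [(qi - zero_point) * scale for qi in q]
```

The reconstruction error per weight is bounded by roughly one quantization step, which is why a well-calibrated 8-bit model can keep accuracy close to its floating-point original while shrinking dramatically.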
More than maximizing solar energy efficiency
While optimizing for maximum power output was the primary aim of the solution, real-time AI insights offered another important benefit — predictive maintenance. The eesy-innovation team created a dedicated user interface, accessible via Infineon’s AIROC CYW55513 tri-band Wi-Fi & Bluetooth single chip combo, to provide continuous insight into the system’s performance. Since the AI model predicts the expected energy output, its real-time predictions can be cross-referenced with actual energy generation. A significant discrepancy may indicate component degradation, allowing stakeholders to schedule maintenance before performance deteriorates. This innovative approach sets a new benchmark for real-time energy optimization and system health management in renewable energy technologies.
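The cross-referencing step can be pictured as a simple shortfall check. The function name and the 10% threshold below are illustrative assumptions, not values from the deployed system; a production monitor would smooth readings over time before alerting.

```python
def flag_degradation(predicted_w, actual_w, tolerance=0.10):
    """Flag possible component degradation when measured output falls
    short of the model's prediction by more than `tolerance` (fractional).

    The 10% default threshold is a hypothetical placeholder.
    """
    if predicted_w <= 0:
        return False  # no meaningful prediction to compare against
    shortfall = (predicted_w - actual_w) / predicted_w
    return shortfall > tolerance
```

For example, a panel predicted to deliver 100 W but measuring only 85 W would be flagged, while a 95 W reading would pass as normal variation.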
Creating smarter solutions that make a positive impact is at the heart of our AI/ML-driven engineering. Connect with us at Embedded World 2025 in Nuremberg and discover how we can help accelerate innovation in your embedded systems.