The idea that a video camera could one day help monitor patient recovery in real time is no longer hypothetical; it is an emerging frontier in medical research. With advances in computer vision, signal processing, and machine learning, researchers are exploring how everyday tools, such as smartphones and video cameras, can provide continuous physiological insights. In reconstructive surgery, one critical challenge is tracking blood flow in newly transplanted tissue (free flaps) during the first 48 hours after surgery. Poor perfusion can lead to tissue failure, yet current monitoring techniques such as Doppler or CT scans are costly, labor-intensive, and not suited to continuous observation.
This research explores imaging photoplethysmography (iPPG) — a method for detecting subtle skin color changes caused by blood flow using only RGB video — as a low-cost, contactless alternative. While most prior studies rely on static images, this project is among the first to analyze full-length videos to observe circulation over time. In a pioneering collaboration, HTEC, academic researchers, and clinicians at Oslo University Hospital developed a signal processing pipeline to extract heart rate and perfusion data from video and laid the foundation for future machine learning models that could automate post-surgical flap monitoring.
Challenge: Exploring contactless detection of blood flow in surgery
Non-contact vital signs sensing has drawn growing interest in both research and clinical communities, offering a more convenient and non-restrictive way to estimate key health parameters. In surgical contexts, where minimizing patient risk and intervention time is critical, the ability to capture physiological signals without physical contact presents a powerful opportunity — particularly in procedures like free flap reconstruction, where tracking tissue perfusion is essential.
HTEC’s team set out to test whether standard video recordings could be used to locate perforator vessels beneath the skin surface, a key step in ensuring the success of flap transplantation. The goal was to build a proof of concept that could ultimately reduce dependence on time-sensitive and invasive tools like CT angiography and pave the way for more accessible, lower-cost care.
To explore this, the team recorded patients in operating room conditions using a video camera and a smartphone, under two lighting scenarios (with and without surgical lights). The core challenge: extract meaningful, high-fidelity physiological signals from noisy, real-world surgical videos — and prove that this low-cost, contactless approach could work.

Another crucial step in this process was identifying and isolating the region of interest (ROI) — the area of the skin most likely to reflect underlying blood flow from the dominant perforator. However, the ROI was subject to significant movement caused by the patient’s breathing, which introduced motion artifacts that could interfere with signal extraction. To address this, the team implemented a motion tracking approach to reduce artifacts and maintain signal quality throughout the video.
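To make the motion-compensation step concrete, here is a minimal sketch of one way to keep a manually selected skin patch aligned across frames, using OpenCV template matching. The video file name, ROI coordinates, and the choice of template matching are illustrative assumptions rather than the exact technique used in the study.

```python
# Minimal ROI-tracking sketch (illustrative only): re-locate a manually
# selected patch of skin in every frame so breathing motion does not corrupt
# the extracted color signal. File name, ROI coordinates, and the use of
# template matching are assumptions for demonstration.
import cv2
import numpy as np

cap = cv2.VideoCapture("surgical_recording.mp4")   # hypothetical file name
ok, first_frame = cap.read()

x, y, w, h = 400, 300, 80, 80                      # hypothetical initial ROI (pixels)
template = first_frame[y:y + h, x:x + w].copy()

roi_means = []                                     # mean B, G, R of the tracked ROI per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Re-locate the reference patch in the current frame.
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(scores)
    roi = frame[by:by + h, bx:bx + w]
    roi_means.append(roi.reshape(-1, 3).mean(axis=0))

cap.release()
roi_signal = np.array(roi_means)                   # shape: (frames, 3)
```

More robust trackers (optical flow or dedicated object trackers, for example) can be substituted here without changing the rest of the pipeline.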
Solution: Building the pipeline for video-based physiological sensing
To explore their hypothesis that video-based physiological sensing could be used in clinical settings, HTEC’s team and their collaborators built a comprehensive pipeline that extracted meaningful physiological signals from video using signal processing and laid the groundwork for AI-based analysis.
Patients were recorded using both a video camera and a smartphone in two lighting conditions, and contact-based pulse measurements were collected in parallel to validate the results. Despite the variability introduced by ambient lighting and patient motion caused by breathing, heart rate values estimated from the videos remained within ±5 beats per minute of those obtained via contact sensors, confirming strong baseline accuracy.
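As a simple illustration of how that agreement can be checked, the snippet below compares video-derived heart rates against contact-sensor references; the values are placeholders, not measurements from the study.

```python
# Placeholder agreement check between video-derived and contact-based heart
# rates. The numbers are illustrative, not data from the study.
import numpy as np

hr_video = np.array([72.0, 65.5, 80.2])    # bpm, estimated from video (placeholder)
hr_contact = np.array([70.0, 67.0, 78.5])  # bpm, contact-sensor reference (placeholder)

errors = np.abs(hr_video - hr_contact)
print(f"mean absolute error: {errors.mean():.1f} bpm")
print(f"all within ±5 bpm: {bool((errors <= 5).all())}")
```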
A manually selected region of interest (ROI) near the umbilicus, where perforator vessels are typically located, was tracked throughout the video using a motion compensation technique to account for artifacts caused by breathing. From this region, subtle changes in skin color were amplified using a signal extraction method known to perform well in facial tracking contexts, and two estimation methods (one time-based, one frequency-based) were applied to derive the pulse rate.
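The specific amplification and estimation methods are not spelled out here, so the sketch below shows one common way such estimates are produced from the tracked ROI trace: band-pass filter the mean green-channel signal, then read off heart rate both from peak-to-peak intervals (time domain) and from the dominant spectral peak (frequency domain). The frame rate, filter band, and reliance on the green channel are assumptions.

```python
# Illustrative pulse-rate estimation from a per-frame ROI color trace.
# The 30 fps frame rate, 0.7-3.0 Hz band (42-180 bpm), and use of the green
# channel are assumptions for demonstration, not the study's exact settings.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_heart_rate(green_trace, fs=30.0):
    """Return (time-domain bpm, frequency-domain bpm) from a green-channel trace."""
    # Band-pass filter around plausible heart-rate frequencies.
    b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
    pulse = filtfilt(b, a, green_trace - np.mean(green_trace))

    # Time-domain estimate: average spacing between detected pulse peaks.
    peaks, _ = find_peaks(pulse, distance=int(fs / 3))
    hr_time = 60.0 * fs / np.mean(np.diff(peaks))

    # Frequency-domain estimate: location of the strongest spectral peak.
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(pulse))
    band = (freqs >= 0.7) & (freqs <= 3.0)
    hr_freq = 60.0 * freqs[band][np.argmax(spectrum[band])]
    return hr_time, hr_freq
```

With the earlier ROI-tracking sketch, a call such as `estimate_heart_rate(roi_signal[:, 1])` would operate on the green channel of the tracked-ROI trace.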
For perfusion mapping, the ROI was divided into small cells, and two techniques were tested to visualize localized blood flow: pixel-wise correlation and amplitude filtering. While results were variable, both methods revealed patterns suggestive of underlying physiological structures, highlighting the promise of this approach.
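For a rough sense of how the grid-based mapping works, the sketch below divides a stabilized ROI into small cells and scores each one in two hedged ways: correlation of its trace with a global reference pulse, and its amplitude within the heart-rate band. Cell size, frame rate, and band limits are assumptions, and the correlation and filtering details used in the project may differ.

```python
# Illustrative perfusion-mapping sketch: split a stabilized ROI video into
# small cells, then score each cell by (a) correlation with a reference pulse
# and (b) amplitude in the heart-rate band. Cell size, frame rate, and band
# limits are assumptions for demonstration.
import numpy as np

def perfusion_maps(frames, ref_pulse, fs=30.0, cell=8):
    """frames: (T, H, W) green-channel ROI video; ref_pulse: (T,) reference pulse."""
    T, H, W = frames.shape
    rows, cols = H // cell, W // cell
    corr_map = np.zeros((rows, cols))
    amp_map = np.zeros((rows, cols))
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)                  # plausible heart-rate band
    ref = (ref_pulse - ref_pulse.mean()) / (ref_pulse.std() + 1e-9)
    for r in range(rows):
        for c in range(cols):
            patch = frames[:, r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            trace = patch.mean(axis=(1, 2))
            z = (trace - trace.mean()) / (trace.std() + 1e-9)
            corr_map[r, c] = float(np.mean(z * ref))        # correlation with reference pulse
            amp_map[r, c] = np.abs(np.fft.rfft(z))[band].max()  # band amplitude
    return corr_map, amp_map
```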
These signal processing methods, grounded in standard techniques but applied in a novel clinical context, demonstrated the feasibility of extracting clinically relevant insights from routine video recordings, a critical step toward future automation.
Machine learning potential
While this project phase focused primarily on signal extraction and visualization, it also laid the foundation and made recommendations for future machine learning–driven analysis of tissue perfusion. Although the initial focus of this work was on identifying perforator locations, the methods developed, particularly those for extracting and analyzing localized perfusion signals, proved to have broader potential. As the project progressed, the emphasis naturally shifted toward post-operative monitoring, where the same signal features can be used to track tissue health over time. This evolution reflects a move from static anatomical mapping toward dynamic, real-time assessment, pointing the way to intelligent recovery monitoring tools.
The research established a strong conceptual foundation for future AI applications in tissue monitoring, highlighting several promising directions:
- Pattern discovery through self-supervised learning, using clustering techniques to identify distinct perfusion zones without the need for manual labeling (see the sketch after this list).
- Segment-based tracking of perfusion dynamics, capturing how signal characteristics — such as amplitude, quality, and periodicity — evolve over time within localized tissue regions.
- Smart integration with biosignals like SpO₂ and ECG, enabling more holistic, label-efficient monitoring with minimal clinical input.
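As a hypothetical sketch of the first direction above, the example below clusters simple per-cell signal features (pulsatile amplitude, dominant frequency, and a crude quality score) with k-means to group tissue cells into candidate perfusion zones without manual labels. The feature set and the number of clusters are assumptions.

```python
# Hypothetical self-supervised grouping of tissue cells into perfusion zones.
# Simple per-cell signal features are clustered with k-means, so no manual
# labels are required. Features and cluster count are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

def cluster_perfusion_zones(cell_traces, fs=30.0, n_zones=3):
    """cell_traces: (n_cells, T) array of per-cell color traces over time."""
    freqs = np.fft.rfftfreq(cell_traces.shape[1], d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    features = []
    for trace in cell_traces:
        z = (trace - trace.mean()) / (trace.std() + 1e-9)
        spectrum = np.abs(np.fft.rfft(z))
        band_amp = spectrum[band].max()                    # pulsatile strength
        dom_freq = freqs[band][np.argmax(spectrum[band])]  # dominant frequency (Hz)
        quality = band_amp / (spectrum.mean() + 1e-9)      # crude signal-quality proxy
        features.append([band_amp, dom_freq, quality])
    labels = KMeans(n_clusters=n_zones, n_init=10).fit_predict(np.array(features))
    return labels  # one candidate perfusion-zone label per cell
```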
Recommendation: To move from feasibility to clinical readiness, future development should focus on integrating video-derived features with available biosignals (e.g., SpO₂, ECG) and building robust ML models that can classify perfusion states (e.g., “stable,” “low perfusion,” or “improving circulation”) in real time, even under varied lighting and motion conditions.
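To make that recommendation concrete, here is one plausible shape such a model could take: a standard classifier trained on combined video-derived and biosignal features that outputs a perfusion state per time window. The feature names, state labels, and random-forest choice are illustrative assumptions, not the project's final design.

```python
# Hypothetical perfusion-state classifier combining video-derived features
# with biosignals (e.g., SpO2 and ECG-derived heart rate). Feature names,
# state labels, and the model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STATES = ["stable", "low perfusion", "improving circulation"]

def train_perfusion_classifier(X, y):
    """X: (n_windows, n_features) per-window features, e.g.
    [band_amplitude, signal_quality, dominant_freq_hz, spo2, ecg_heart_rate];
    y: integer labels indexing STATES."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model

def classify_window(model, window_features):
    """Predict the perfusion state for a single time window of features."""
    pred = model.predict(np.asarray(window_features, dtype=float).reshape(1, -1))
    return STATES[int(pred[0])]
```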
Success: Laying the groundwork for AI-powered surgical monitoring
These joint efforts mark an important step toward realizing video-based, AI-assisted perfusion monitoring that is affordable, scalable, and deployable in real-time clinical workflows. Achievements include:
- Demonstrated feasibility of extracting reliable physiological signals from video alone.
- Validated correlation between signal intensity and perfusion quality, initially in relation to anatomical landmarks and later in dynamic post-operative contexts.
- Outlined a clear AI pathway for building intelligent models to classify perfusion quality and predict complications without manual annotation.
- Created a reproducible, low-cost methodology using everyday hardware (smartphones, video cameras), making it particularly impactful for resource-constrained healthcare settings.
By bringing together deep domain knowledge in signal processing, surgical practice, and ML, HTEC’s work in iPPG research shows what’s possible when forward-thinking engineering meets healthcare innovation. It proves that video-based physiological sensing is feasible and lays the groundwork for future AI models that could autonomously assess tissue perfusion and flag early signs of surgical complications.
Interested in applying AI to clinical research?
If you’re exploring non-invasive monitoring or other AI-driven healthcare innovations, our team would love to hear from you. Reach out to discuss how we can support your next breakthrough.