Augmentation of rPPG Benchmark Datasets: Learning to Remove and Embed rPPG Signals via Double Cycle Consistent Learning from Unpaired Facial Videos
Cheng-Ju Hsieh, Wei-Hao Chung, Chiou-Ting Hsu
Abstract
"Remote estimation of human physiological condition has attracted urgent attention during the pandemic of COVID-19. In this paper, we focus on the estimation of remote photoplethysmography (rPPG) from facial videos and address the deficiency issues of large-scale benchmarking datasets. We propose an end-to-end RErPPG-Net, including a Removal-Net and an Embedding-Net, to augment existing rPPG benchmark datasets. In the proposed augmentation scenario, the Removal-Net will first erase any inherent rPPG signals in the input video and then the Embedding-Net will embed another PPG signal into the video to generate an augmented video carrying the specified PPG signal. To train the model from unpaired videos, we propose a novel double-cycle consistent constraint to enforce the RErPPG-Net to learn to robustly and accurately remove and embed the delicate rPPG signals. The new benchmark ""Aug-rPPG dataset"" is augmented from UBFC-rPPG and PURE datasets and includes 5776 videos from 42 subjects with 76 different rPPG signals. Our experimental results show that existing rPPG estimators indeed benefit from the augmented dataset and achieve significant improvement when fine-tuned on the new benchmark. The code and dataset are available at https://github.com/nthumplab/RErPPGNet."