Single Image Video Prediction with Auto-Regressive GANs
by Jiahui Huang, Yew Ken Chia, Samson Yu, Kevin Yee, Dennis Küster, Eva G. Krumhuber, Dorien Herremans, Gemma Roig
Abstract:
In this paper, we introduce an approach for future frame prediction based on a single input image. Our method generates an entire video sequence from the information contained in the input frame. We adopt an autoregressive approach in our generation process, i.e., the output from each time step is fed as the input to the next step. Unlike other video prediction methods that use “one shot” generation, our method preserves far more detail from the input image while also capturing the critical pixel-level changes between frames. We overcome the problem of generation quality degradation by introducing a “complementary mask” module in our architecture, and we show that this allows the model to focus only on generating the pixels that need to change and to reuse those that should remain static from the previous frame. We empirically validate our method against various video prediction models on the UT Dallas Dataset and show that our approach generates high-quality, realistic video sequences from one static input image. In addition, we validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model on a human action dataset, the Weizmann Action database.
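The core ideas in the abstract, autoregressive frame-by-frame generation and a “complementary mask” that blends newly generated pixels with pixels copied from the previous frame, can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch-style illustration, not the authors' released implementation: the backbone, layer sizes, and names such as MaskedAutoregressiveGenerator and predict_step are assumptions for clarity only.

```python
# Minimal sketch of autoregressive generation with a complementary mask.
# Assumption: the generator predicts, per step, a candidate frame plus a soft
# mask in [0, 1]; changed pixels come from the candidate, static pixels are
# reused from the previous frame. Names and architecture are illustrative.
import torch
import torch.nn as nn


class MaskedAutoregressiveGenerator(nn.Module):
    def __init__(self, frame_channels: int = 3, hidden: int = 64):
        super().__init__()
        # Toy backbone standing in for the paper's generator network.
        self.backbone = nn.Sequential(
            nn.Conv2d(frame_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, frame_channels + 1, kernel_size=3, padding=1),
        )

    def predict_step(self, prev_frame: torch.Tensor):
        out = self.backbone(prev_frame)
        candidate = torch.tanh(out[:, :-1])   # generated pixel values
        mask = torch.sigmoid(out[:, -1:])     # 1 = regenerate pixel, 0 = reuse pixel
        return candidate, mask

    def forward(self, first_frame: torch.Tensor, num_frames: int) -> torch.Tensor:
        frames, prev = [], first_frame
        for _ in range(num_frames):
            candidate, mask = self.predict_step(prev)
            # Complementary mask: blend generated and reused pixels.
            next_frame = mask * candidate + (1.0 - mask) * prev
            frames.append(next_frame)
            prev = next_frame                 # autoregressive feedback to the next step
        return torch.stack(frames, dim=1)     # shape (B, T, C, H, W)


# Usage example: predict 16 frames from one 64x64 RGB input image.
model = MaskedAutoregressiveGenerator()
video = model(torch.randn(1, 3, 64, 64), num_frames=16)
```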
Reference:
Single Image Video Prediction with Auto-Regressive GANs (Jiahui Huang, Yew Ken Chia, Samson Yu, Kevin Yee, Dennis Küster, Eva G. Krumhuber, Dorien Herremans, Gemma Roig), In Sensors, volume 22, 2022.
BibTeX Entry:
@Article{s22093533,
AUTHOR = {Huang, Jiahui and Chia, Yew Ken and Yu, Samson and Yee, Kevin and Küster, Dennis and Krumhuber, Eva G. and Herremans, Dorien and Roig, Gemma},
TITLE = {Single Image Video Prediction with Auto-Regressive GANs},
JOURNAL = {Sensors},
VOLUME = {22},
YEAR = {2022},
NUMBER = {9},
ARTICLE-NUMBER = {3533},
URL = {https://www.mdpi.com/1424-8220/22/9/3533},
ISSN = {1424-8220},
DOI = {10.3390/s22093533}
}