Shanzhi Yin†, Zihan Zhang†, Bolin Chen†, Shiqi Wang† and Yan Ye§
† City University of Hong Kong and § Alibaba Group
Abstract
This paper proposes to learn generative priors from motion patterns rather than video contents for generative video compression. The priors are derived from the small motion dynamics of common scenes, such as trees swinging in the wind or boats floating on the sea. Leveraging such compact motion priors, a novel generative scene dynamics compression framework is built to realize ultra-low-bit-rate communication and high-quality reconstruction for diverse scene contents. At the encoder side, motion priors are characterized into compact representations in a dense-to-sparse manner. At the decoder side, the decoded motion priors serve as trajectory hints for scene dynamics reconstruction via a diffusion-based, flow-driven generator. Experimental results illustrate that the proposed method achieves superior rate-distortion performance and outperforms the state-of-the-art conventional video codec, Versatile Video Coding (VVC), on scene dynamics sequences.
Methods
Proposed Dynamics-Codec
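To make the dense-to-sparse idea from the abstract concrete, the sketch below is a minimal toy illustration, not the released Dynamics-Codec implementation: a dense motion field is reduced to a few high-magnitude motion vectors, quantized for transmission, and re-densified at the decoder as trajectory hints for a flow-driven generator. All function names, parameters (e.g. `num_keypoints`, the quantization `step`), and the nearest-neighbour densification are assumptions made purely for illustration.

```python
# Illustrative sketch (not the authors' code) of a dense-to-sparse motion
# representation: keep only the strongest motion vectors, quantize them, and
# spread them back into a dense field at the decoder as trajectory hints.
import numpy as np


def sparsify_motion(flow: np.ndarray, num_keypoints: int = 32):
    """Pick the num_keypoints locations with the largest motion magnitude.

    flow: dense motion field of shape (H, W, 2).
    Returns (positions, vectors) with shapes (K, 2) and (K, 2).
    """
    h, w, _ = flow.shape
    magnitude = np.linalg.norm(flow, axis=-1)
    # Indices of the strongest motions, flattened then mapped back to (y, x).
    flat_idx = np.argsort(magnitude.ravel())[-num_keypoints:]
    ys, xs = np.unravel_index(flat_idx, (h, w))
    positions = np.stack([ys, xs], axis=-1)
    vectors = flow[ys, xs]
    return positions, vectors


def quantize(vectors: np.ndarray, step: float = 0.25):
    """Uniform scalar quantization of the sparse motion vectors."""
    return np.round(vectors / step).astype(np.int16), step


def densify_motion(positions, vectors, shape):
    """Decoder-side re-densification: nearest-neighbour spread of the sparse
    vectors back to a dense field, standing in for the trajectory hints that
    would drive a diffusion-based, flow-driven generator."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys, xs], axis=-1).reshape(-1, 2)
    # Assign every pixel the vector of its nearest transmitted keypoint.
    d2 = ((grid[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return vectors[nearest].reshape(h, w, 2)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dense_flow = rng.normal(scale=0.5, size=(64, 64, 2)).astype(np.float32)

    pos, vec = sparsify_motion(dense_flow, num_keypoints=32)
    q_vec, step = quantize(vec)
    # "Transmit" positions + q_vec; rebuild trajectory hints at the decoder.
    hints = densify_motion(pos, q_vec.astype(np.float32) * step, dense_flow.shape[:2])
    print("compact payload:", pos.size + q_vec.size,
          "values for", dense_flow.size, "dense flow samples")
```

The point of the sketch is only the rate asymmetry: a handful of quantized vectors is transmitted instead of a dense flow field, and the decoder reconstructs plausible scene dynamics from those hints.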
Subjective Quality Demos
Sequence 015 at 15 kbps
Original Sequence
VVC Reconstruction
Dynamics-Codec Reconstruction
Sequence 031 at 10 kbps
Original Sequence
VVC Reconstruction
Dynamics-Codec Reconstruction
Sequence 006 at 8 kbps
Original Sequence
VVC Reconstruction
Dynamics-Codec Reconstruction
Sequence 024 at 7 kbps
Original Sequence
VVC Reconstruction
Dynamics-Codec Reconstruction
Sequence 029 at 6 kbps
Original Sequence
VVC Reconstruction
Dynamics-Codec Reconstruction
Sequence 034 at 5 kbps
Original Sequence
VVC Reconstruction
Dynamics-Codec Reconstruction
Code Release
Coming soon.
📧 Contact
If you have any questions or collaboration needs (for research or commercial purposes), please email shanzhyin3-c@my.cityu.edu.hk.