# VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment
Shangkun Sun, Xiaoyu Liang, Songlin Fan, Wenxu Gao, Wei Gao*

(* Corresponding author)

MMCAL, Peking University

## 🎦 Introduction

**TL;DR:** VE-Bench is an evaluation suite for text-driven video editing, consisting of a quality assessment model that provides a human-aligned metric for edited videos, and a database of rich video-prompt pairs with corresponding human scores.


*Overview of the VE-Bench Suite*

VE-Bench DB contains a rich collection of source videos, including real-world, AIGC, and CG footage, covering people, objects, animals, and landscapes. It also spans editing instructions of different categories: semantic edits (addition, removal, replacement, etc.), structural edits (changes in size, shape, etc.), and stylizations (color, texture, etc.). In addition, it includes editing results produced by different video editing models. We conducted a subjective experiment with 24 participants from diverse backgrounds, collecting 28,080 score samples, and trained the VE-Bench QA model on this data. The left figure below shows a box plot of the average scores each model received in the subjective experiment; the right figure shows each model's scores across different types of prompts.


*Left: average score distributions of 8 editing methods. Right: performance of previous video-editing methods on different types of prompts.*

## Easy Use

VE-Bench can be installed with a single pip command. Note that, because the model normalizes scores during training, its outputs are not absolute scores on a 1–10 scale, as the example below demonstrates:

```bash
pip install vebench
```

To compare edited videos, you can run `python test.py`, which amounts to the following:

```python
from vebench import VEBenchModel

evaluator = VEBenchModel()

# Score the same editing prompt against two different edited versions of one source video.
score1 = evaluator.evaluate('A black-haired boy is turning his head', 'assets/src.mp4', 'assets/dst.mp4')
score2 = evaluator.evaluate('A black-haired boy is turning his head', 'assets/src.mp4', 'assets/dst2.mp4')
print(score1, score2)  # Score1: 1.3563, Score2: 0.66194
```
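
Because the outputs are relative rather than absolute scores, the most reliable use of the model is ranking several candidate edits of the same source. Below is a minimal sketch using only the `evaluate` call shown above; the candidate file names are placeholders:

```python
from vebench import VEBenchModel

evaluator = VEBenchModel()

prompt = 'A black-haired boy is turning his head'
# Hypothetical candidate edits of the same source video.
candidates = ['assets/dst.mp4', 'assets/dst2.mp4', 'assets/dst3.mp4']

# A higher score indicates a better edit, so pick the argmax.
scores = {c: evaluator.evaluate(prompt, 'assets/src.mp4', c) for c in candidates}
best = max(scores, key=scores.get)
print(f'Best edit: {best} ({scores[best]:.4f})')
```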

## Local Inference

### 💼 Preparation

```bash
cd vebench
```

You can also download all checkpoints from Google Drive and place them in the `ckpts` directory.
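
Before running inference, it can help to double-check that the checkpoints actually landed in `ckpts`. A small sanity-check sketch; the exact checkpoint filenames depend on the Google Drive release, so it only lists whatever is present:

```python
# Verify the downloaded checkpoints are in ./ckpts before running inference.
from pathlib import Path

ckpt_dir = Path('ckpts')
assert ckpt_dir.is_dir(), 'Create ./ckpts and place the downloaded checkpoints inside.'
print('Found checkpoints:', sorted(p.name for p in ckpt_dir.iterdir()))
```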

### ✨ Usage

To evaluate a single video:

```bash
python infer.py --single_test --src_path ${path_to_source_video} --dst_path ${path_to_dst_video} --prompt ${editing_prompt}

# Run on example videos
# python infer.py --single_test --src_path "./data/src/00433tokenflow_baby_gaze.mp4" --dst_path "./data/edited/00433tokenflow_baby_gaze.mp4" --prompt "A black-haired boy is turning his head"
```

To evaluate a set of videos:

```bash
python infer.py --data_path ${path_to_data_folder} --label_path ${path_to_prompt_txt_file}
```
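
If you prefer to stay in Python, the same kind of batch evaluation can be scripted with the pip-installed API. The sketch below relies on assumed conventions, not the repo's documented format: source and edited videos share filenames across two folders, and a tab-separated `prompts.txt` maps each filename to its editing prompt.

```python
# Batch evaluation via the pip-installed API instead of infer.py.
# Assumed (hypothetical) layout: data/src/ and data/edited/ hold videos with
# matching filenames; prompts.txt holds one "<filename>\t<prompt>" per line.
from pathlib import Path
from vebench import VEBenchModel

evaluator = VEBenchModel()

with open('prompts.txt') as f:
    for line in f:
        name, prompt = line.rstrip('\n').split('\t', 1)
        score = evaluator.evaluate(prompt, str(Path('data/src') / name), str(Path('data/edited') / name))
        print(f'{name}: {score:.4f}')
```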

## 🙏 Acknowledgements

Part of the code is built on DOVER and BLIP. We thank the authors for their contributions to the community.

## 📭 Contact

If you have any comments or questions, feel free to contact sunshk@stu.pku.edu.cn.

## 📖 BibTeX

```bibtex
@article{sun2024bench,
  title={VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment},
  author={Sun, Shangkun and Liang, Xiaoyu and Fan, Songlin and Gao, Wenxu and Gao, Wei},
  journal={arXiv preprint arXiv:2408.11481},
  year={2024}
}
```