Ruohao Guo, Xianghua Ying*, Yaru Chen, Dantong Niu, Guangyao Li, Liao Qu, Yanyu Qi, Jinxing Zhou, Bowei Xing, Wenzhen Yue, Ji Shi, Qixun Wang, Peiliang Zhang, Buwen Liang
🔥 2025.03.01: Code and checkpoints are released!
🔥 2025.02.27: AVIS got accepted to CVPR 2025! 🎉🎉🎉
🔥 2024.11.12: Our project page is now available!
🔥 2024.11.11: The AVISeg dataset has been uploaded to OneDrive; feel free to download and use it!
In this paper, we propose a new multi-modal task, termed audio-visual instance segmentation (AVIS), which aims to simultaneously identify, segment, and track individual sounding object instances in audible videos. To facilitate this research, we introduce a high-quality benchmark named AVISeg, containing over 90K instance masks from 26 semantic categories in 926 long videos. Additionally, we propose a strong baseline model for this task. Our model first localizes sound sources within each frame and condenses object-specific contexts into concise tokens. It then builds long-range audio-visual dependencies between these tokens using window-based attention, and tracks sounding objects across entire video sequences.
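To make the pipeline concrete, below is a minimal PyTorch sketch of the two ideas highlighted above: condensing per-frame object context into a small set of tokens and linking those tokens across time with window-based attention. Every name and shape here is an illustrative assumption, not the released implementation.

```python
import torch
import torch.nn as nn

class WindowedTokenLinker(nn.Module):
    """Illustrative sketch, not the official model: object tokens from
    each frame attend to one another inside fixed temporal windows."""

    def __init__(self, dim=256, num_heads=8, window=6):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (T, N, D) = frames x object tokens per frame x channels
        T, N, D = tokens.shape
        out = torch.empty_like(tokens)
        for start in range(0, T, self.window):
            win = tokens[start:start + self.window]    # (w, N, D)
            flat = win.reshape(1, -1, D)               # flatten one window into a sequence
            mixed, _ = self.attn(flat, flat, flat)     # long-range mixing within the window
            out[start:start + self.window] = mixed.reshape(win.shape)
        return out

# toy usage: 12 frames, 5 object tokens per frame, 256 channels
tokens = torch.randn(12, 5, 256)
print(WindowedTokenLinker()(tokens).shape)  # torch.Size([12, 5, 256])
```

The actual model also fuses audio features into these tokens before tracking; this sketch only shows the temporal-window attention pattern.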
```bash
# Create and activate the conda environment
conda create --name avism python=3.8 -y
conda activate avism
conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia
pip install -U opencv-python

# Install detectron2 from source
cd ./AVISM
git clone /~https://github.com/facebookresearch/detectron2
cd detectron2
pip install -e .
cd ../

# Install the remaining requirements and compile the pixel decoder's CUDA ops
pip install -r requirements.txt
cd mask2former/modeling/pixel_decoder/ops
sh make.sh
```
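If the build succeeds, a quick sanity check (a hypothetical snippet, not part of the repo) is to confirm the pinned versions import cleanly and that CUDA is visible:

```python
import torch
import detectron2

# expect 1.9.0 and 11.1, matching the conda install above
print(torch.__version__, torch.version.cuda)
print(detectron2.__version__)

# should be True on a machine with a working CUDA setup
print(torch.cuda.is_available())
```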
Download and unzip the datasets from OneDrive and put them in `./datasets`.

Download and unzip the pre-trained backbones from OneDrive and put them in `./pre_models`.

Download the following checkpoints and put them in `./checkpoints`.
| Backbone | Pre-training Datasets | FSLA | HOTA | mAP | Model Weight |
|---|---|---|---|---|---|
| ResNet-50 | ImageNet | 42.78 | 61.73 | 40.57 | AVISM_R50_IN.pth |
| ResNet-50 | ImageNet & COCO | 44.42 | 64.52 | 45.04 | AVISM_R50_COCO.pth |
| Swin-L | ImageNet | 49.15 | 68.81 | 49.06 | AVISM_SwinL_IN.pth |
| Swin-L | ImageNet & COCO | 52.49 | 71.13 | 53.46 | AVISM_SwinL_COCO.pth |
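Once the downloads above are in place, a small check that everything landed where the commands below expect it (the paths come from this README; the script itself is hypothetical and not part of the repo):

```python
from pathlib import Path

# the three locations this README asks for, plus one checkpoint
# from the model zoo table above
expected = [
    "datasets",
    "pre_models",
    "checkpoints/AVISM_R50_IN.pth",
]
for p in expected:
    status = "ok" if Path(p).exists() else "MISSING"
    print(f"{p}: {status}")
```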
```bash
# Training (e.g., ResNet-50 backbone on 2 GPUs)
python train_net.py --num-gpus 2 --config-file configs/avism/R50/avism_R50_IN.yaml

# Evaluation with a downloaded checkpoint
python train_net.py --config-file configs/avism/R50/avism_R50_IN.yaml --eval-only MODEL.WEIGHTS checkpoints/AVISM_R50_IN.pth

# Demo
python demo_video/demo.py --config-file configs/avism/R50/avism_R50_IN.yaml --opts MODEL.WEIGHTS checkpoints/AVISM_R50_IN.pth
```
We thank the authors of Detectron2, Mask2Former, and VITA for their great work.
If our work assists your research, feel free to give us a star ⭐ or cite us using:
```bibtex
@article{guo2023audio,
  title={Audio-Visual Instance Segmentation},
  author={Guo, Ruohao and Ying, Xianghua and Chen, Yaru and Niu, Dantong and Li, Guangyao and Qu, Liao and Qi, Yanyu and Zhou, Jinxing and Xing, Bowei and Yue, Wenzhen and Shi, Ji and Wang, Qixun and Zhang, Peiliang and Liang, Buwen},
  journal={arXiv preprint arXiv:2310.18709},
  year={2023}
}
```