
Audio-Visual Instance Segmentation

AVIS | Project Page | Dataset

Ruohao Guo, Xianghua Ying*, Yaru Chen, Dantong Niu, Guangyao Li, Liao Qu, Yanyu Qi, Jinxing Zhou, Bowei Xing, Wenzhen Yue, Ji Shi, Qixun Wang, Peiliang Zhang, Buwen Liang

📰 News

🔥 2025.03.01: Code and checkpoints are released!

🔥 2025.02.27: AVIS was accepted to CVPR 2025! 🎉🎉🎉

🔥 2024.11.12: Our project page is now available!

🔥 2024.11.11: The AVISeg dataset has been uploaded to OneDrive; you are welcome to download and use it!

🌿 Introduction

In this paper, we propose a new multi-modal task, termed audio-visual instance segmentation (AVIS), which aims to simultaneously identify, segment, and track individual sounding object instances in audible videos. To facilitate this research, we introduce a high-quality benchmark named AVISeg, containing over 90K instance masks from 26 semantic categories in 926 long videos. In addition, we propose a strong baseline model for this task. Our model first localizes sound sources within each frame and condenses object-specific contexts into concise tokens. It then builds long-range audio-visual dependencies between these tokens using window-based attention, and tracks sounding objects across entire video sequences.
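To make the three stages concrete, here is a minimal PyTorch sketch of the pipeline described above. It is not the official AVISM implementation: the module names, feature shapes, token count, and window size are illustrative assumptions; only the 26 semantic categories come from the paper.

# Conceptual sketch only -- not the official AVISM code. Shapes, the number of
# object tokens, and the window size are illustrative assumptions.
import torch
import torch.nn as nn

class TemporalWindowAttention(nn.Module):
    # Self-attention over non-overlapping temporal windows of object tokens,
    # building long-range audio-visual dependencies at modest cost.
    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (T, N, C) -- T frames, N object tokens per frame
        T, N, C = tokens.shape
        pad = (-T) % self.window                # pad so T divides into windows
        x = torch.cat([tokens, tokens.new_zeros(pad, N, C)], dim=0)
        x = x.reshape(-1, self.window * N, C)   # group frames into windows
        x, _ = self.attn(x, x, x)
        return x.reshape(T + pad, N, C)[:T]

class AVISSketch(nn.Module):
    def __init__(self, dim=256, num_tokens=20, num_classes=26):
        super().__init__()
        self.queries = nn.Embedding(num_tokens, dim)      # object-specific tokens
        self.av_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.temporal = TemporalWindowAttention(dim)
        self.cls_head = nn.Linear(dim, num_classes + 1)   # +1 for "no object"

    def forward(self, visual_feats, audio_feats):
        # visual_feats: (T, HW, C) per-frame features; audio_feats: (T, 1, C)
        T = visual_feats.shape[0]
        q = self.queries.weight.unsqueeze(0).expand(T, -1, -1)
        # 1) per-frame sound-source localization: queries attend to the
        #    concatenated audio-visual context and condense it into tokens
        ctx = torch.cat([visual_feats, audio_feats], dim=1)
        tokens, _ = self.av_attn(q, ctx, ctx)
        # 2) window-based attention over time links tokens across frames
        tokens = self.temporal(tokens)
        # 3) per-token classification; the mask heads and tracking-by-token
        #    association are omitted for brevity
        return self.cls_head(tokens)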


⚙️ Installation

# Create and activate the conda environment
conda create --name avism python=3.8 -y
conda activate avism

# Install PyTorch with CUDA 11.1 and OpenCV
conda install pytorch==1.9.0 torchvision==0.10.0 cudatoolkit=11.1 -c pytorch -c nvidia
pip install -U opencv-python

# Build detectron2 from source
cd ./AVISM
git clone /~https://github.com/facebookresearch/detectron2
cd detectron2
pip install -e .

# Install the remaining dependencies and compile the deformable attention ops
cd ../
pip install -r requirements.txt
cd mask2former/modeling/pixel_decoder/ops
sh make.sh
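
As an optional sanity check (not part of the official instructions), you can verify that PyTorch, CUDA, and detectron2 are importable before moving on:

python -c "import torch, detectron2; print(torch.__version__, torch.cuda.is_available())"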

🤗 Setup

Datasets

Download and unzip the datasets from OneDrive and put them in ./datasets.

Pretrained Backbones

Download and unzip the pre-trained backbones from OneDrive and put them in ./pre_models.

Checkpoints

Download the following checkpoints and put them in ./checkpoints.

Backbone    Pre-training Datasets    FSLA     HOTA     mAP      Model Weight
ResNet-50   ImageNet                 42.78    61.73    40.57    AVISM_R50_IN.pth
ResNet-50   ImageNet & COCO          44.42    64.52    45.04    AVISM_R50_COCO.pth
Swin-L      ImageNet                 49.15    68.81    49.06    AVISM_SwinL_IN.pth
Swin-L      ImageNet & COCO          52.49    71.13    53.46    AVISM_SwinL_COCO.pth
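
Once the three steps above are done, the repository root should look roughly like this (an illustrative layout; the directory names and checkpoint filenames come from the instructions above, while the contents of datasets/ and pre_models/ depend on the downloaded archives):

avism/
├── checkpoints/
│   ├── AVISM_R50_IN.pth
│   ├── AVISM_R50_COCO.pth
│   ├── AVISM_SwinL_IN.pth
│   └── AVISM_SwinL_COCO.pth
├── configs/
├── datasets/
├── pre_models/
└── train_net.py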

📌 Getting Started

Training

python train_net.py --num-gpus 2 --config-file configs/avism/R50/avism_R50_IN.yaml
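
train_net.py follows the detectron2 launcher convention, so config values can typically be overridden from the command line. For example, a hypothetical single-GPU run with a reduced batch size (SOLVER.IMS_PER_BATCH is an assumed key; check the config file for the actual name):

python train_net.py --num-gpus 1 --config-file configs/avism/R50/avism_R50_IN.yaml SOLVER.IMS_PER_BATCH 8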

Evaluation

python train_net.py --config-file configs/avism/R50/avism_R50_IN.yaml --eval-only MODEL.WEIGHTS checkpoints/AVISM_R50_IN.pth
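
By the same detectron2 convention, evaluation output can presumably be redirected with the standard OUTPUT_DIR override (illustrative path):

python train_net.py --config-file configs/avism/R50/avism_R50_IN.yaml --eval-only MODEL.WEIGHTS checkpoints/AVISM_R50_IN.pth OUTPUT_DIR ./output/avism_r50_in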

Demo

python demo_video/demo.py --config-file configs/avism/R50/avism_R50_IN.yaml --opts MODEL.WEIGHTS checkpoints/AVISM_R50_IN.pth

Acknowledgement

We thank the authors of Detectron2, Mask2Former, and VITA for their great work.

📄 Citation

If our work assists your research, feel free to give us a star ⭐ or cite us using:

@article{guo2023audio,
  title={Audio-Visual Instance Segmentation},
  author={Guo, Ruohao and Ying, Xianghua and Chen, Yaru and Niu, Dantong and Li, Guangyao and Qu, Liao and Qi, Yanyu and Zhou, Jinxing and Xing, Bowei and Yue, Wenzhen and Shi, Ji and Wang, Qixun and Zhang, Peiliang and Liang, Buwen},
  journal={arXiv preprint arXiv:2310.18709},
  year={2023}
}
