
UniSoccer: Towards Universal Soccer Video Understanding

This repository contains the official PyTorch implementation of the paper "Towards Universal Soccer Video Understanding": https://arxiv.org/abs/2412.01820/.

Project Page · Paper · Dataset (Soon) · Checkpoints

News

  • [2025.01] We open-sourced our code and checkpoints for UniSoccer.
  • [2024.12] Our pre-print paper is released on arXiv.

Requirements

A suitable conda environment named UniSoccer can be created and activated with:

conda env create -f environment.yaml
conda activate UniSoccer

Train

Pretrain MatchVision Encoder

As described in the paper, there are two methods for pretraining the MatchVision backbone: supervised classification and contrastive commentary retrieval. You can train with either method as follows:

First, prepare the textual data in the format shown in train_data/json, and preprocess the soccer videos into 30-second clips (15 s before and after each timestamp) for pretraining.
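The clip-preprocessing step can be sketched as follows. This is a minimal illustration, assuming ffmpeg is available on the system; the function names and file layout here are hypothetical, not the repository's actual scripts:

```python
from pathlib import Path

HALF_WINDOW = 15  # seconds kept before and after each event timestamp

def clip_window(timestamp_sec: float) -> tuple[float, float]:
    """Return the (start, end) of the 30-second clip around an event,
    clamping the start so it never goes below 0."""
    start = max(0.0, timestamp_sec - HALF_WINDOW)
    return start, start + 2 * HALF_WINDOW

def ffmpeg_clip_cmd(src: Path, dst: Path, timestamp_sec: float) -> list[str]:
    """Build an ffmpeg command that cuts the clip without re-encoding.
    Note: with stream copy, cut points snap to the nearest keyframes."""
    start, end = clip_window(timestamp_sec)
    return [
        "ffmpeg", "-y",
        "-ss", f"{start:.2f}",
        "-to", f"{end:.2f}",
        "-i", str(src),
        "-c", "copy",
        str(dst),
    ]
```

The resulting command list can be run with `subprocess.run(cmd, check=True)` for each annotated timestamp in the JSON files.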

Supervised Classification

python task/pretrain_MatchVoice_Classifier.py config/pretrain_classification.py

Contrastive Commentary Retrieval

python task/pretrain_contrastive.py config/pretrain_contrastive.py

You can also fine-tune MatchVision with:

python task/finetune_contrastive.py config/finetune_contrastive.py

Note that you should replace the folder paths in the task and config files with your own.

Train Downstream Tasks

You can train the commentary task in two different ways:

  1. Use mp4 files
python task/downstream_commentary_new_benchmark.py 

With this method, you can train the commentary model MatchVoice with the visual encoder or language decoder unfrozen, so you should crop the videos into 30-second clips named as the JSON files indicate.

  2. Use .npy files
python task/downstream_commentary.py

With this method, the visual encoder cannot be unfrozen, so you can pre-extract features for all video clips and replace ".mp4" with ".npy" in the file names.
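The exact feature-extraction script is not shown here, but the file-name mapping it relies on can be sketched as below, assuming each feature file sits next to its clip with only the extension changed:

```python
from pathlib import Path

def feature_path(clip_path: str) -> str:
    """Map a video clip's file name to its pre-extracted feature file,
    keeping the directory and stem and swapping only the extension
    from .mp4 to .npy."""
    return str(Path(clip_path).with_suffix(".npy"))
```

Extracted features (e.g. saved with `numpy.save`) can then be looked up by the dataloader using the same names the JSON annotations use for the clips.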

Note that the folder words_world records the token IDs of all words in the LLaMA-3 (8B) tokenizer for the different datasets:

  • match_time.pkl: MatchTime dataset (Link here)
  • soccerreplay-1988.pkl: SoccerReplay-1988 dataset. (Not released yet)
  • merge.pkl: Union set of MatchTime & SoccerReplay-1988
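Building the merged vocabulary can be sketched as follows. This is an illustration only: the actual internal structure of the .pkl files is an assumption here (a mapping from words to token-ID lists), not something confirmed by the repository:

```python
import pickle

def merge_token_worlds(path_a: str, path_b: str, out_path: str) -> dict:
    """Union two word-to-token-ID dictionaries (assumed structure:
    dict[str, list[int]]) and write the merged result, in the spirit
    of merge.pkl combining MatchTime and SoccerReplay-1988."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = pickle.load(fa), pickle.load(fb)
    merged = {**a, **b}  # entries from path_b win on key collisions
    with open(out_path, "wb") as fo:
        pickle.dump(merged, fo)
    return merged
```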

Inference

For inference, you can use the following command; make sure you have correctly cropped the video clips into the same format as before.

python inference/inference.py

Then you can compute the metrics for the output sample.csv with:

python inference/score_single.py --csv_path inference/sample.csv

Citation

If you use this code and data for your research or project, please cite:

@misc{rao2024unisoccer,
      title   = {Towards Universal Soccer Video Understanding},
      author  = {Rao, Jiayuan and Wu, Haoning and Jiang, Hao and Zhang, Ya and Wang, Yanfeng and Xie, Weidi},
      journal = {arXiv preprint arXiv:2412.01820},
      year    = {2024},
}

TODO

  • Release Paper
  • Release Checkpoints
  • Release Dataset
  • Code of Visual Encoder Pretraining
  • Code of Downstream Tasks
  • Code of Inference
  • Code of Evaluation

Acknowledgements

Many thanks to the codebases of Video-LLaMA and MatchTime, and to the source data from SoccerNet-Caption and MatchTime.

Contact

If you have any questions, please feel free to contact jy_rao@sjtu.edu.cn or haoningwu3639@gmail.com.
