[NeurIPS-22] Label-Aware Global Consistency for Multi-Label Learning with Single Positive Labels

This repository contains the implementation of the paper Label-Aware Global Consistency for Multi-Label Learning with Single Positive Labels (NeurIPS 2022).

See many more related works in Awesome Weakly Supervised Multi-Label Learning!

Preparing Data

See the README.md file in the data directory for instructions on downloading and preparing the datasets (the detailed procedures follow Multi-Label Learning from Single Positive Labels).

Training Model

To train and evaluate a model, two steps are required:

1. First stage: warm up the model with the AN loss and the PLC regularization. Run:

```
python first_stage.py --dataset_name=coco --dataset_dir=./data \
    --lambda_plc=1 --threshold=0.6 \
    --batch_size=32
```

2. Second stage: continue training with the LAC regularization added. Run:

```
python second_stage.py --dataset_name=coco --dataset_dir=./data \
    --lambda_plc=1 --threshold=0.9 \
    --lambda_lac=1 --temperature=0.5 --queue_size=512 \
    --batch_size=32 --is_proj
```
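For intuition, the AN (assume-negative) loss used in the first stage treats the single observed positive label as positive and every unobserved label as negative, applying binary cross-entropy over all labels. Below is a minimal NumPy sketch of this idea; the function name, shapes, and epsilon are illustrative and not taken from the repository's actual code:

```python
import numpy as np

def an_loss(probs, single_pos):
    """Assume-Negative (AN) loss sketch.

    probs:      (batch, num_labels) sigmoid outputs in [0, 1].
    single_pos: (batch, num_labels) 0/1 matrix, one observed positive per row.

    The observed positive contributes -log(p); every unobserved label is
    assumed negative and contributes -log(1 - p).
    """
    eps = 1e-12  # numerical safety for log(0)
    pos_term = -single_pos * np.log(probs + eps)
    neg_term = -(1.0 - single_pos) * np.log(1.0 - probs + eps)
    return (pos_term + neg_term).mean(axis=1).mean()
```

The PLC step then promotes unobserved labels whose predicted probability exceeds `threshold` to pseudo positives before computing the loss.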

Hyper-Parameters

To reproduce the results reported in the paper, adjust the following parameters:

  1. dataset_name: the dataset to use, e.g. 'coco', 'voc', 'nus', 'cub'.
  2. dataset_dir: the root directory containing all datasets.
  3. batch_size: the batch size (number of images).
  4. lambda_plc: the weight of the PLC regularization term.
  5. lambda_lac: the weight of the LAC regularization term.
  6. threshold: the threshold for selecting pseudo positive labels.
  7. temperature: the temperature for the LAC regularization.
  8. queue_size: the size of the memory queue.
  9. is_proj: enables the projector that generates label-wise embeddings.
  10. is_data_parallel: enables training with multiple GPUs.
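The parameters above map naturally onto command-line flags. The following argparse sketch shows one plausible way the training scripts could declare them; the defaults shown are illustrative, not the paper's recommended values:

```python
import argparse

def build_parser():
    """Hypothetical CLI declaration mirroring the README's parameter list."""
    p = argparse.ArgumentParser(description="SPML-LAC training (sketch)")
    p.add_argument('--dataset_name', choices=['coco', 'voc', 'nus', 'cub'],
                   default='coco', help='dataset to use')
    p.add_argument('--dataset_dir', default='./data',
                   help='root directory containing all datasets')
    p.add_argument('--batch_size', type=int, default=32)
    p.add_argument('--lambda_plc', type=float, default=1.0,
                   help='weight of the PLC regularization term')
    p.add_argument('--lambda_lac', type=float, default=1.0,
                   help='weight of the LAC regularization term')
    p.add_argument('--threshold', type=float, default=0.6,
                   help='threshold for pseudo positive labels')
    p.add_argument('--temperature', type=float, default=0.5)
    p.add_argument('--queue_size', type=int, default=512)
    p.add_argument('--is_proj', action='store_true',
                   help='enable the label-wise embedding projector')
    p.add_argument('--is_data_parallel', action='store_true',
                   help='enable multi-GPU training')
    return p
```

Boolean switches such as `--is_proj` are off unless passed, which matches the commands shown in the Training Model section.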

Misc

  • The ranges of the hyper-parameters can be found in the paper.
  • The directory dataset_dir should contain four sub-folders --- 'coco/', 'voc/', 'nus/', 'cub/'. Please make sure the dataset paths are correct before training.
  • We ran all experiments on two GeForce RTX 3090 GPUs, i.e. os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1". Multi-GPU training is disabled by default; enable it with --is_data_parallel.
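For context on the `queue_size` parameter: the LAC regularization draws embeddings from a memory queue, which can be pictured as a fixed-size FIFO buffer, MoCo-style. The sketch below is a hypothetical illustration of that data structure, not the repository's implementation (class and field names are invented):

```python
import numpy as np

class MemoryQueue:
    """Fixed-size FIFO buffer of embeddings (illustrative sketch)."""

    def __init__(self, queue_size, dim):
        self.feats = np.zeros((queue_size, dim), dtype=np.float32)
        self.size = queue_size
        self.ptr = 0        # next slot to overwrite
        self.full = False   # becomes True once the queue wraps around

    def enqueue(self, batch):
        """Insert a batch of embeddings, overwriting the oldest entries."""
        for f in batch:
            self.feats[self.ptr] = f
            self.ptr = (self.ptr + 1) % self.size
            if self.ptr == 0:
                self.full = True

    def get(self):
        """Return all currently stored embeddings."""
        return self.feats if self.full else self.feats[:self.ptr]
```

Once full, the queue always holds the most recent `queue_size` embeddings, so the regularizer can contrast against a pool larger than one mini-batch.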

Reference

If you find the code useful in your research, please consider citing our paper:

@inproceedings{xie2022labelaware,
	title={Label-Aware Global Consistency for Multi-Label Learning with Single Positive Labels},
	author={Ming-Kun Xie and Jia-Hao Xiao and Sheng-Jun Huang},
	booktitle={Advances in Neural Information Processing Systems},
	year={2022}
}
