
PnP-Flow

This repository contains the code for our ICLR 2025 paper PnP-Flow, a method that combines Plug-and-Play (PnP) priors with pretrained Flow Matching models to solve image restoration problems. Try out the demo!

1. Getting started

To get started, clone the repository and install pnpflow via pip

cd PnP-Flow
pip install -e .

1.1. Requirements

  • torch 1.13.1 (or later)
  • torchvision
  • tqdm
  • numpy
  • pandas
  • pyyaml
  • scipy
  • torchdiffeq
  • deepinv

1.2. Download datasets and pretrained models

We provide a script to download datasets used in PnP-Flow and the corresponding pre-trained networks. The datasets and network checkpoints will be downloaded and stored in the data and model directories, respectively.

CelebA. To download the CelebA dataset and the pre-trained OT FM network (U-Net), run the following commands:

bash download.sh celeba-dataset
bash download.sh pretrained-network-celeba

AFHQ-CAT. To download the AFHQ-CAT dataset and the pre-trained OT FM network (U-Net), run the following commands:

bash download.sh afhq-cat-dataset
bash download.sh pretrained-network-afhq-cat

Note that since the AFHQ-Cat dataset does not come with a validation split, we create one when downloading the dataset.

Alternatively, the FM models can directly be downloaded here: CelebA model, AFHQ-Cat model, MNIST-Dirichlet model

And the denoisers for the PnP-GS method here: CelebA model, AFHQ-Cat model

2. Training

You can also use the code to train your own OT Flow Matching model.

You can modify the config options directly in the main_config.yaml file located in config/. Alternatively, config keys can be passed as options on the command line.

For example, to train the generative flow matching model on CelebA (here, the U-Net parameterizes the velocity field), with a Gaussian latent distribution, run:

python main.py --opts dataset celeba train True eval False batch_size 128 num_epoch 100

Every 5 epochs, the model is saved in ./model/celeba/gaussian/ot, and generated samples are saved in ./results/celeba/gaussian/ot.
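For intuition, the OT flow matching objective trains the network to regress the constant velocity of straight-line paths between latent and data samples. Below is a minimal NumPy sketch of that objective; it is illustrative only (the repository's actual training loop uses PyTorch and a U-Net):

```python
import numpy as np

# Illustrative sketch (not the repository's code): OT flow matching
# regresses a velocity v(x_t, t) onto the straight-line target x1 - x0,
# where x_t = (1 - t) * x0 + t * x1 interpolates between a latent
# sample x0 and a data sample x1.
rng = np.random.default_rng(0)

def ot_interpolate(x0, x1, t):
    """Straight-line (OT) interpolation between latent x0 and data x1."""
    return (1.0 - t) * x0 + t * x1

def fm_loss(v_pred, x0, x1):
    """Flow-matching regression loss: the ideal velocity equals x1 - x0."""
    target = x1 - x0
    return float(np.mean((v_pred - target) ** 2))

x0 = rng.standard_normal((4, 8))   # latent (Gaussian) samples
x1 = rng.standard_normal((4, 8))   # stand-in for data samples
xt = ot_interpolate(x0, x1, 0.3)   # point on the straight path at t = 0.3
loss = fm_loss(x1 - x0, x0, x1)    # a perfect velocity gives zero loss
```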

Computing generative model scores

After training, the final model is loaded and can be used to generate samples or to solve inverse problems. You can compute the full FID (based on 50,000 generated samples), the Vendi score, and the Sliced Wasserstein score by running:

python main.py --opts dataset mnist train False eval True compute_metrics True solve_inverse_problem False
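As a rough illustration of one of these metrics, the Sliced Wasserstein distance compares two sample sets by projecting them onto random 1-D directions and measuring the distance between the sorted projections. This is a simplified stand-in, not the repository's implementation:

```python
import numpy as np

# Illustrative sketch of a sliced Wasserstein-2 distance between two
# sample sets of equal size (not the repository's implementation).
def sliced_wasserstein(x, y, n_proj=64, seed=0):
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)        # random unit direction
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean((px - py) ** 2)      # 1-D W2^2 via sorted samples
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(1)
a = rng.standard_normal((256, 16))
```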

3. Solving inverse problems

The available inverse problems are:

  • Denoising --> set problem: 'denoising'
  • Gaussian deblurring --> set problem: 'gaussian_deblurring'
  • Super-resolution --> set problem: 'superresolution'
  • Box inpainting --> set problem: 'inpainting'
  • Random inpainting --> set problem: 'random_inpainting'
  • Free-form inpainting --> set problem: 'paintbrush_inpainting'

The parameters of the inverse problems (e.g., noise level) can be adjusted manually in the main.py file.
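To illustrate what such a degradation looks like, the sketch below builds a noisy box-inpainting observation on a toy image. The function name, shapes, and parameter values here are assumptions for illustration, not the repository's API:

```python
import numpy as np

# Illustrative sketch: an inverse problem pairs a degradation operator A
# with a noise level sigma, producing observations y = A(x) + noise.
rng = np.random.default_rng(0)

def box_inpainting(x, top, left, size):
    """Zero out a square box, as in the box 'inpainting' problem."""
    mask = np.ones_like(x)
    mask[top:top + size, left:left + size] = 0.0
    return mask * x, mask

x = rng.random((32, 32))          # toy grayscale image in [0, 1]
sigma = 0.05                      # noise level (adjustable in main.py)
y, mask = box_inpainting(x, 8, 8, 16)
y = y + sigma * rng.standard_normal(x.shape) * mask  # noise on observed pixels only
```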

The available methods are

  • pnp_flow (our method)
  • ot_ode (from this paper)
  • d_flow (from this paper)
  • flow_priors (from this paper)
  • pnp_diff (from this paper)
  • pnp_gs (from this paper)
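All of these methods instantiate, in different ways, the general PnP template: alternate a data-fidelity update with a learned denoising step. The toy sketch below shows that template with a placeholder shrinkage denoiser standing in for the flow-matching-based denoiser; it is not the paper's exact algorithm:

```python
import numpy as np

def pnp_restore(y, A, denoise, step=0.5, iters=50):
    """Generic PnP loop: gradient step on ||Ax - y||^2 / 2, then denoise."""
    x = y.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = denoise(x - step * grad)   # the learned prior plugs in here
    return x

# Toy denoising problem (A = identity) with a placeholder shrinkage denoiser.
A = np.eye(8)
y = np.linspace(0.0, 1.0, 8)
x_hat = pnp_restore(y, A, denoise=lambda z: 0.9 * z)
```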

3.1. Finding the optimal parameters on the validation set

The optimal parameters can be tuned by running:

bash scripts/script_val.sh

You can also use the optimal values we found, as reported in the Appendix of the paper, and input them into the configuration files of the methods.

3.2. Evaluation on the test set

You can either directly run

python main.py --opts dataset celeba train False eval True problem inpainting method pnp_flow

or use the bash script scripts/script_val.sh.

Visual results will be saved in results/celeba/gaussian/inpainting.

Acknowledgements

This repository builds upon the following publicly available code bases:
