Low-light image enhancement aims to improve the quality of images captured in dim lighting, producing brighter, clearer, and more visually appealing results without introducing excessive noise or distortion. One of the state-of-the-art methods for this computer vision task is Zero-DCE. It learns to brighten an image from the low-light input alone, without any reference image. Four loss functions are crafted specifically for this zero-reference setting: color constancy loss, exposure loss, illumination smoothness loss, and spatial consistency loss.
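To make two of these objectives concrete, here is a minimal sketch of simplified color constancy and exposure losses in plain Python. This is illustrative only: the actual Zero-DCE losses operate on batched tensors and average the exposure term over 16x16 patches, and the target exposure value `E = 0.6` is an assumption taken from the paper's common setting.

```python
# Simplified, illustrative versions of two Zero-DCE losses.
# These are sketches on nested lists, not the paper's exact
# formulation (which uses batched tensors and 16x16 patch means).

E = 0.6  # assumed well-exposedness target for the exposure loss

def channel_means(img):
    """Mean of each RGB channel; img is an HxWx3 nested list in [0, 1]."""
    n = len(img) * len(img[0])
    sums = [0.0, 0.0, 0.0]
    for row in img:
        for px in row:
            for c in range(3):
                sums[c] += px[c]
    return [s / n for s in sums]

def color_constancy_loss(img):
    """Penalizes divergence between the mean R, G, and B intensities."""
    mr, mg, mb = channel_means(img)
    return (mr - mg) ** 2 + (mr - mb) ** 2 + (mg - mb) ** 2

def exposure_loss(img):
    """Distance of local brightness from the target exposure E.
    Here each pixel's gray value stands in for a patch mean."""
    n = len(img) * len(img[0])
    total = 0.0
    for row in img:
        for px in row:
            gray = sum(px) / 3.0
            total += (gray - E) ** 2
    return total / n

# A uniform gray image at the target exposure incurs zero loss.
gray_img = [[[0.6, 0.6, 0.6]] * 4 for _ in range(4)]
print(color_constancy_loss(gray_img))  # 0.0
print(exposure_loss(gray_img))         # 0.0

# A dark, blue-tinted image is penalized by both losses.
dark_img = [[[0.1, 0.1, 0.4]] * 4 for _ in range(4)]
print(color_constancy_loss(dark_img) > 0)  # True
print(exposure_loss(dark_img) > 0)         # True
```

Intuitively, the color constancy term pulls the three channel means toward each other (the gray-world assumption), while the exposure term pushes local brightness toward a well-exposed level.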
Open the following link and click "Run all" in the Colab notebook to examine the overall process.
The quantitative performance of the model is exhibited in the table below.
| Metrics | Test Dataset |
| --- | --- |
| Color Constancy Loss | 0.065 |
| Exposure Loss | 0.391 |
| Illumination Smoothness Loss | 0.094 |
| Spatial Consistency Loss | 0.042 |
| Total Loss | 0.592 |
| PSNR | 13.646 |
| SSIM | 0.663 |
| MAE | 0.170 |
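For reference, PSNR and MAE from the table can be computed as below. This is a minimal sketch on flat pixel lists in [0, 1]; the helper names are my own, and real evaluations typically use library implementations (e.g., scikit-image) over full image arrays.

```python
import math

# Illustrative PSNR and MAE between a predicted and a target image,
# both given as equal-length flat lists of pixel values in [0, 1].

def mae(pred, target):
    """Mean absolute error: average per-pixel absolute difference."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

pred   = [0.5, 0.6, 0.7, 0.8]
target = [0.4, 0.6, 0.8, 0.8]
print(round(mae(pred, target), 3))   # 0.05
print(round(psnr(pred, target), 2))  # 23.01
```

SSIM is omitted here because it involves windowed means, variances, and covariances; in practice it is taken from a library rather than hand-rolled.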
Color constancy loss curve on the train set and the validation set.
Exposure loss curve on the train set and the validation set.
Illumination smoothness loss curve on the train set and the validation set.
Spatial consistency loss curve on the train set and the validation set.
Total loss curve on the train set and the validation set.
PSNR curve on the train set and the validation set.
SSIM curve on the train set and the validation set.
MAE curve on the train set and the validation set.
Here are some samples of the model's qualitative results.
Qualitative results of the image enhancement method (comparing the original image, the ground truth, PIL autocontrast, and the model's prediction).