A Python-based framework for converting 2D sketches and grayscale images into realistic 3D objects. It combines Pix2Pix, Real-ESRGAN, DeOldify, and Shap-E to enhance image quality, colorize, and generate 3D point clouds for detailed reconstructions.
This repository implements a framework for converting 2D sketches or grayscale images into 3D objects. The system combines several models, including Pix2Pix, Real-ESRGAN, DeOldify, and Shap-E, to handle grayscale-to-RGB conversion, resolution enhancement, colorization, and 3D point cloud generation.
The project aims to improve the conversion of 2D sketches into 3D objects by addressing challenges in texture realism, depth perception, and model generalization. It integrates several techniques (a minimal sketch of how they compose follows the list):
- Pix2Pix (GAN-based) for conditional image generation from sketches.
- Real-ESRGAN for enhancing the resolution of generated images.
- DeOldify for colorization of grayscale images.
- Shap-E for generating 3D point clouds and converting 2D images to 3D shapes.
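As a rough illustration of how these stages compose, consider the sketch below. It is not code from this repository: `run_pipeline`, `sketch_to_rgb`, `upscale_4x`, and `colorize_stub` are placeholder names, and in the real pipeline each stage would wrap the Pix2Pix generator, the Real-ESRGAN upscaler, the DeOldify colorizer, and the Shap-E lifting step respectively.

```python
from typing import Callable, List
from PIL import Image

# A 2D stage maps one PIL image to another; the final Shap-E stage would instead
# map the enhanced image to a 3D point cloud.
Stage = Callable[[Image.Image], Image.Image]

def run_pipeline(image: Image.Image, stages: List[Stage]) -> Image.Image:
    """Apply each stage in order and return the result."""
    for stage in stages:
        image = stage(image)
    return image

# Placeholder stages standing in for the model wrappers listed above.
def sketch_to_rgb(img: Image.Image) -> Image.Image:   # stand-in for Pix2Pix
    return img.convert("RGB")

def upscale_4x(img: Image.Image) -> Image.Image:      # stand-in for Real-ESRGAN
    return img.resize((img.width * 4, img.height * 4))

def colorize_stub(img: Image.Image) -> Image.Image:   # stand-in for DeOldify
    return img

if __name__ == "__main__":
    sketch = Image.new("L", (256, 256))  # stand-in for a loaded sketch
    enhanced = run_pipeline(sketch, [sketch_to_rgb, upscale_4x, colorize_stub])
    print(enhanced.mode, enhanced.size)  # RGB (1024, 1024)
```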
This framework is a step forward in improving the real-world applicability of sketch-to-3D tasks, especially in industries such as gaming, animation, and virtual prototyping.
- Sketch-to-3D Pipeline: Convert sketches to detailed 3D objects with depth and texture improvements.
- Modular Design: Supports adding or replacing individual stages of the pipeline, such as image enhancement or 3D point cloud generation (see the stage-swap sketch after this list).
- Custom Training: Models can be trained on custom datasets, or the existing models can be fine-tuned for specific applications.
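As a concrete (and again hypothetical) example of the modular design, swapping the super-resolution stage for a plain Lanczos resize only changes one entry in the stage list used by the `run_pipeline` sketch above:

```python
from PIL import Image

def lanczos_upscale_2x(img: Image.Image) -> Image.Image:
    """Drop-in replacement for the super-resolution stage: a plain 2x Lanczos resize."""
    return img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

# stages = [sketch_to_rgb, lanczos_upscale_2x, colorize_stub]  # only the middle stage changed
```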
This project is implemented in Python and requires several key libraries. Install them with:

```bash
pip install -r requirements.txt
```
The required libraries include:

- `torch` (for deep learning models)
- `torchvision` (for image processing)
- `numpy` (for numerical operations)
- `opencv-python` (for image handling)
- `PIL` (for image processing)
- `Real-ESRGAN` (for resolution enhancement)
- `DeOldify` (for colorization)
- `Shap-E` (for 3D point cloud generation)
- `midas` (for depth estimation)

For other dependencies, refer to the `requirements.txt` file.
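Of the libraries above, MiDaS has a well-documented loading path via `torch.hub`. The snippet below follows the upstream intel-isl/MiDaS usage example rather than this repository's own wrapper; the input file name is a placeholder.

```python
import cv2
import torch

# Load a small MiDaS model and its matching input transform from torch.hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
midas.to(device).eval()

img = cv2.cvtColor(cv2.imread("example.png"), cv2.COLOR_BGR2RGB)  # placeholder path
input_batch = transform(img).to(device)

with torch.no_grad():
    prediction = midas(input_batch)
    # Resize the predicted depth map back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().cpu().numpy()
```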
- Clone this repository:

  ```bash
  git clone /~https://github.com/hamzaskhaan/sketch-to-3D-reconstruction.git
  cd sketch-to-3D-reconstruction
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
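After installation, a quick sanity check (purely illustrative, not a script shipped with this repository) is to confirm that the core libraries import and whether a CUDA device is visible:

```python
import cv2
import numpy
import torch
import torchvision
from PIL import Image  # Pillow

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision", torchvision.__version__, "| opencv", cv2.__version__, "| numpy", numpy.__version__)
```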
To train Pix2Pix on your own dataset, ensure that the dataset is prepared in a paired format (e.g., sketches paired with their corresponding colorized target images).
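The exact training command depends on the repository's training script, so it is not reproduced here. As an illustration of what "paired format" means, the sketch below uses one common convention (the aligned layout from the original pix2pix code, where each file stores the sketch on the left half and the colorized target on the right half). The class name and directory layout are assumptions for illustration, not this repository's API.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedSketchDataset(Dataset):
    """Loads aligned pairs stored as one image: sketch (left half) | target (right half)."""

    def __init__(self, root: str, image_size: int = 256):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root)
            if f.lower().endswith((".png", ".jpg", ".jpeg"))
        )
        self.to_tensor = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
            transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # scale to [-1, 1] as Pix2Pix expects
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        pair = Image.open(self.paths[idx]).convert("RGB")
        w, h = pair.size
        sketch = pair.crop((0, 0, w // 2, h))    # left half: input sketch
        target = pair.crop((w // 2, 0, w, h))    # right half: colorized ground truth
        return self.to_tensor(sketch), self.to_tensor(target)
```

With a dataset in this form, a standard PyTorch `DataLoader` yields (sketch, target) batches that a Pix2Pix generator/discriminator pair can be trained on.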
We welcome contributions to improve the functionality of this framework. If you have suggestions or bug fixes, please fork the repository and submit a pull request. Ensure you follow the contribution guidelines in the `CONTRIBUTING.md` file.
This project is licensed under the MIT License - see the LICENSE file for details.
- Pix2Pix: Isola, P., et al. (2017). Image-to-Image Translation with Conditional Adversarial Networks. CVPR.
- Real-ESRGAN: Wang, X., et al. (2021). Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. ICCV Workshops.
- DeOldify: Antic, J. (2020). DeOldify: Bringing Color to the Past. GitHub.
- Shap-E: Jun, H., & Nichol, A. (2023). Shap-E: Generating Conditional 3D Implicit Functions. arXiv:2305.02463.
- MiDaS: Ranftl, R., et al. (2022). Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer. IEEE TPAMI.