"Effective Whole-body Pose Estimation with Two-stages Distillation" (ICCV 2023, CV4Metaverse Workshop)
[ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing
[CVPR 2024] DisCo: Referring Human Dance Generation in Real World
Wunjo CE: Face Swap, Lip Sync, Controlled Removal of Objects, Text & Background, Restyling, Audio Separation, Voice Cloning, Video Generation. Open Source, Local & Free.
Transfer ControlNet to any base model in diffusers 🔥 (see the usage sketch after this list)
A microframework on top of PyTorch with first-class citizen APIs for foundation model adaptation
A self-hosted web UI for 30+ generative AI models
Paddle Multimodal Integration and eXploration, supporting mainstream multimodal tasks, including end-to-end large-scale multimodal pretraining models and a diffusion model toolbox, with high performance and flexibility.
Always focus on prompting and generating
A tab for sd-webui for replacing objects in pictures or videos using a detection prompt
[ICLR 2025] Codebase for "CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation"
Official Code Release for [SIGGRAPH 2024] DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation
[NeurIPS 2023] Customize spatial layouts for conditional image synthesis models, e.g., ControlNet, using GPT
Low resolution upscaling fix for Clarity AI - Upscale and enhance your images with AI
[SIGGRAPH 2024] "EASI-Tex: Edge-Aware Mesh Texturing from Single Image", ACM Transactions on Graphics.
Transform your simple scribbles into architectural designs using style transfer with Stable Diffusion, LCM, IP Adapters and ControlNet. Scribble Architect combines creativity with generative AI technology, improving the inspiration process.
Implementation of the DiffusionOverDiffusion architecture presented in NUWA-XL, in the form of a ControlNet-like module on top of the ModelScope text2video model, for extremely long video generation.
This repository provides an interactive image colorization tool that leverages Stable Diffusion (SDXL) and BLIP for user-controlled color generation. With a retrained model using the ControlNet approach, users can upload images and specify colors for different objects, enhancing the colorization process through a user-friendly Gradio interface.
Apply ControlNet to video clips
Diffusion LoRA tutorial in Chinese; a Chinese-language tutorial on training virtual idols
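Many of the entries above build on the diffusers ControlNet pipeline. Below is a minimal sketch of that common usage pattern, assuming the public `lllyasviel/sd-controlnet-canny` and `runwayml/stable-diffusion-v1-5` checkpoints and a locally prepared edge map (`canny_edges.png` is a placeholder path); it illustrates the general pattern rather than the implementation of any specific repository listed here.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a Canny-edge ControlNet and attach it to a Stable Diffusion base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image (here, a Canny edge map) constrains the spatial layout
# of the output; the text prompt controls content and style.
canny_image = load_image("canny_edges.png")  # placeholder: any precomputed edge map

result = pipe(
    "a modern house at sunset, photorealistic",
    image=canny_image,
    num_inference_steps=30,
).images[0]
result.save("controlnet_output.png")
```

The same pattern extends to other conditioning types (depth, pose, scribbles) by swapping in the corresponding ControlNet checkpoint while keeping the base model unchanged.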