DragDiffusion: Harnessing Diffusion Models
for Interactive Point-based Image Editing


Yujun Shi¹     Chuhui Xue²     Jun Hao Liew²     Jiachun Pan¹     Hanshu Yan²     Wenqing Zhang²     Vincent Y. F. Tan¹     Song Bai²

¹National University of Singapore     ²ByteDance




Abstract

Precise and controllable image editing is a challenging task that has attracted significant attention. Recently, DragGAN introduced an interactive point-based image editing framework that achieves impressive editing results with pixel-level precision. However, since this method is based on generative adversarial networks (GANs), its generality is upper-bounded by the capacity of the pre-trained GAN models. In this work, we extend this editing framework to diffusion models and propose DragDiffusion. By leveraging large-scale pretrained diffusion models, we greatly improve the applicability of interactive point-based editing in real-world scenarios. While most existing diffusion-based image editing methods operate on text embeddings, DragDiffusion optimizes the diffusion latent to achieve precise spatial control. Although diffusion models generate images in an iterative manner, we empirically show that optimizing the diffusion latent at a single step suffices to produce coherent editing results, enabling DragDiffusion to complete high-quality edits efficiently. Extensive experiments across a wide range of challenging cases (e.g., images with multiple objects, diverse object categories, and various styles) demonstrate the versatility and generality of DragDiffusion.
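In essence, the edit reduces to a small optimization problem over the latent at one chosen denoising step. Below is a minimal PyTorch sketch of that idea, not the paper's released implementation: the helper get_features (a callable returning an intermediate UNet feature map for a latent), the point lists, and all hyperparameters are illustrative assumptions, and the patch-based motion-supervision loss follows the DragGAN framework that DragDiffusion extends.

import torch
import torch.nn.functional as F

def drag_latent(z_t, get_features, handles, targets,
                n_iters=80, lr=1e-2, radius=4):
    """Optimize the diffusion latent z_t at a single timestep so that UNet
    features around each handle point move one unit step per iteration
    toward its target point. Points are (row, col) coordinates on the
    feature map and are assumed to lie away from its border."""
    z_t = z_t.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z_t], lr=lr)
    r = radius
    for _ in range(n_iters):
        feat = get_features(z_t)                      # (1, C, H, W)
        loss = z_t.new_zeros(())
        for (hy, hx), (gy, gx) in zip(handles, targets):
            d = torch.tensor([gy - hy, gx - hx], dtype=torch.float32)
            if d.norm() < 1.0:                        # handle reached target
                continue
            sy, sx = (d / d.norm()).round().long().tolist()  # unit step
            src = feat[..., hy - r:hy + r + 1, hx - r:hx + r + 1]
            dst = feat[..., hy + sy - r:hy + sy + r + 1,
                            hx + sx - r:hx + sx + r + 1]
            # Stop-gradient on the source patch so gradients drag image
            # content toward the target rather than the reverse.
            loss = loss + F.l1_loss(dst, src.detach())
        if not loss.requires_grad:                    # all handles converged
            break
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # A full implementation would re-locate the handle points here
        # (point tracking) before the next iteration; omitted for brevity.
    return z_t.detach()

The optimized latent is then denoised through the remaining diffusion steps to produce the edited image; obtaining z_t from a real image via DDIM inversion and tracking the handle points between iterations are likewise left out of this sketch.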




Dragging Trajectories (Generated Images)

Figure: each example pairs a user edit (handle and target points) with the resulting dragging trajectory.


Dragging Trajectories (Real Images)

Figure: user edits (top row) and the corresponding dragging trajectories (bottom row).



More Dragging Results

Figure: dragging results across four categories: general objects, arts, animals, and scenes.


Paper


DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing

Yujun Shi, Chuhui Xue, Jiachun Pan, Wenqing Zhang, Vincent Y. F. Tan, Song Bai

arXiv, 2023.


@article{shi2023dragdiffusion,
    title={DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing},
    author={Shi, Yujun and Xue, Chuhui and Pan, Jiachun and Zhang, Wenqing and Tan, Vincent YF and Bai, Song},
    journal={arXiv preprint arXiv:2306.14435},
    year={2023}
}



Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful project, and inherits the modifications made by Jason Zhang and Elliott Wu.