Abstract

We introduce OneDiffusion, a versatile, large-scale diffusion model that seamlessly supports bidirectional image synthesis and understanding across diverse tasks. It enables conditional generation from inputs such as text, depth, pose, layout, and semantic maps, while also handling tasks like image deblurring, upscaling, and reverse processes such as depth estimation and segmentation. Additionally, OneDiffusion allows for multi-view generation, camera pose estimation, and instant personalization using sequential image inputs. Our model takes a straightforward yet effective approach by treating all tasks as frame sequences with varying noise scales during training, allowing any frame to act as a conditioning image at inference time. Our unified training framework removes the need for specialized architectures, supports scalable multi-task training, and adapts smoothly to any resolution, enhancing both generalization and scalability. Experimental results demonstrate competitive performance across tasks in both generation and prediction, such as text-to-image, multi-view generation, ID preservation, depth estimation, and camera pose estimation, despite a relatively small training dataset. Our code and checkpoints are freely available at the links below.
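The core idea in the abstract, treating every task as a sequence of frames where each frame gets its own noise scale, can be sketched in a few lines. This is a minimal illustration of the general per-frame noising scheme, not the authors' actual implementation: the function name, the variance-preserving mixing formula, and the array shapes are all assumptions for demonstration. A conditioning image is simply a frame whose noise scale is zero.

```python
import numpy as np

def noise_frames(frames, t, rng=None):
    """Noise each frame in a sequence at its own timestep t[i].

    frames: (N, H, W, C) array of images (conditioning + target views).
    t:      length-N list of per-frame noise scales in [0, 1]; t=0 leaves
            a frame clean, so at inference a conditioning image gets t=0
            while frames to be generated start from high noise.
    Hypothetical helper for illustration; OneDiffusion's real schedule
    and parameterization may differ.
    """
    rng = rng or np.random.default_rng(0)
    t = np.asarray(t, dtype=np.float64).reshape(-1, 1, 1, 1)
    eps = rng.standard_normal(frames.shape)
    # Simple variance-preserving mix of data and Gaussian noise.
    return np.sqrt(1.0 - t) * frames + np.sqrt(t) * eps

# Two-frame example: frame 0 is a clean conditioning image (t = 0),
# frame 1 is noised at t = 0.5 and would be denoised by the model.
frames = np.ones((2, 8, 8, 3))
noised = noise_frames(frames, t=[0.0, 0.5])
```

At training time the model would see random per-frame noise scales; at inference, pinning one frame's scale to zero turns that frame into the condition, which is what lets the same network run a task forwards (e.g. depth-to-image) or backwards (image-to-depth).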

Paper: https://arxiv.org/abs/2411.16318

Code: https://github.com/lehduong/OneDiffusion?tab=readme-ov-file

Model: https://huggingface.co/lehduong/OneDiffusion

Project Page: https://lehduong.github.io/OneDiffusion-homepage/

2 points

Orientation

Oh wow, it made him a gay cowboy.

2 points

Broke back convolution

