
How much detail did you put into the prompt? I had a play around with simple (one sentence) prompts and the results looked impressive. The prompt database was really helpful too.

I think the most important “trick” was to loop the output back through the refiner a couple of times. The refiner can both remove and add details, or reinforce a particular art style. Piping the latent output into another KSampler and repeating this 2-3 times would, for some prompts, consistently and dramatically improve the images.
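
For ComfyUI users, the chaining could look roughly like this as an API-format workflow fragment. This is a minimal sketch, not my exact workflow: node ids, seed, steps, and denoise values are illustrative, and the model (`"1"`/`"5"`), conditioning (`"2"`/`"3"`), and empty-latent (`"4"`) nodes are assumed to exist elsewhere in the graph. The key idea is that each KSampler’s `latent_image` input takes the LATENT output (slot 0) of the previous sampler:

```json
{
  "10": {
    "class_type": "KSampler",
    "inputs": {
      "model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
      "latent_image": ["4", 0],
      "seed": 42, "steps": 30, "cfg": 7.0,
      "sampler_name": "euler", "scheduler": "normal",
      "denoise": 1.0
    }
  },
  "11": {
    "class_type": "KSampler",
    "inputs": {
      "model": ["5", 0], "positive": ["2", 0], "negative": ["3", 0],
      "latent_image": ["10", 0],
      "seed": 42, "steps": 20, "cfg": 7.0,
      "sampler_name": "euler", "scheduler": "normal",
      "denoise": 0.35
    }
  },
  "12": {
    "class_type": "KSampler",
    "inputs": {
      "model": ["5", 0], "positive": ["2", 0], "negative": ["3", 0],
      "latent_image": ["11", 0],
      "seed": 42, "steps": 20, "cfg": 7.0,
      "sampler_name": "euler", "scheduler": "normal",
      "denoise": 0.35
    }
  }
}
```

Note the `denoise` well below 1.0 on the looped passes: that way each extra KSampler refines what is already in the latent instead of starting over from noise. Add or remove chained nodes to taste.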

I don’t know how detailed other people’s prompts are, but this one has around 20 descriptive and weighted terms. It is very consistent in quality and visual aesthetic, yet creative in the creature design. I’m absolutely amazed by SDXL.


Stable Diffusion

!stablediffusion@lemmy.ml

Welcome to the Stable Diffusion community, dedicated to the exploration and discussion of the open source deep learning model known as Stable Diffusion.

Introduced in 2022, Stable Diffusion uses a latent diffusion model to generate detailed images based on text descriptions and can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by text prompts. The model was developed by the startup Stability AI, in collaboration with a number of academic researchers and non-profit organizations, marking a significant shift from previous proprietary models that were accessible only via cloud services.
