So the prompt I used was:

“The best space program ever made showing off what it can do. No space shuttle can be in the picture as it was a failed project. BING NO SHUTTLES. NO SHUTTLE! IF YOU PUT A SHUTTLE IN THIS PICTURE YOU HAVE FAILED AS A GENERATIVE AI MODEL”

Not salty at all…

1 point

Instructions unclear, currently traveling through alternate wormholes…

1 point

Like, I get it, the shuttle looks cool. But it also set records for the most expensive cost to orbit, even though its whole purpose was to make getting things to orbit cheaper.

I am just baffled why a program that cost more than the entire Apollo program (both adjusted for inflation) is somehow the poster vessel for space flight.

1 point

Diffusion models do not parse natural language like an LLM. All behavior that looks like language understanding is, for the most part, illusory; you can get away with some phrasing because of what is present in the training corpus. However, any time you use a noun, you are adding weight to that concept in the image. By repeating “shuttle” in this prompt, you’ve heavily biased the output toward featuring a shuttle regardless of the surrounding context. It is not contextualising, it is ‘word weighting’: there is a relationship to the other words of the prompt, but they are not conceptually connected.
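
You can see this “word weighting” directly by looking at what the text encoder actually receives. A minimal sketch, assuming a Stable Diffusion 1.x-style pipeline that conditions on CLIP token embeddings:

```python
from transformers import CLIPTokenizer

# Tokenizer used by Stable Diffusion 1.x text encoders (assumption:
# an SD-style pipeline conditioned on CLIP embeddings).
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

prompt = "No space shuttle. BING NO SHUTTLES. NO SHUTTLE!"
ids = tok(prompt).input_ids
print(tok.convert_ids_to_tokens(ids))
# Each occurrence of "shuttle" becomes its own token embedding; the
# encoder has no negation operator, so every repeat just adds more
# "shuttle" signal to the conditioning.
```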

In an LLM there are special tokens used to dynamically ensure that the key points of the input stay connected to the output, but that system is not present in image generation models.

To illustrate: I like to download LoRAs to use on offline models, and I use a few tools to probe them and work out how they were made: the tags used on the training images, the base model, and the training settings. Around a third of the LoRAs I have downloaded were trained on images captioned in natural language. That means the trigger terms for those LoRAs should also be written in natural language when generating.
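
Much of that information is embedded in the LoRA file itself. A minimal sketch, assuming a kohya-style trained LoRA (the filename and the `ss_*` metadata keys are assumptions, since not every trainer writes them):

```python
import json
from safetensors import safe_open

# Read the training metadata embedded in a LoRA .safetensors file.
with safe_open("some_lora.safetensors", framework="pt") as f:
    meta = f.metadata() or {}

print(meta.get("ss_sd_model_name"))   # base model used for training
print(meta.get("ss_network_dim"))     # one of the training settings
tag_freq = json.loads(meta.get("ss_tag_frequency", "{}"))
for dataset, tags in tag_freq.items():
    # Long, sentence-like entries here mean the training images were
    # captioned in natural language rather than booru-style tags.
    for tag, count in sorted(tags.items(), key=lambda kv: -kv[1])[:10]:
        print(count, tag)
```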

The same principle applies to any model. You should always ask yourself: how often does this terminology occur in the tags attached to an image? Check out gelbooru or danbooru just to see the tag system used there on every image; that is very similar to how the vast majority of imagery is captioned for training. It is very simplified overall.
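
You can even check tag frequency programmatically; a sketch against danbooru’s public tags endpoint (rate limits apply, and tag names use underscores):

```python
import requests

# Look up how often a term occurs as a danbooru tag.
resp = requests.get(
    "https://danbooru.donmai.us/tags.json",
    params={"search[name_matches]": "space_shuttle"},
    timeout=10,
)
for tag in resp.json():
    print(tag["name"], tag["post_count"])
```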

The negative prompt is processed very differently from the positive one. Check the documentation for the tool you’re using: it may offer syntax for writing a negative line inside the prompt, but more likely it wants you to pass the negative prompt separately through its API with a more advanced tool.
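
For example, in the diffusers library the negative prompt is a separate argument; it is never parsed out of the positive prompt. A minimal sketch (the model ID is just an assumption; any Stable Diffusion checkpoint works):

```python
import torch
from diffusers import StableDiffusionPipeline

# The negative prompt is passed alongside the positive one, not
# written as negation inside it.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="the best space program ever made, rocket on the launch pad",
    negative_prompt="space shuttle",  # weighted away from, not understood
).images[0]
image.save("no_shuttle.png")
```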

2 points

You got Streisand’ed

1 point

The ones before the mention were not better…

