quixote84
I doubt this will happen, and if it did, it would do more harm than good. His followers would liken him to Paul.
The hallucinations are my favorite part of making these things. I spend maybe 5 minutes tweaking the prompt, and I stop when it hits that sweet spot where my instincts say “I should not believe what I’m seeing”. Pair it with a lie, and it’s a training tool for spotting LLM-generated images as well as a meme.
Based on what I got to see of the Helix vs. Dome mythology developing in “Twitch Plays Pokemon Red”, I’ve got a hunch that those 40 years in the desert a few thousand years ago were nothing to write home about either.
Mythology spews forth from wherever. It’s arbitrary and yet still meaningful.
If there’s a spiritual practice out there that’s genuinely capable of improving one’s health, it’s going to look nothing like either hustle culture or straight-up health denial. In 2009, I weighed around 350 as well. I’m down to around 240 now, with more yet to go.
Sugar was my entire problem.
I don’t believe that it’s mine to dole out.
I was included in the e-mail conversation where they were passing bits of site code and data around as part of the attempt to rebuild it, and I was encouraged to make more LLM images with the data. At no point was I ever told “This is yours now, to do with what you wish”. What I was told was that “the idea of applying AI generated images, hallucinations and all, into posts about lies is brilliant”. They’re from the UK, so in this context I believe “brilliant” means “kinda cool”.
Whatever the case, there’s a vast gulf between “We like what you’re doing” and “please give 20 years of our effort to everyone who asks for it”. I will pass along to the guys that, the first time I posted a lie anywhere other than my old faceplant page, multiple parties immediately expressed interest in seeing the site ride again.
I know some folks don’t like to hear this, but it isn’t as easy as just dumping any random text you want into an LLM and getting a viable image you’re happy with on the other side. Each time I make an image for a DWOL lie, it involves six to eighteen base images with progressively tweaked prompts until I’m satisfied with the output. Dumping the lie straight into an LLM and rolling with whatever output I receive ends up looking more like the image attached to this comment.