Cavendish @lemmynsfw.com

Mostly playing around with Stable Diffusion and generative AI imagery. I refuse to call it "art" out of respect for real artists.

📷 pixelfed

🎨 CivitAI

Posts 32
Comments 91
  • Clowning Around [Album]
  • Lol! I promise this is a one-time thing. 🤡

  • Springtime on the Plaza [Album]
  • Thanks!

  • Question: Recommendations for image hosting that supports both albums and prompt data?
  • The method I've settled on takes a bit of work to put together. First, I upload PNGs to catbox.moe. This preserves the metadata, so someone can feed the image into the A1111 PNG Info tab or paste the URL into https://pngchunk.com.

    Next, I upload JPG copies here. That gives me the lemmynsfw-hosted URLs and builds the gallery. Then I tie them together with markdown so that each gallery image also links to its PNG (there's a rough script sketch at the end of this comment). The final format looks like this:

    [![](https://lemmynsfw.com/pictrs/image/59c7f6e6-de70-4354-937b-5b82b67fc195.webp)][1]
    [![](https://lemmynsfw.com/pictrs/image/88b14211-4464-4cd2-bb28-05e781dd5fc8.webp)][2]
    [![](https://lemmynsfw.com/pictrs/image/bf3a69bb-d0f9-4691-b95e-6794880bbc86.webp)][3]
    
    [1]: https://files.catbox.moe/5dsqza.png
    [2]: https://files.catbox.moe/dljkxc.png
    [3]: https://files.catbox.moe/kcqguv.png
    

    This seems to work well. The only hiccup is that I need to include the first image twice: once in the post body so it shows in the gallery, and once as the post header image. That works okay in the browser, but some Lemmy mobile apps show it as a duplicate.

    Here's the final result: https://lemmynsfw.com/post/1372540
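
    If you post a lot of these, the upload-and-assemble step is easy to script. Here's a rough sketch (my own helper names, not an official tool) that pushes the PNGs to catbox via its documented user API and prints the post body; the lemmynsfw JPG URLs still have to be gathered by hand after uploading the copies there:

    import requests

    CATBOX_API = "https://catbox.moe/user/api.php"

    def upload_to_catbox(path):
        """Upload a PNG (metadata intact) and return the direct file URL."""
        with open(path, "rb") as f:
            resp = requests.post(
                CATBOX_API,
                data={"reqtype": "fileupload"},
                files={"fileToUpload": f},
                timeout=120,
            )
        resp.raise_for_status()
        return resp.text.strip()  # catbox replies with the bare URL

    def build_gallery(pairs):
        """pairs = [(lemmy_jpg_url, local_png_path), ...] -> post body markdown."""
        body, refs = [], []
        for i, (jpg_url, png_path) in enumerate(pairs, start=1):
            body.append(f"[![]({jpg_url})][{i}]")
            refs.append(f"[{i}]: {upload_to_catbox(png_path)}")
        return "\n".join(body) + "\n\n" + "\n".join(refs)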

  • Come feel this carpet with me
  • In the past, I've uploaded to catbox.moe and then provided a link here.

    Edit to add that I'm looking forward to seeing this. I haven't gotten good results with AnimateDiff and realistic models.

  • Strapped in locker room
  • Two belly buttons, or one extremely long belly button?

  • Stylistic Anime Flower tattoos
  • I like the rim lighting on #2

  • Vantage Point
  • Thanks!

  • Topless in Jeans
  • Don’t know anything about Perchance, but if you have the option of running Stable Diffusion locally, that will give you a ton of stylistic options.

  • Waking up on the right side of bed
  • Da Vinci did that deliberately with the Mona Lisa, which proves he was an AI.

  • More from the island of Guarma [Album]
  • I hope you're not saying "reverse engineer" like it's a negative or shady practice. I freely share all of my prompts to help people see what's working for me, and I like to explore what's working for everyone else. I've had good success with simpler prompts too, like the one for this parrot: https://civitai.com/images/3050333.

  • More from the island of Guarma [Album]
  • No ControlNet or inpainting. Everything was generated in one go with a single prompt. I'll sometimes use regional prompts to set zones for the head and torso (usually the top 40% is where the head goes, bottom 60% for torso/outfit). But even when I have regional prompting turned off, it will still generate a 3/4 (cowboy) shot.

    I assume you pulled the prompt out of one of my images? If not, you can feed them into pngchunk.com (or read the chunk locally; there's a small sketch at the end of this comment). Here's the general format I use with regional prompting:

    *scene setting stuff*
    ADDCOMM
    *head / hair description*
    ADDROW
    *torso/body/pose*
    

    The loras in the top (common) section are weighted pretty low, 0.2 - 0.3, because the common section gets repeated in each of the two regional rows, so the weights stack. I think they end up effectively around 0.6 - 0.8.

    Prompt example:

    photo of a young 21yo (Barbadian Barbados dark skin:1.2) woman confident pose, arms folded behind back, poised and assured outside (place cav_rdrguarma:1.1),
    (Photograph with film grain, 8K, RAW DSLR photo, f1.2, shallow depth of field, 85mm lens),
    masterwork, best quality, soft shadow,
    (soft light, color grading:0.4)

    ADDCOMM

    sunset beach with ocean and mountains and cliff ruin in the background,
    (amethyst with violet undertones hair color in a curly layers style:1.2),
    perfect eyes, perfect skin, detailed skin

    ADDROW

    choker,
    (pea green whimsical unicorn print bikini set:1.1) (topless:1.3) cameltoe (undressing, panty pull:1.4),
    (flat breast, normal_nipples:1.4),
    (tan lines, beauty marks:0.6),
    (SkinHairDetail:0.8)
    

    It may be that you're not describing the clothing/body enough? My outfit prompts are pretty detailed, and I think that goes a long way toward helping Stable Diffusion decide how to frame things.
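
    Since pngchunk.com has come up twice: if you have Python handy, you can read the chunk locally instead. This is a minimal sketch (read_parameters is my own name for it) assuming the PNG came straight out of A1111, which stores its infotext in a PNG text chunk keyed "parameters"; Pillow exposes text chunks through Image.info:

    from PIL import Image

    def read_parameters(path):
        """Return the A1111 generation settings embedded in a PNG, if any."""
        with Image.open(path) as img:
            return img.info.get("parameters")  # None if the chunk was stripped

    print(read_parameters("image.png") or "no parameters chunk found")

    Note this only works on the PNGs; JPG re-encoding drops the chunk, which is exactly why the catbox copies matter.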

  • More from the island of Guarma [Album]
  • Have fun cooking that new GPU!

  • Leisure Suit Larry - Eve generation (original version)
  • Wow, this is great!

    Here's the original pixel art from 1987, for the youngus amongus:

  • Remember Atari's Strip Poker? Testing Mellisa generation
  • This concept is great! Requesting the girls from Sierra's old Leisure Suit Larry games. Eve in the hot tub... Maybe not the gum-chewing hooker though.

  • Tits for Tats
  • Gorgeous!

  • More animation experiments
  • I have prompt padding on; without it I get two scenes with just 8 frames. Are you using the v1.5-2 motion model? That one seems to need the additional camera movement loras, otherwise you get very little movement. I went back to the v1.4 motion model, but it kind of stinks for realism. So far, I've only been happy with the text2image workflow. I haven't gotten anything good from img2img.

  • More animation experiments
  • I'm running an Intel 12900K and a 3090 with 24 GB of VRAM. Part of the hand issue may be that I'm pushing the resolution beyond spec, up to 768x960. At that res I can do 32 frames, plus it interpolates an additional 2 between each generated frame, for a total of 124 frames in the final output. I can go up to 48 frames before hitting out-of-memory errors, but at 48 I start getting two completely different scenes per clip.

    Haven't tried adding ControlNet into the mix yet. That's a whole new bag of options that I'm not mentally prepared for.
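
    For anyone trying to reproduce those frame counts, the arithmetic is just generated frames plus the interpolated ones inserted into each gap between adjacent frames; the exact total depends on how the interpolator treats the endpoints. A quick back-of-the-envelope helper (total_frames is my own sketch, not a setting):

    def total_frames(generated, inserted_between):
        """Generated frames plus interpolated frames in each of the gaps."""
        return generated + inserted_between * (generated - 1)

    print(total_frames(32, 2))  # 94
    print(total_frames(32, 3))  # 125, in the ballpark of the 124 quoted above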

  • More animation experiments
  • I've tried using fewer than 75 tokens (literally just "woman on beach wearing dress") and they weren't coming out much different, stability-wise, than my 300+ token monstrosity prompts that let me play OCD with fabric patterns and hair length and everything else. So I'm not sure why my experience differs so much from the conventional advice. I think the majority of the jumping is from the dynamic prompts. Here's one that didn't change the prompt per-frame (warning: hands!) and it's much more stable: https://files.catbox.moe/rgjbem.mp4. There's definitely a million knobs to fiddle with in these settings, and it's all changing every day anyway, so it's hard to keep up!

  • More animation experiments
  • That's just the nature of Stable Diffusion. I didn't prompt anything about eye color, so the models fall back on internal biases: on average, blonde hair = blue eyes and brown hair = brown eyes.

  • test awebp
  • Works fine in Bean for iOS

  • Are spoiler tags not working?

    Just a quick test.

    ::: spoiler spoiler
    Regular spoiler
    :::

    ::: spoiler Fancy Custom Spoiler
    Fancier spoiler
    :::
