I've been using Stable Diffusion (via Automatic1111) for a long time, and I've become fairly adept at it. Recently Bing's DALL-E 3 has surpassed it in terms of composition and instruction-following, but I still find Stable Diffusion really important for doing "finishing" work on DALL-E 3's outputs, so I don't expect to stop using it any time soon.
Lately I've been experimenting with Koboldcpp and locally-run large language models. I've been coming up with little ideas for scripts and programs that use its API to do stuff.
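For anyone curious, here's the rough shape of one of those little scripts. It's just a sketch against the KoboldAI-compatible endpoint Koboldcpp serves — the port (5001 is the default) and the sampler settings are assumptions you'd want to check against your own instance.

```python
# Rough sketch: ask a locally running Koboldcpp instance for tavern names.
# Assumes the KoboldAI-compatible API on the default port 5001; adjust as needed.
import requests

API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Give me five creative names for a seaside fantasy tavern:\n1.",
    "max_length": 120,    # number of tokens to generate
    "temperature": 0.8,   # some randomness so reruns give different ideas
    "top_p": 0.9,
}

resp = requests.post(API_URL, json=payload, timeout=120)
resp.raise_for_status()

# The API returns {"results": [{"text": "..."}]}
print("1." + resp.json()["results"][0]["text"])
```

Nothing fancy, but it's enough to wire model output into whatever else the script is doing.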
I once used Craiyon.com to generate an image of an NPC for an online D&D game I was DM'ing. (And if you zoomed in too far, you could see it was a little fucked up.) Aside from that, none.
Same here. Needed an image of Uncle Sam as an Air Genasi. Can't get stuff that specific without a commission (which is expensive and not worth it for a joke sidequest) or AI, so AI it is.
Ditto. Except it's Nightcafe. The results are good enough.
I've also asked GPT-3 for plot suggestions and riddles. They aren't great. It takes a bunch of time to coax halfway decent responses out of it. But it's sort of fun, so I'll probably keep doing it.
I did have Dall-E paint me a picture of “a mouse jumping a motorcycle through a flaming ring made of stone while pursued by vaguely ninja-like evil henchmen characters”
Which makes me really, really want this as a video game. Just riding the motorcycle through various environments with ninjas popping out left and right trying to grab you. Sometimes they’ve got nunchucks, sometimes nets, sometimes they swing down on a rope to get you. You get power ups too like little bombs you can throw.
But that’s the only time I used the image generation. Mostly I’ve been having GPT-4 explain history and technology to me.
I’ve been trying Stable Diffusion, but even with downloaded models, nothing I make looks even CLOSE to the quality of Bing Image Creator with the same prompts. I don’t know what I’m doing wrong.
I actually use whatever is most convenient rather than jumping between a bunch of them, and Bard is the most convenient for me because of my Google account.
That's pretty cool! I like the Max Headroom variants. Somehow, I think Mr Headroom in particular would approve of generative AI tech.
I'm just getting into this realm myself. I'm using ComfyUI with SDXL 1.0 and the new LCM LoRA, but I'm really struggling to get consistent framing. (Like, I'll ask for a "full length photo of X" and get nothing but close-up headshots for a dozen images.)
Frankly, I've gotten nothing but shyte from LCM on the initial image. BUT it's fantastic for upscaling img2img with a denoise of 0.1 and Ultimate SD Upscale. Not sure how ComfyUI would do it, though; I find its UX too slow on my PC, so I stick to A1111.
But to solve your problem specifically, learn ControlNet. By far my most used extension.
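If you ever want to poke at that outside the A1111/ComfyUI interfaces, here's roughly what an OpenPose ControlNet run looks like in code with the diffusers library. This is just a sketch, not what the UIs do under the hood: it uses the SD 1.5 ControlNet checkpoint I know off-hand (SDXL ControlNets exist too), and pose.png is a placeholder for a full-body pose map you'd extract from a reference image or draw yourself.

```python
# Sketch: constrain framing/pose with an OpenPose ControlNet so "full length photo"
# actually comes out full length. Model IDs and pose.png are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Full-body stick-figure pose map; get one from a reference photo with an
# OpenPose preprocessor, or reuse one you already have.
pose = Image.open("pose.png")

image = pipe(
    "full length photo of a knight in weathered armor, outdoors",
    image=pose,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,  # how strictly to follow the pose
).images[0]
image.save("framed_output.png")
```

The point is that the pose map, not the prompt, decides the framing, which is exactly what the extension gives you inside A1111.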
I've been using Claude from time to time, but just for non-serious things: short stories that never leave the service, and help coming up with names for characters and places in a story I'm writing about a Pokémon region.
Otherwise I haven't been doing much with AI, aside from one Japanese translation service (Miraitranslate) that claims to use AI for its translations, and even then I only use their demo every once in a while.
I second Stable Diffusion; I was using Automatic1111 to visualize characters for a script. The results tend to be fairly generic, but with a few tweaks it's alright. It's mostly for brainstorming for me right now, since I can draw just fine and there are fewer legal issues if I were ever to use it for game assets, etc. LoRAs and neural nets are kind of game changers, too.
Naturally, for code I was using GPT-3.5, but it got kind of bad. I would upgrade, but I've been a bit too lazy/cheap to look for good alternatives. It saved me a lot of training time when I needed to pick up R real quick for a contract job, though.