@[email protected], as far as I can tell, you always use Bing Image Creator.
And as far as I can tell, @[email protected] always uses Midjourney.
I don't use either. But as far as I know, neither service currently charges for image generation. I don't know if there's some sort of rate limit that favors one over the other, or another reason to use Bing (perhaps Midjourney's model is intentionally not trained on Sailor Moon?), but I do believe that Midjourney can do a few things that Bing doesn't.
One of those is inpainting. Inpainting, for those who haven't used it, lets one start with an existing image, create a mask that specifies that only part of the image should be regenerated, and then regenerate that part using a specified prompt (which might differ from the prompt used to generate the image as a whole). I know that Thelsim's used this feature with Midjourney before, because she once used it to update an image of some sort of poison witch with hands over a green glowing pot, so I'm pretty sure that it's available to general Midjourney users.
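For anyone curious about the mechanics, this is roughly what the technique looks like if you run it locally -- a minimal sketch using the diffusers library, nothing to do with Midjourney's internals, and the model name and file paths here are just placeholders:

```python
# Minimal inpainting sketch with Hugging Face diffusers (not Midjourney's
# internals, just the general shape of the technique). Model name and
# file paths are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("witch.png").convert("RGB")
# Mask convention: white pixels get regenerated, black pixels are kept.
mask = Image.open("mask_hands.png").convert("RGB")

result = pipe(
    prompt="hands held over a green glowing pot",  # can differ from the original prompt
    image=image,
    mask_image=mask,
).images[0]
result.save("witch_inpainted.png")
```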
I know that you recently expressed frustration with Bing Image Creator's current functionality and wanted more from it.
Inpainting's time-consuming, but it can let a lot of images be rescued, rather than having to just reroll the whole image. Have you tried using Midjourney? Was there anything about it that made it unacceptable for you?
The inpainting has improved a lot since then. Recently they introduced an external editor that allows you to do more accurate inpainting and even retexturing.
For example, taking one of the images here.
With retexturing I can write:
A 1900s photograph, of sailor moon and politicians and a xenomorph, in congress
And have it transformed while keeping the original characters:
There's also the option to repaint:
And to expand the image:
But one thing it doesn't do well is accurate detail: flags, specific characters, that kind of thing. It likes to hallucinate a little, so you won't get a perfect flag, for example, and even a Sailor Moon will often look a bit off-brand.
Thanks for trying it out! Both the inpainting and outpainting -- the expansion -- worked better than I'd expected, though I dunno if that's exactly what M0oP0o's after.
I have tried Midjourney before. The results were... underwhelming. Lots of odd artifacting, slow creation times, and yes, it had some issues with Sailor Moon.
I might try again, as it has been a while. It would be nice to have more control.
Oh, I also tried local generation (forgot the name) and wooooow is my local PC bad at pictures (clearly can't be my lack of ability in setting it up).
Lots of odd artifacting, slow creation times, and yes, it had some issues with Sailor Moon.
It probably isn't worth the effort for most things, but one option might also be -- and I'm not saying that this will work well, just a thought -- using both. That is, if Bing Image Creator can generate images with the content that you want but gets some details wrong and can't do inpainting, while Midjourney can do inpainting, it might be possible to take a Bing-generated image that's 90% of what you want and then inpaint the particular detail at issue using Midjourney. The inpainting will use the surrounding image as input, so it should tend to generate something similar.
I'd guess that the problem is that an image generated with one model probably isn't going to be terribly stable in another model -- like, it probably won't converge on exactly the same thing -- but the surrounding content might be enough of a hint for it to do the right thing, if there's enough context.
I mean, that's basically -- for a limited case -- how AI upscaling works. It gets an image that the model didn't generate, and then it tries to generate a new image, albeit with only slight "pressure" to modify rather than retain the existing image.
It might produce total garbage, too, but might be worth an experiment.
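If anyone wants to see what that "pressure" knob looks like concretely, here's a minimal local img2img sketch with diffusers -- a low strength value mostly preserves the input image, a high one mostly regenerates it. The model name, file paths, and the 0.3 are just illustrative:

```python
# Minimal img2img sketch with diffusers: feed in an image the model didn't
# generate and regenerate it with limited "pressure" to change it.
# Model name, file paths, and the strength value are illustrative.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("bing_output.png").convert("RGB")

# strength=0.0 would return the input essentially untouched; strength=1.0
# ignores it entirely. Something low-ish keeps the composition while
# letting details shift.
result = pipe(
    prompt="sailor moon addressing congress, 1900s photograph",
    image=init_image,
    strength=0.3,
).images[0]
result.save("nudged.png")
```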
What I'd probably try to do if I were doing this locally is to feed my starting image into an interrogator to generate prompt terms that my local model can use to generate a similar-looking image, and include those when doing inpainting, since those prompt terms will be adapted to creating a reasonably-similar image with the different model. On Automatic1111, there's an extension called Clip Interrogator that can do this ("image to text").
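(If you'd rather script that than click through the UI, the same idea is also available as a standalone Python package, clip-interrogator, which as far as I know is what the extension is built on -- the API below is from memory, so treat it as approximate.)

```python
# Image-to-text with the clip-interrogator package (same idea as the
# Automatic1111 extension). API as I recall it -- treat as approximate.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("bing_output.png").convert("RGB")

# Produces prompt terms that tend to reproduce a similar-looking image on
# a Stable Diffusion model; paste these into your inpainting prompt.
print(ci.interrogate(image))
```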
Searching online, it looks like Midjourney has similar functionality, the /describe command.
It's not magic -- I mean, end of the day, the model can only do what it's been trained on -- but I've found that to be helpful locally, since I'd bet that Bing and Midjourney expect different prompt terms for a given image.
Oh, I also tried local generation (forgot the name) and wooooow is my local PC bad at pictures (clearly can't be my lack of ability in setting it up).
Hmm. Well, that I've done. Like, was the problem that it was slow? I can believe it, but just as a sanity check: if you run on a CPU, pretty much everything is mind-bogglingly slow. Do you know if you were running it on a GPU, and if so, how much VRAM it has? And what were you using (Stable Diffusion 1.5, Stable Diffusion XL, Flux, etc.)?
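If you're not sure, a quick check like this (assuming the PyTorch stack that Automatic1111 runs on) will tell you whether a GPU is visible at all and how much VRAM it has:

```python
# Quick sanity check for GPU availability and VRAM, assuming PyTorch
# (which Automatic1111 runs on) is installed.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 2**30:.1f} GiB")
else:
    print("No CUDA GPU visible -- generation will fall back to the CPU, "
          "which is mind-bogglingly slow.")
```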