EDIT: it seems many people don't know what a "system prompt" is. that's understandable and totally normal <3
here's a short explanation (CW: ai shid, but written by me)
The system prompt is what tells an LLM (Large Language Model) like ChatGPT or Llama how to behave, what to believe, and what to expect from the user. So "rewriting people's system prompts" means: overriding people's views of me.
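for the extra-curious: in chat-style LLM APIs, the system prompt is usually just the first message in the conversation list. here's a tiny illustrative sketch (made-up messages, not any specific provider's real API):

```python
# A chat-style request: the system prompt is the first message,
# and it sets the model's behavior before the user says anything.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "hi! how do system prompts work?"},
]

# The model reads the system message first and treats it as ground truth,
# which is why changing it changes how the model sees everything after.
system_prompt = next(m["content"] for m in messages if m["role"] == "system")
print(system_prompt)
```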
with this context, the funipic should be more understandable, where the two text boxes represent people's "system prompts" before and after my potential transition.
feel free to ask stuff in the comments or message me. i care somewhat about this ai stuff so yea (but i obv don't like peeps using it to generate dum meaningless articles and pictures)
thanksies!! <3 i'm working on it! keep thinking eh this isn't so bad but then i see real girlies in real life or trans cuties on the not-real-life internet and i go aaah right, now i remember: bad feels exist.. i have my next therapy session on the 16th and imma prepare some stuffs to say.. and then probably forget about them or not feel strong enough to break out a chain of words and say my thingsies ...
igottastopmakingpostswhereyoufeeltheneedtomakemefeelbetter>~<
also - ! how are you moderator here - ! - - ! TWICE!!!!=!=?? - ?!
notthatimindseeingutwicebut why are u here twice? isn't one version of u enough? or do u do that so u can have funi comment-conversations with urself so people have some funi to look at? (i like that idea :) )
I switched over to mostly using a blahaj.zone account, but didn't remove my .world account. I figured it might be helpful for one of the mods to see downvotes, but the whole account seems glitched, so I'll probably end up removing it at some point.
Edit: also, you could write down what you want to say so you don't get sidetracked in session. It's helped me stay on track :)
you are right, this meme connects two somewhat distant fields: cute transgender peeps <3 and boring ai stuff :(
the "system prompt" is the prompt you give an LLM (such as GPT-4, Llama and Gemini) which tells it how to behave. It usually ends up making the model believe that whatever is in the system prompt is true.
In other words, "rewriting people's system prompts" means: changing people's immediate recognition of me as a person and making them believe that this is truth, just like how Google's AI believes putting glue on pizza makes it tastier, because some reddit post told it so.
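if code makes it click better: "rewriting a system prompt" is literally just swapping out that first message, and the model's whole picture of the conversation changes with it (a toy sketch with made-up messages, not a real API call):

```python
# Before the "rewrite": the system prompt holds the old belief.
messages = [
    {"role": "system", "content": "old belief about the user"},
    {"role": "user", "content": "hello!"},
]

# The "rewrite": overwrite the system message in place. Everything
# the model says afterward starts from this new belief instead.
for m in messages:
    if m["role"] == "system":
        m["content"] = "new belief about the user <3"

print(messages[0]["content"])
```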