Discovering Locally Run Language Models: Share Your Favorites/Not So Favorites!
Let's talk about our experiences working with different models, either known or lesser-known.
Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.
The wizard-vicuna family is my favorite; they successfully combine lucidity with creativity. Wizard-vicuna-30b is competitive with guanaco-65b in most cases while being subjectively more fun. I hope we get a 65b version, or a Falcon 40B one.
I've been generally unimpressed with models advertised as good for storytelling or roleplay; they tend to be incoherent. It's much easier to get wizard-vicuna to write fluent prose than it is to get one of those to stop mixing up characters or rules. I think there might be some sort of poison pill in the Pygmalion dataset; it's the common factor in all the models that didn't work well for me.
W-V is supposedly trained for "USER:/ASSISTANT:" but I've found it flexible and able to work with anything that's consistent. For creative writing I'll often use "USER:/STORY:". More than two such tags also work: I did an RPG-style thing with three characters plus an omniscient narrator, just by describing each of them with their tag in the prompt, and it worked nearly flawlessly. Very impressive, actually.
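To make the multi-tag idea concrete, here's a minimal sketch of how such a prompt could be assembled. The character names and descriptions are invented for illustration; the only real requirement, per the post above, is that each tag is used consistently.

```python
# Sketch of a multi-tag roleplay prompt in the style described above.
# All names/descriptions are hypothetical examples, not from any model card.

def build_prompt(characters: dict, opening: str) -> str:
    """Describe each participant with its tag, then start the first turn."""
    intro = "\n".join(
        f"{tag}: {description}" for tag, description in characters.items()
    )
    return f"{intro}\n\n{opening}"

prompt = build_prompt(
    {
        "NARRATOR": "An omniscient narrator who describes scenes and outcomes.",
        "KAEL": "A cautious ranger who speaks tersely.",
        "MIRA": "A cheerful mage who asks a lot of questions.",
    },
    "USER: The party reaches the gate at dusk.\nNARRATOR:",
)
print(prompt)
```

Ending the prompt on a bare tag (here `NARRATOR:`) cues the model to continue as that participant.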
I've been doing RP with Wizard-Vicuna 13B Uncensored. It's good and very fast (GGML v3 q5_K_S variant), but it sometimes forgets it's roleplaying and spits out a story instead.
With a quantized GGML version you can just run it on CPU if you have 64 GB of RAM. It is fairly slow, though; I get about 800 ms/token on a 5900X. Basically you start it generating something and come back in 30 minutes or so. You can't really carry on a conversation.
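The ~30-minute figure is consistent with the quoted speed. A quick back-of-the-envelope check, assuming the 800 ms/token from the post and a hypothetical full 2048-token generation:

```python
# Sanity check of the timing above. 800 ms/token is the figure quoted
# for a Ryzen 5900X; the 2048-token count is an assumed generation length.
MS_PER_TOKEN = 800
tokens = 2048

total_minutes = tokens * MS_PER_TOKEN / 1000 / 60
print(f"{tokens} tokens at {MS_PER_TOKEN} ms/token ≈ {total_minutes:.0f} minutes")
```

That works out to roughly 27 minutes for a full-context reply, which is why interactive chat isn't practical at this speed.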
I'd have to say I'm very impressed with WizardLM 30B (the newer one). I run it in GPT4All, and even though it is slow, the results are quite impressive.
Which one is the "newer" one? Looking at the quantised releases by TheBloke, I only see one version of 30B WizardLM (in multiple formats/quantisation sizes, plus the unofficial uncensored version).