Code Llama 70B is a new and improved version of Meta AI’s code generation model that can write code in various programming languages from natural language prompts or existing code snippets.
Anyone using the code-specific models -- how are you prompting them? Are you using any integration with Vim, Emacs, or another truly open-source and offline text editor/IDE, not something Electron- or Proton-based? I've compiled VS Code before, but it is basically useless in that form, and the binary version sends network traffic like crazy.
Nice. Thanks. I'll save this post in case I use ollama in the future. Right now I use a codellama model and a mythomax model, but I'm not running them via a localhost server; I just get output in the terminal or LM Studio.
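For anyone who does go the localhost route later: ollama exposes a small HTTP API, by default on port 11434. A minimal sketch with only the standard library (the endpoint path, the `stream` flag, and the `codellama` model name follow ollama's documented defaults, but check your local install):

```python
import json
import urllib.request

# Default ollama endpoint; adjust host/port if you changed `ollama serve`.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "codellama") -> dict:
    # stream=False asks for a single JSON object instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "codellama") -> str:
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The completion text lives in the "response" field.
        return json.loads(resp.read())["response"]

# Usage (requires `ollama serve` running with the model pulled):
#   print(generate("Write a Python function that reverses a string."))
```

Nothing here leaves the machine: the request goes to localhost only, which fits the offline setups discussed above.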
Because it doesn't call out to the internet. I even put LM Studio behind firejail to prevent it from doing so. Thus any code I feed it (albeit pretty trivial code) doesn't end up in ChatGPT's overarching data set.
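The sandboxing step mentioned above can be as simple as denying the process any network access at all. A sketch, assuming firejail is installed and `lmstudio` is the launch command on your system (the binary name varies by install method):

```shell
# Launch LM Studio inside a sandbox with no network interfaces;
# --net=none puts the process in a new, empty network namespace.
firejail --net=none lmstudio

# Sanity check: the same sandbox should make any outbound request fail.
firejail --net=none curl --max-time 5 https://example.com
```

Local inference doesn't need the network once the model is on disk, so `--net=none` costs nothing and guarantees no code leaves the machine.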
It can still produce usable results; it's just not as consistent. Whenever it gets into a repetitive loop, I just restart it, resetting the initial context, which generally keeps it from repeating itself, at least initially. To be fair, I've also seen this with ChatGPT, just not as often.