How to run it locally?
#1 opened by thedenisnikulin
Would be super cool if this model were available in Ollama
But even if you run it locally with, say, LM Studio, Zed won't be able to do Copilot-like inline completion with it; you can only chat with it in the sidebar.
Mhh, I don't know how Zed works, but you can serve an OpenAI-compatible endpoint with LM Studio/Ollama.
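Once either of them is serving, any OpenAI-compatible client can talk to the model. A minimal sketch, assuming Ollama's default endpoint (`http://localhost:11434/v1`; LM Studio defaults to `http://localhost:1234/v1`) and a placeholder model tag:

```python
# Minimal sketch: chat with a locally served model over an OpenAI-compatible API.
# Assumes Ollama's default endpoint; for LM Studio use http://localhost:1234/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server
    api_key="ollama",                      # any non-empty string works locally
)

response = client.chat.completions.create(
    model="your-local-model",  # placeholder: use the tag shown by `ollama list`
    messages=[{"role": "user", "content": "Write a hello world in Rust."}],
)
print(response.choices[0].message.content)
```

Whether Zed can be pointed at that endpoint for inline completion (rather than just chat) is a separate question, as noted above.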