
How to run it locally?

#1
by thedenisnikulin - opened

Would be super cool if this model were available in Ollama

  1. In the model tree section of the model page, click "Quantizations"
  2. Select a model
  3. Click "Use this model" on the quantized model's page
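The same flow works from the command line: Ollama can pull GGUF quantizations directly from the Hugging Face Hub using the `hf.co/` prefix. The repo path and quant tag below are placeholders, not this model's actual ID — substitute the ones shown in the "Quantizations" section:

```shell
# Pull and run a GGUF quantization straight from the Hub.
# Replace <user>/<repo> and the quant tag (e.g. Q4_K_M) with the
# real values from the model page's "Quantizations" section.
ollama run hf.co/<user>/<repo>-GGUF:Q4_K_M
```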


But even if you run it locally with, say, LM Studio, Zed won't be able to do Copilot-like inline completion with it; you can only chat with it in the sidebar.

Hmm, I don't know how Zed works, but you can serve an OpenAI-compatible endpoint with LM Studio/Ollama.
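For example, once the local server is up, any OpenAI-style client can talk to it. This is just a sketch: the base URL assumes LM Studio's default port 1234 (Ollama's OpenAI-compatible endpoint defaults to `http://localhost:11434/v1`), and the model name is a placeholder for whatever your server actually lists:

```python
import json
import urllib.request

# Assumed default: LM Studio serves an OpenAI-compatible API at
# localhost:1234 (Ollama uses localhost:11434). Adjust as needed.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a local server."""
    payload = {
        "model": model,  # placeholder; use the name your server reports
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("qwen2-7b-instruct", "Hello!")
# To actually send it (requires the local server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```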
