OK to upload quantized versions to Ollama library?
It's a lot easier to run models locally using Ollama, both because of how the tool lets you download and run models and because models in the Ollama library are quantized by default. It's the main tool I recommend to others who want to try LLMs locally.
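To illustrate what I mean by "easier": once a model is in the Ollama library, downloading and chatting with it is a couple of commands (llama3.1 below is just an example of a library model, see the link further down):

ollama pull llama3.1
ollama run llama3.1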
Do you have any problem with me quantizing the model and uploading it to the Ollama library at https://ollama.com/library? Asking because this model is behind a custom license here on HF.
For reference, the Llama family of models from Meta follows the same pattern: a license agreement that has to be accepted on Hugging Face, while the models are also distributed freely through other platforms with the license included but not enforced there. See Llama 3.1 in the Ollama library as an example: https://ollama.com/library/llama3.1/blobs/0ba8f0e314b4
This could be troublesome given some of the licensing they state in the readme:
"4.2 Geographical Restriction
Only organizations having residence or registered main office in the Nordic countries (Denmark, Norway, Sweden, Finland, and Iceland) are permitted to use the Work or Derivative Works thereof without additional consent, provided they comply with the other terms of this License."
There would be no way to control this once the model is uploaded to Ollama. Making a local GGUF would likely not be an issue whatsoever, for example by converting and quantizing it with llama.cpp on your own machine.
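Roughly, the local route looks like this. Just a sketch: the conversion script and quantize binary have been renamed between llama.cpp versions (convert_hf_to_gguf.py / llama-quantize in recent releases), and the model directory is a placeholder for wherever you cloned the HF repo:

python convert_hf_to_gguf.py ./<model-repo-dir> --outtype f16 --outfile normistral-7b-f16.gguf
./llama-quantize normistral-7b-f16.gguf normistral-7b.Q8_0.gguf Q8_0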
Yes, that's essentially what I'm asking about here. Does NorwAI have any intention of enforcing these criteria? Because IMHO that's pointless and not worth spending resources on, and in that case open distribution with a LICENSE.md file included should be perfectly fine. They can always go after companies that violate the license later if some company ends up profiting big while in breach, just like with software licensing in general.
And in the meantime, tinkerers and individual contributors would have an easier time trying out these models since they would be easier to spin up, which I have to believe is in NorwAI's best interest.
@svein-ek There are five quantized GGUF files in the repository that work with Ollama, if you follow their "Import from GGUF" procedure.
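In short, the import boils down to a Modelfile pointing at the downloaded file, plus an ollama create (the filename is one of the quantizations in the repo; the local model name is whatever you want to call it):

# Modelfile
FROM ./normistral-7b.Q8_0.gguf

ollama create normistral-7b -f Modelfile
ollama run normistral-7b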
Hi @arnes ,
Thank you for your question! I agree with @tollefj that we need to be very cautious about access control for our models, especially given the restricted datasets and agreements with our partners. Uploading the models to external platforms could increase the risk of unauthorized dissemination. But as @Gardenberg pointed out, you could try the GGUF versions of some of our models. Thank you for your understanding.
@Gardenberg
Thanks for the reply, but I was thinking more about the parameters used in the Modelfile. Importing the model into Ollama isn't the problem; getting it to respond sensibly is. I looked at the Modelfile for Mistral v0.1 with "ollama show --modelfile", and even though I used the same TEMPLATE and PARAMETER configuration, the model hallucinates heavily (it thinks it's a 13-year-old girl). I have also tried a temperature of 0.1 and a SYSTEM prompt stating that it should be helpful and concise, but it just rambles on... It would be nice if someone could share a working Modelfile.
Unfortunately, this doesn't work:
FROM normistral-7b.Q8_0.gguf
#TEMPLATE "[INST] {{ .System }} {{ .Prompt }} [/INST]
#"
#PARAMETER stop [INST]
#PARAMETER stop [/INST]
PARAMETER temperature 0.1
# (English: "You are a helpful LLM model that answers questions briefly and concisely")
SYSTEM Du er en behjelpelig LLM model som svarer kort og konsist på spørsmål
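For reference, the template and parameters I tried to copy came from the stock Mistral v0.1 model in the Ollama library, dumped with:

ollama show --modelfile mistral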
@svein-ek I see. Sadly, I have the same issues, and would be interested in an actually working example as well.
Has anyone figured out a working Modelfile in the last 11 days? I have the same experience as you. No fun with models if they don't work... :/