Full Model?
Are there plans to upload the full model or no?
Thanks
good luck finding it
Might be a more official release later down the line?
Mistral full model
It’s confirmed now by the CEO of MistralAI: this model was leaked, and this is a quantized version of it.
Bummer... Please consider supporting this wonderful company. Any damage to their revenue stream affects what they can produce for the open source community.
That's what I do not understand.
If the model is a leaked version of Mistral's next model... surely the leak wasn't quantized-only, right? I mean, the leaker must have had access to the full model. Why share only the quants? I want to make my own quants and eval the full model.
- This is apparently one of the early models they fine-tuned on Llama-2, so they have much better models now (said the CEO).
- He also said that in the early days they shared quantized (and watermarked) models with some customers.
So the person who shared this most likely doesn't have the full model. (I would assume nobody would send a 100GB+ model to a customer for testing!) Anyway, this is by no means going to damage their revenue. People who would host a GGUF of this size, knowing it's leaked and has no license, were not going to pay to start with. If anything, it shows the Mistral team was capable of creating good models even in the early days, before MoE, if this really is a fine-tune of Llama-2. Kudos to them.
Can confirm it's definitely not as good as the current up-to-date Mistral Medium on Poe, but it's still very close, as it's an earlier version I assume is based on Llama 2. The area where it's consistently worse than the model on Poe is Japanese, but it's still better than any other open-source general model at its size, to my knowledge. I really hope they release the final weights; I wonder if the final weights are MoE, unlike this model. But there are rumors they are unfortunately going down the same path as "Open"AI, though who knows how true that is. Worst comes to worst, this model is quite good at least, even if it's all we get.
But there are rumors they are unfortunately going down the same path as "Open"AI, though who knows how true that is.
I doubt that's what's happening if they sent quantized versions over to other companies. They're not done yet either: I believe they're currently working on a Mistral Large, at least that's what I read.