Update README.md with license information
Hi, I'm Chen, a DevRel specialist from 01.AI.
Today I'm sending you this PR to help you update the model license to apache-2.0, along with a recommendation for Yi derivatives.
1. License Update:
Since the license of all Yi Series models has been updated from yi-license to apache-2.0, this PR updates it here accordingly.
The apache-2.0 license allows freer and more flexible use and distribution, promoting open collaboration and innovation.
It is a good choice for making your models widely available with reliable, high-quality access. (https://www.apache.org/licenses/LICENSE-2.0)
If this looks good to you, feel free to update the license of any other Yi derivatives you maintain to apache-2.0 in case I missed them.
2. Recommendation for Yi Derivatives:
All Yi Series models are now licensed under apache-2.0. It is recommended that Yi derivatives mention the specific Yi models they're based on (e.g., in the Model Card) to align with the requirements of apache-2.0, as sketched below.
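As a minimal sketch of that recommendation, a derivative's README front matter could declare both the new license and the upstream Yi model it builds on. The `base_model` value below is illustrative; point it at whichever Yi checkpoint your model is actually derived from.

```yaml
---
# Hypothetical front matter for a Yi derivative (values are illustrative).
license: apache-2.0
base_model: 01-ai/Yi-34B-200K   # the specific Yi model this derivative is based on
pipeline_tag: text-generation
library_name: gguf
---
```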
Thanks for your continued support and contributions to Yi models.
@@ -1,9 +1,7 @@
 ---
-license: other
-license_name: yi-license
-license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
 pipeline_tag: text-generation
 library_name: gguf
+license: apache-2.0
 ---
 GGUF importance matrix (imatrix) quants for https://huggingface.co/jondurbin/bagel-34b-v0.4
 The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).