Librarian Bot: Add base_model information to model
This pull request aims to enrich the metadata of your model by adding [`tiiuae/falcon-7b`](https://huggingface.co/tiiuae/falcon-7b) as a `base_model` field, situated in the `YAML` block of your model's `README.md`.
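Concretely, after this change the front matter of `README.md` carries the new field, roughly like this (abbreviated excerpt; tags and datasets elided):

```yaml
---
license: apache-2.0
library_name: peft
base_model: tiiuae/falcon-7b
---
```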
How did we find this information? We extracted this information from the `adapter_config.json` file of your model.
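For the curious, the lookup is conceptually simple: PEFT records an adapter's originating checkpoint under the `base_model_name_or_path` key of `adapter_config.json`. Here is a minimal sketch of that lookup (not the bot's actual code; `find_base_model` is a hypothetical helper):

```python
import json

from huggingface_hub import hf_hub_download  # pip install huggingface_hub


def find_base_model(repo_id: str) -> str | None:
    """Return the base model recorded in a repo's PEFT adapter config, if any."""
    # Download (or reuse a cached copy of) the repo's adapter_config.json.
    config_path = hf_hub_download(repo_id=repo_id, filename="adapter_config.json")
    with open(config_path) as f:
        adapter_config = json.load(f)
    # PEFT adapters record their starting checkpoint under this key.
    return adapter_config.get("base_model_name_or_path")
```

For this model, such a lookup returns `tiiuae/falcon-7b`.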
**Why add this?** Enhancing your model's metadata in this way:
- **Boosts Discoverability** - It becomes straightforward to trace the relationships between various models on the Hugging Face Hub.
- **Highlights Impact** - It showcases the contributions and influences different models have within the community.
For a hands-on example of how such metadata can play a pivotal role in mapping model connections, take a look at [librarian-bots/base_model_explorer](https://huggingface.co/spaces/librarian-bots/base_model_explorer).
This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bot). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to [@davanstrien](https://huggingface.co/davanstrien).
If you want to automatically add `base_model` metadata to more of your models, you can use the [Librarian Bot](https://huggingface.co/librarian-bot) [Metadata Request Service](https://huggingface.co/spaces/librarian-bots/metadata_request_service)!
```diff
--- a/README.md
+++ b/README.md
@@ -1,8 +1,6 @@
 ---
-library_name: peft
 license: apache-2.0
-datasets:
-- iamtarun/python_code_instructions_18k_alpaca
+library_name: peft
 tags:
 - falcon
 - falcon-7b
@@ -15,6 +13,9 @@ tags:
 - copilot
 - python coding assistant
 - coding assistant
+datasets:
+- iamtarun/python_code_instructions_18k_alpaca
+base_model: tiiuae/falcon-7b
 ---
 ## Training procedure
 We finetuned Falcon-7B LLM on Python-Code-Instructions Dataset ([iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)) for 10 epochs or ~ 23,000 steps using [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
```
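Because the metadata now declares both `library_name: peft` and `base_model: tiiuae/falcon-7b`, downstream users can tell at a glance how to pair this adapter with its base checkpoint. A minimal loading sketch, assuming a placeholder repo id for the adapter (substitute this model's actual repository):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the declared base model first...
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Falcon shipped custom modelling code on older transformers versions
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

# ...then apply the PEFT adapter on top ("your-username/falcon-7b-python-adapter" is a placeholder).
model = PeftModel.from_pretrained(base, "your-username/falcon-7b-python-adapter")
```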