Update README.md
README.md
CHANGED
@@ -5,7 +5,7 @@ language:
library_name: transformers
pipeline_tag: text-generation
---
-The model described by the provided code, named "Neuronovo/neuronovo-7B-v0.1," is a sophisticated and fine-tuned version of a large language model, originally based on the "teknium/OpenHermes-2.5-Mistral-7B." This model exhibits several distinct characteristics and functionalities as derived from the code snippet:
+The model described by the provided code, named "Neuronovo/neuronovo-9B-v0.1," is a sophisticated and fine-tuned version of a large language model, originally based on the "teknium/OpenHermes-2.5-Mistral-7B." This model exhibits several distinct characteristics and functionalities as derived from the code snippet:

1. **Dataset and Preprocessing**: It is trained on a dataset named "Intel/orca_dpo_pairs," which is likely a specialized dataset for dialogue systems. The data is preprocessed to format dialogues, with specific attention to system messages, user queries, chosen answers, and rejected answers.
@@ -21,7 +21,7 @@ The model described by the provided code, named "Neuronovo/neuronovo-7B-v0.1," is

7. **Special Features**: The use of LoRA, DPO training, and specific fine-tuning methods highlight the model's advanced capabilities in adapting large-scale language models to specific tasks or datasets while maintaining computational efficiency.

-In summary, "Neuronovo/neuronovo-7B-v0.1" is a highly specialized, efficient, and capable large language model fine-tuned for advanced language generation tasks, particularly in the context of dialogues or interactions, leveraging cutting-edge techniques in NLP model adaptation and training.
+In summary, "Neuronovo/neuronovo-9B-v0.1" is a highly specialized, efficient, and capable large language model fine-tuned for advanced language generation tasks, particularly in the context of dialogues or interactions, leveraging cutting-edge techniques in NLP model adaptation and training.
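The preprocessing described in point 1 of the card is not part of this change. As a rough illustration only, the sketch below shows one common way to map "Intel/orca_dpo_pairs" rows into the prompt/chosen/rejected triples that DPO-style trainers expect; the column names (system, question, chosen, rejected) and the ChatML-style template are assumptions based on the base model's conventions, not details taken from this commit.

```python
# Hypothetical preprocessing sketch for point 1: assumes the dataset exposes
# "system", "question", "chosen" and "rejected" columns and that a
# ChatML-style prompt (as used by OpenHermes-2.5) is desired.
from datasets import load_dataset

def format_for_dpo(row):
    prompt = (
        f"<|im_start|>system\n{row['system']}<|im_end|>\n"
        f"<|im_start|>user\n{row['question']}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    return {
        "prompt": prompt,
        "chosen": row["chosen"] + "<|im_end|>",
        "rejected": row["rejected"] + "<|im_end|>",
    }

dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(format_for_dpo, remove_columns=dataset.column_names)
```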
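Point 7 refers to LoRA adapters combined with DPO training, but the training script itself is not included in the diff. The following is a minimal sketch of that combination using peft and trl, building on the dataset prepared above; every hyperparameter (LoRA rank, beta, learning rate, step count) is a placeholder rather than the value used for this model, and argument names can differ slightly between trl versions.

```python
# Minimal LoRA + DPO sketch (peft + trl) illustrating the setup in point 7.
# All hyperparameters are placeholders; `dataset` is the prompt/chosen/rejected
# dataset built in the preprocessing sketch above.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig
from trl import DPOConfig, DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

peft_config = LoraConfig(
    r=16,                      # adapter rank (placeholder)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = DPOConfig(
    output_dir="neuronovo-dpo",
    beta=0.1,                  # DPO preference temperature (placeholder)
    per_device_train_batch_size=2,
    learning_rate=5e-5,
    max_steps=200,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,     # from the preprocessing sketch
    tokenizer=tokenizer,       # newer trl versions call this processing_class
    peft_config=peft_config,
)
trainer.train()
```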
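Finally, since the card metadata declares library_name: transformers and pipeline_tag: text-generation, basic usage would look like the sketch below; the prompt and sampling settings are purely illustrative.

```python
# Basic inference sketch implied by the card metadata
# (library_name: transformers, pipeline_tag: text-generation).
from transformers import pipeline

generator = pipeline("text-generation", model="Neuronovo/neuronovo-9B-v0.1")

# Prompt and sampling parameters are illustrative, not from the model card.
result = generator(
    "Explain what Direct Preference Optimization does, in one paragraph.",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```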