Crystalcareai committed
Commit 30f9575
Parent(s): 09e3960
Update README.md
Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, derived from the high-performance Arcee-SuperNova model. Built to offer exceptional instruction-following capabilities and domain-specific adaptability, this model was distilled from a 405B parameter architecture using a cutting-edge distillation pipeline. The model leverages offline logits and an instruction dataset generated with EvolKit (https://github.com/arcee-ai/EvolKit), ensuring high accuracy and usability across a range of tasks.
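
The offline-logit distillation mentioned above can be sketched in a few lines. This is a hypothetical, minimal illustration of the general technique, not Arcee's actual pipeline: teacher logits are computed ahead of time and stored, and the student is trained to match them via KL divergence on temperature-softened distributions, so the large teacher never has to run during student training.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Illustrative only: in offline distillation, teacher_logits are
    precomputed and loaded from disk rather than produced by running
    the teacher model during training.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    # Scaling by temperature^2 keeps gradient magnitudes comparable
    # across different temperature settings.
    return temperature ** 2 * kl

# A student that matches the teacher exactly incurs zero loss;
# any mismatch gives a positive loss.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))              # 0.0
print(distillation_loss(teacher, [0.0, 0.0, 0.0]) > 0)  # True
```

In practice the loss above would be summed over vocabulary positions and combined with the usual cross-entropy on the instruction data; the function names here are illustrative, not part of any released tooling.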
Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world application scenarios, offering a smaller footprint without sacrificing the power needed for demanding generative AI tasks. It’s ideal for organizations looking to harness large-scale model capabilities in a more compact, efficient format.
# note
This README will be edited regularly up to September 10, 2024 (the day of release). Once the final README is in place, we will remove this note.