Commit bdca489
Parent(s): debb252
Update README.md
README.md CHANGED
@@ -7,7 +7,6 @@ sdk: static
 pinned: false
 ---
 
-
 
 
 # Introducing Lamini, the LLM Engine for Rapid Customization
@@ -18,6 +17,7 @@ Today, you can try out our open dataset generator for training instruction-follo
 
 [Sign up](https://lamini.ai/contact) for early access to our full LLM training module, including enterprise features like cloud prem deployments.
 
+
 
 # Training LLMs should be as easy as prompt-tuning 🦾
 Why is writing a prompt so easy, but training an LLM from a base model still so hard? Iteration cycles for finetuning on modest datasets are measured in months because it takes significant time to figure out why finetuned models fail. Conversely, prompt-tuning iterations are on the order of seconds, but performance plateaus in a matter of hours. Only a limited amount of data can be crammed into the prompt, not the terabytes of data in a warehouse.