Update README.md
README.md CHANGED
@@ -4,11 +4,11 @@ language:
 - en
 pipeline_tag: question-answering
 ---
-#
+# Llama-mt-lora
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-This model is fine-tuned with LLaMA with 8 Nvidia A100-80G GPUs using 3,000,000
+This model is fine-tuned from LLaMA on 8 Nvidia A100-80G GPUs, using 3,000,000 groups of conversations about mathematics between students and facilitators on Algebra Nation (https://www.mathnation.com/). Llama-mt-lora consists of 32 layers and over 7 billion parameters, consuming up to 13.5 gigabytes of disk space. Researchers can experiment with and fine-tune the model to build conversational math AI that generates effective responses in a mathematical context.
 ### Here is how to use it with text in HuggingFace
 ```python
 import torch
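The diff cuts the Python example off after `import torch`. Below is a minimal sketch of how a LoRA fine-tune like this is commonly loaded with `transformers` and `peft`; the repository ids (`huggyllama/llama-7b` for the base weights, `Llama-mt-lora` for the adapter) are illustrative assumptions, not paths confirmed by this commit.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Assumed base checkpoint; replace with the LLaMA-7B weights the adapter was trained on.
BASE_ID = "huggyllama/llama-7b"
# Assumed adapter id; replace with the actual Llama-mt-lora repository path.
ADAPTER_ID = "Llama-mt-lora"

# Load the 7B base model in half precision and spread it across available devices.
base_model = LlamaForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(BASE_ID)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Prompt in the student/facilitator style the model card describes.
prompt = "Student: How do I solve 2x + 3 = 11?\nFacilitator:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the adapter stores only the low-rank deltas, the base LLaMA weights must be available separately; `PeftModel.from_pretrained` merges them at load time without modifying the base checkpoint on disk.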