---
language:
- en
base_model:
- meta-llama/Llama-3.1-70B-Instruct
tags:
- finance
- Llama3.1
---

# Llama-3.1-Omni-FinAI-70B Model Card

## Model Overview (Built with Llama)
Llama-3.1-Omni-FinAI-70B is a pre-trained large language model optimized for finance-specific fine-tuning. Built on the Llama 3.1 70B architecture, it was pre-trained on 143 billion tokens of high-quality financial text and provides a foundation for further fine-tuning on specialized financial analysis tasks.

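The snippet below is a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub and loads through the standard `transformers` API. The repository ID shown is a placeholder, not a confirmed path, and a 70B model in bfloat16 needs roughly 140 GB of GPU memory (or quantized loading; see the sketch under Limitations).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID: replace with the actual hub path of this model.
MODEL_ID = "ORG_NAME/Llama-3.1-Omni-FinAI-70B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory
    device_map="auto",           # shard layers across available GPUs
)

prompt = "Summarize the key risk factors typically disclosed in a 10-K filing."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
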
## Model Details
- **Base Model**: Llama-3.1-70B-Instruct
- **Training Data**:
  - SEC 10-K, 10-Q, and 8-K filings
  - Reuters news data (RCV1, TRC2)
  - Finance-specific papers from arXiv
  - Financial discussions from Reddit
  - Wikipedia
- **Primary Use Case**: Pre-training for finance-specific fine-tuning, allowing users to leverage Llama-3.1-Omni-FinAI-70B's foundational financial language understanding.

## Use Cases
Llama-3.1-Omni-FinAI-70B is designed as a base model for finance-specific fine-tuning, supporting applications such as the following (a fine-tuning sketch for one of these tasks appears after the list):
- Sentiment Analysis
- Stock Movement Prediction
- QA Instruction
- Summarization
- Predictive Financial Analysis

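As a rough illustration of the intended workflow, the sketch below applies parameter-efficient (LoRA) fine-tuning for sentiment classification. LoRA via the `peft` library is used here only as one common approach, not as the procedure used by the model authors; the repository ID, the tiny in-memory dataset, and all hyperparameters are illustrative assumptions.

```python
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_ID = "ORG_NAME/Llama-3.1-Omni-FinAI-70B"  # placeholder repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach LoRA adapters so only a small fraction of the weights is updated.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# Toy labeled examples; replace with a real financial sentiment corpus.
rows = [
    {"sentence": "The company raised its full-year revenue guidance.", "label": "positive"},
    {"sentence": "Quarterly earnings missed analyst expectations.", "label": "negative"},
]

def to_features(row):
    text = (f"Classify the sentiment of this financial statement.\n"
            f"Statement: {row['sentence']}\nSentiment: {row['label']}")
    return tokenizer(text, truncation=True, max_length=512)

train_data = Dataset.from_list(rows).map(to_features, remove_columns=["sentence", "label"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="omni-finai-sentiment-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, one would train on a full labeled corpus (for example, earnings-call or financial-news sentiment data) and tune the LoRA rank and learning rate for the task.
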
## Training Process
Llama-3.1-Omni-FinAI-70B was trained with the NVIDIA NeMo framework on 64 H100 GPUs over the diverse financial corpus described above, giving it a robust base for fine-tuning in finance-related applications.

## Limitations
This model is pre-trained as a foundation for finance-specific fine-tuning and may require additional fine-tuning for specialized applications. Because of its size, substantial computational resources are recommended for deployment.

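For constrained deployments, one workaround is quantized loading via bitsandbytes, sketched below. This is general `transformers` functionality rather than a configuration validated for this checkpoint, and the repository ID is again a placeholder; output quality should be re-checked after quantization.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "ORG_NAME/Llama-3.1-Omni-FinAI-70B"  # placeholder repository ID

# 4-bit NF4 quantization roughly quarters the memory footprint of the weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)
```
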
## License
This model is licensed under the Llama 3.1 Community License.

## Citation
If you use the Llama-3.1-Omni-FinAI-70B model, please cite as follows:

> Chiu, I-Chan, Hung, Mao-Wei, Chen, Zih-Ching, Chiu, Jun-wei, Lin, Yang-Hsien, Lee, Cheng-Kuang, Huang, Eddie TC, and See, Simon, "Omni-FinAI: Unlocking Financial Disclosure Insights" (October 30, 2024). Available at SSRN: [https://ssrn.com/abstract=5004298](https://ssrn.com/abstract=5004298)