update README
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 license: mit
 license_link: >-
-  https://huggingface.co/
+  https://huggingface.co/deccan-ai/phi-2-insurance_qa-sft-lora-gguf-f16/resolve/main/LICENSE
 language:
 - en
 pipeline_tag: text-generation
@@ -16,22 +16,24 @@ tags:
 - sft
 - ggml
 - gguf
-
 datasets:
--
+- deccan-ai/insuranceQA-v2
 widget:
-- text:
-
-
+- text: |-
+    ### Instruction: What is the difference between health and life insurance?
+    #### Response:
+- text: |-
+    ### Instruction: Does Homeowners Insurance Cover Death Of Owner?
+    #### Response:
 ---
 ## Model Summary
 This model builds on the architecture of <a href="https://huggingface.com/microsoft/phi-2">Microsoft's Phi-2</a>, incorporating the LoRA [[1]](#1) paradigm for supervised fine-tuning on a high quality question answering dataset in the insurance domain.
-Thus, `
+Thus, `deccan-ai/phi-2-insurance_qa-sft-lora-gguf-f16` serves as a text generation model capable of answering questions around insurance.

 ## Dataset
 We utilise the InsuranceQA dataset [[2]](#2), which comprises 27.96K QA pairs related to the insurance domain.
 The content of this dataset consists of questions from real world users, the answers with high quality were composed by insurance professionals with deep domain knowledge.
-Since the dataset isn't available in a readable format on the web, we make it available on huggingface in a `jsonl` format, at <a href="https://huggingface.co/datasets/
+Since the dataset isn't available in a readable format on the web, we make it available on huggingface in a `jsonl` format, at <a href="https://huggingface.co/datasets/deccan-ai/insuranceQA-v2">deccan-ai/insuranceQA-v2</a>.


 ## Usage
@@ -74,7 +76,7 @@ For instance:
 ## Evaluation
 Coming Soon!

-## Limitations of `
+## Limitations of `deccan-ai/phi-2-insurance_qa-sft-lora`
 * Generate Inaccurate Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
 * Unreliable Responses to Instruction: It may struggle or fail to adhere to intricate or nuanced instructions provided by users.
 * Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
@@ -84,9 +86,9 @@ Coming Soon!


 ## License
-The model is licensed under the [MIT license](https://huggingface.co/
+The model is licensed under the [MIT license](https://huggingface.co/deccan-ai/phi-2-insurance_qa-sft-lora-gguf-f16/blob/main/LICENSE).


 ## Citations
 [1] <a id="1" href="https://arxiv.org/abs/2106.09685">Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021).</a></br>
-[2] <a id="2" href="https://ieeexplore.ieee.org/abstract/document/7404872/">Feng, Minwei, et al. "Applying deep learning to answer selection: A study and an open task." 2015 IEEE workshop on automatic speech recognition and understanding (ASRU). IEEE, 2015.</a>
+[2] <a id="2" href="https://ieeexplore.ieee.org/abstract/document/7404872/">Feng, Minwei, et al. "Applying deep learning to answer selection: A study and an open task." 2015 IEEE workshop on automatic speech recognition and understanding (ASRU). IEEE, 2015.</a>
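For a sense of how the instruction template introduced in the widget examples above is meant to be used, here is a minimal sketch of querying the f16 GGUF build with llama-cpp-python. The local filename, context size, and generation settings are assumptions, not values documented in this card.

```python
# Minimal sketch: prompting the f16 GGUF build with llama-cpp-python,
# using the "### Instruction / #### Response" template from the widget examples.
# The local filename and generation settings below are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-2-insurance_qa-sft-lora-gguf-f16.gguf",  # assumed local download
    n_ctx=2048,  # Phi-2's context length
)

prompt = (
    "### Instruction: What is the difference between health and life insurance?\n"
    "#### Response:"
)

result = llm(
    prompt,
    max_tokens=256,
    stop=["### Instruction:"],  # stop if the model starts a new self-generated question
)
print(result["choices"][0]["text"].strip())
```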
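The Model Summary describes LoRA-based supervised fine-tuning of Phi-2, but the card does not list hyperparameters. The snippet below is only a hypothetical sketch of such a setup with transformers and peft; the rank, alpha, dropout, and target modules are illustrative guesses, not the values used for this model.

```python
# Hypothetical sketch of a LoRA supervised fine-tuning setup over Phi-2.
# None of the hyperparameters below are taken from this model card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=16,                 # illustrative rank
    lora_alpha=32,        # illustrative scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # attention projections in the Phi architecture
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights remain trainable
```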
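Since the Dataset section publishes the InsuranceQA pairs as jsonl under deccan-ai/insuranceQA-v2, a minimal way to pull them is with the datasets library. The split and field names are not described in the card, so this sketch only inspects whatever schema the repository exposes.

```python
# Minimal sketch: loading the InsuranceQA jsonl release from the Hub.
# Split names and record fields are not documented here, so inspect them first.
from datasets import load_dataset

ds = load_dataset("deccan-ai/insuranceQA-v2")
print(ds)  # shows the available splits and their columns

first_split = next(iter(ds.values()))
print(first_split[0])  # peek at a single QA record
```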