kevinkawchak committed
Commit
5764de9
1 Parent(s): e12fb78

Update README.md

Files changed (1):
  1. README.md +4 -2
README.md CHANGED
@@ -16,13 +16,14 @@ datasets:
 # Uploaded model
 
 - **Developed by:** kevinkawchak
-- **License:** apache-2.0
+- **License:** llama3
 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
 - **Finetuned using dataset:** zjunlp/Mol-Instructions
 - **Dataset identification:** Molecule-oriented Instructions
 - **Dataset function:** Description guided molecule design
 
-[Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing) <br>
+[Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing), [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3.
+<br>
 
 A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset. (1-2) In addition, the minimum LoRA rank value was used to reduce the overall size of the resulting models. Specifically, description-guided molecule design from the molecule-oriented instructions was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned SELFIES structures, but with limited accuracy.
@@ -34,6 +35,7 @@ References:
 3) GitHub: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
 4) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
 5) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
+<br>
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
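As context for the "minimum LoRA rank" remark in the README above, here is a rough back-of-envelope sketch of how rank drives adapter size. The shapes are assumed Llama-3-8B attention projections only (hidden size 4096, grouped-query KV projections of 1024, 32 layers), and the rank values 4 and 16 are inferred from the Mol04/Mol16 model names; none of this is stated in the commit itself.

```python
# LoRA adds two low-rank factors per adapted weight W (d_out x d_in):
# A (r x d_in) and B (d_out x r), i.e. r * (d_in + d_out) trainable params.

def lora_params(shapes, r):
    """Total trainable LoRA parameters for the given (d_out, d_in) shapes."""
    return sum(r * (d_in + d_out) for d_out, d_in in shapes)

# Assumed Llama-3-8B per-layer attention projection shapes:
# q_proj, k_proj, v_proj, o_proj (KV projections are 1024-wide under GQA).
ATTN = [(4096, 4096), (1024, 4096), (1024, 4096), (4096, 4096)]
LAYERS = 32

for r in (4, 16):
    total = LAYERS * lora_params(ATTN, r)
    print(f"rank {r:2d}: ~{total / 1e6:.1f}M trainable parameters")
```

Under these assumptions the adapter size scales linearly with rank, so a rank-4 adapter carries roughly a quarter of the trainable parameters of a rank-16 one, which is consistent with the README's stated goal of minimizing the size of the created models.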