---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
datasets:
- zjunlp/Mol-Instructions
---

# Uploaded model

- **Developed by:** kevinkawchak
- **License:** llama3
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
- **Finetuned using dataset:** zjunlp/Mol-Instructions
- **Dataset identification:** Molecule-oriented Instructions
- **Dataset function:** Description guided molecule design

[Cover Image](https://drive.google.com/file/d/1J-spZMzLlPxkqfMrPxvtMZiD2_hfcGyr/view?usp=sharing), [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. 
<br>

A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset (1-2). In addition, the minimum LoRA rank was used to reduce the overall size of the resulting models. Specifically, the molecule-oriented instructions task "Description guided molecule design" was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned SELFIES structures, but with limited accuracy.
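
Below is a minimal sketch of this setup using the Unsloth API. The sequence length, target modules, and other hyperparameters are assumptions rather than the exact values from the training notebook; only the minimum LoRA rank reflects the statement above.

```python
# Sketch only: load the 4-bit Unsloth base model and attach a small LoRA adapter.
# r=1 reflects the "minimum LoRA rank" mentioned above; the remaining values are assumed.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",  # 4-bit base model (reference 1)
    max_seq_length=2048,   # assumed
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=1,                   # minimum LoRA rank, keeping the adapter as small as possible
    lora_alpha=16,         # assumed
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing=True,
)
```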

The notebook used PyTorch and Hugging Face libraries with the Unsloth llama-3-8b-Instruct-bnb-4bit quantized model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing of the appropriate level of compression, and of hyperparameter adjustments, for accurate SELFIES chemical-structure outputs remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face (4-5).
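
The sketch below outlines the training and export steps with TRL's SFTTrainer, in the style of the standard Unsloth notebooks. The dataset subset/split names, batch size, learning rate, and output paths are assumptions; only the 60-step budget comes from the run described above.

```python
# Sketch only: 60-step supervised fine-tune, then export of merged 16-bit and 4-bit weights.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Subset and split names are assumptions; see the zjunlp/Mol-Instructions dataset card.
# Records would also need to be formatted into a single "text" field in the Llama 3 chat format.
dataset = load_dataset("zjunlp/Mol-Instructions", "Molecule-oriented Instructions",
                       split="description_guided_molecule_design")

trainer = SFTTrainer(
    model=model,                      # model/tokenizer from the LoRA setup sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",        # assumed field name after formatting
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # assumed
        gradient_accumulation_steps=4,   # assumed
        max_steps=60,                    # matches the 60-step loss curve reported above
        learning_rate=2e-4,              # assumed
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()

# Unsloth helpers for exporting merged weights; save_method values per the Unsloth documentation.
model.save_pretrained_merged("Meta-Llama-3-8B-Instruct-LoRA-Mol16", tokenizer, save_method="merged_16bit")
model.save_pretrained_merged("Meta-Llama-3-8B-Instruct-LoRA-Mol04", tokenizer, save_method="merged_4bit")
```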

References:
1) Unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
2) zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
3) GitHub: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
4) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
5) Hugging Face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04
<br>
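
As a usage illustration (not taken from the original notebook), the uploaded 16-bit model can be queried with the standard transformers chat template; the prompt wording below is an assumption.

```python
# Sketch only: query the uploaded 16-bit model (reference 4) with a molecule-design prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative prompt; the exact instruction phrasing used during evaluation is not documented here.
messages = [{"role": "user",
             "content": "Give me a molecule that satisfies the description: a small aromatic carboxylic acid."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```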

```bibtex
@inproceedings{fang2023mol,
  author       = {Yin Fang and
                  Xiaozhuan Liang and
                  Ningyu Zhang and
                  Kangwei Liu and
                  Rui Huang and
                  Zhuo Chen and
                  Xiaohui Fan and
                  Huajun Chen},
  title        = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset
                  for Large Language Models},
  booktitle    = {{ICLR}},
  publisher    = {OpenReview.net},
  year         = {2024},
  url          = {https://openreview.net/pdf?id=Tlsdsb6l9n}
}
```

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)