---
language:
  - en
license: llama3
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
base_model: nvidia/Llama3-ChatQA-1.5-8B
datasets:
  - zjunlp/Mol-Instructions
---

- Developed by: kevinkawchak
- License: llama3
- Finetuned from model: nvidia/Llama3-ChatQA-1.5-8B
- Finetuned using dataset: zjunlp/Mol-Instructions, cc-by-4.0 (loading sketch below)
- Dataset identification: Molecule-oriented Instructions
- Dataset function: Description guided molecule design
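
The dataset can be pulled with the Hugging Face `datasets` library. A minimal sketch, assuming the "Molecule-oriented Instructions" configuration name from the dataset card and that the dataset's loading script is trusted:

```python
# Minimal loading sketch (config name and trust_remote_code are assumptions
# based on the zjunlp/Mol-Instructions dataset card, not verified here).
from datasets import load_dataset

mol_instructions = load_dataset(
    "zjunlp/Mol-Instructions",
    "Molecule-oriented Instructions",  # subset containing description guided molecule design
    trust_remote_code=True,            # the dataset ships its own loading script
)
print(mol_instructions)
```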

May 07, 2024: Additional Fine-tunings, Built with Meta Llama 3

  1. gradientai/Llama-3-8B-Instruct-Gradient-1048k Model
    Llama 3 8B update: context length extended from 8K to 1040K; highest RAM consumption of the three runs
    "What is the structure for adenine?" Verbose SELFIES structure, but logical
    Fine-tuned on Mol-Instructions, float16, GitHub, 610 seconds, A100 40GB

  2. NousResearch/Hermes-2-Pro-Llama-3-8B Model
    Llama 3 8B update: cleaned OpenHermes 2.5 data plus a new Function Calling and JSON Mode dataset
    "What is the structure for adenine?" Concise SELFIES structure, but less logical
    Fine-tuned on Mol-Instructions, float16, GitHub, 599 seconds, A100 40GB

  3. nvidia/Llama3-ChatQA-1.5-8B Model
    Llama 3 8B update: ChatQA-1.5 recipe to enhance tabular and arithmetic calculation capability
    "What is the structure for adenine?" Verbose SELFIES structure and less logical
    Fine-tuned on Mol-Instructions, float16, GitHub, 599 seconds, A100 40GB (prompting sketch below)
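
Each run was probed with the adenine prompt listed above. A minimal prompting sketch with the `transformers` library; the repository id is a placeholder for one of this card's fine-tuned uploads, and the Alpaca-style template from the training notebook may produce more faithful outputs than a raw prompt string:

```python
# Hypothetical inference sketch; swap model_id for the actual fine-tuned repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevinkawchak/llama3-mol-finetune"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "What is the structure for adenine?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```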

Responses were verified against the Wikipedia SMILES representation of adenine and an approximate SMILES-to-SELFIES conversion generated in a Python notebook (see the sketch below).
Fine-tunings were performed using the Apache-2.0 licensed Unsloth 'Alpaca + Llama-3 8b full example' Colab notebook.
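
The SMILES-to-SELFIES check can be reproduced with the `selfies` package. A minimal sketch; the adenine SMILES string below is one common aromatic form and is an assumption rather than a quote from the original notebook:

```python
# Round-trip check between SMILES and SELFIES using the selfies package.
import selfies as sf

adenine_smiles = "Nc1ncnc2[nH]cnc12"             # adenine (6-aminopurine), aromatic SMILES
adenine_selfies = sf.encoder(adenine_smiles)     # SMILES -> SELFIES
round_trip_smiles = sf.decoder(adenine_selfies)  # SELFIES -> SMILES

print(adenine_selfies)
print(round_trip_smiles)
```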

Primary Study

The following are modifications or improvements to the original notebooks. Please refer to the authors' models for the published primary work. Cover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3.

A 4-bit quantization of Meta-Llama-3-8B-Instruct was used to reduce training memory requirements when fine-tuning on the zjunlp/Mol-Instructions dataset (1-2). In addition, the minimum LoRA rank value was used to reduce the overall size of the created models. Specifically, the molecule-oriented 'description guided molecule design' instruction task was implemented to answer general questions and general biochemistry questions. General questions were answered with high accuracy, while biochemistry-related questions returned SELFIES structures with limited accuracy.
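
A sketch of the 4-bit load and low-rank LoRA setup, following the public Unsloth API; the rank value, sequence length, and target modules are assumptions based on the Unsloth example notebook rather than the exact settings used here:

```python
# 4-bit base model plus a small LoRA adapter (hyperparameters are assumed
# notebook-style defaults, not the verified settings of this fine-tune).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",  # pre-quantized 4-bit base
    max_seq_length=2048,
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=8,  # low LoRA rank keeps the saved adapter small
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)
```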

The notebook featured Torch and Hugging Face libraries using the Unsloth llama-3-8b-Instruct-bnb-4bit quantization model. Training loss decreased steadily from 1.97 to 0.73 over 60 steps. Additional testing on the appropriate level of compression and on hyperparameter adjustments for accurate SELFIES chemical structure outputs remains relevant, as shown in the GitHub notebook for research purposes (3). A 16-bit and a reduced 4-bit version were uploaded to Hugging Face (4-5).
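
A sketch of the 60-step training run, using TRL's SFTTrainer in the notebook-era API; the batch size, learning rate, and other settings are assumptions based on the Unsloth Colab defaults, and the dataset is assumed to be pre-formatted into an Alpaca-style "text" column:

```python
# 60-step supervised fine-tuning run (hyperparameters are assumed defaults).
from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model=model,                  # LoRA model from the sketch above
    tokenizer=tokenizer,
    train_dataset=train_dataset,  # Mol-Instructions examples with a formatted "text" column
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,             # loss fell from 1.97 to 0.73 over these steps
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```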

Update 04/24: The number of training steps was increased to further decrease loss, while maintaining reduced memory requirements through quantization and reduced model size through LoRA. This allowed for significantly improved responses to biochemistry-related questions; the resulting models were saved at the following sizes: 8.03B and 4.65B parameters (saving sketch below). GitHub.
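
A sketch of saving the merged checkpoints at the two sizes mentioned above, using Unsloth's merged-save helpers; output directories and repository ids are illustrative, and the 4-bit merge may require Unsloth's forced variant depending on the library version:

```python
# Save LoRA-merged checkpoints locally (directory names are placeholders).
model.save_pretrained_merged("llama3-mol-16bit", tokenizer, save_method="merged_16bit")
model.save_pretrained_merged("llama3-mol-4bit", tokenizer, save_method="merged_4bit")

# Or push directly to the Hub (hypothetical repository ids):
# model.push_to_hub_merged("kevinkawchak/llama3-mol-16bit", tokenizer, save_method="merged_16bit")
```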

References:

  1. unsloth: https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit
  2. zjunlp: https://huggingface.co/datasets/zjunlp/Mol-Instructions
  3. github: https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/blob/main/Code/Drug%20Discovery/Meta-Llama-3/Meta-Llama-3-8B-Instruct-Mol.ipynb
  4. hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol16
  5. hugging face: https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-Mol04

@inproceedings{fang2023mol,
  author    = {Yin Fang and Xiaozhuan Liang and Ningyu Zhang and Kangwei Liu and
               Rui Huang and Zhuo Chen and Xiaohui Fan and Huajun Chen},
  title     = {Mol-Instructions: {A} Large-Scale Biomolecular Instruction Dataset
               for Large Language Models},
  booktitle = {{ICLR}},
  publisher = {OpenReview.net},
  year      = {2024},
  url       = {https://openreview.net/pdf?id=Tlsdsb6l9n}
}

This Llama model was trained with Unsloth and Hugging Face's TRL library.