TinyLlama-CPT

Steps to run continued pretraining

  1. Install the environment as described in multilinguality_megatron/Readme.md

  2. Run the following commands:

conda activate towerllm-env
bash multilinguality_megatron/convert2megatron.sh
bash multilinguality_megatron/model_sharding.sh
bash multilinguality_megatron/continue_pretraining.sh

Arguments to set for each script:

convert2megatron.sh
        --megatron_model: Path where the Megatron weights will be saved
        --model: Path of the Hugging Face model (KshitijAmbilduke/extended_non_uniform_model_tinyllama)
        --size: 1 (for TinyLlama)
        --repo: Location of the multilinguality_megatron repository
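
For example, a hypothetical conversion call with placeholder paths (depending on how the script reads its arguments, these values may instead need to be edited inside the script):

bash multilinguality_megatron/convert2megatron.sh \
        --megatron_model=/path/to/megatron_weights \
        --model=KshitijAmbilduke/extended_non_uniform_model_tinyllama \
        --size=1 \
        --repo=/path/to/multilinguality_megatron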


model_sharding.sh
        --megatron_model: Path where the Megatron weights are saved
        --sharded_model: Path of the folder in which to save the model shards
        --tp: Number of shards to create (must equal the number of GPUs used)
        --vocab_size: 37005 (32000 original tokens + 5005 tokens added by the extended tokenizer)
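
A sketch of a sharding run with placeholder paths; --tp=4 assumes four GPUs are available, and the flag syntax is assumed to match the script's interface:

bash multilinguality_megatron/model_sharding.sh \
        --megatron_model=/path/to/megatron_weights \
        --sharded_model=/path/to/sharded_model \
        --tp=4 \
        --vocab_size=37005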


continue_pretraining.sh
        --data_path: "1 data/data_text_document" (sampling weight followed by the prefix of the preprocessed dataset)
        --megatron_model: Path of the folder containing the sharded model
        --model_dir: Path of the folder where checkpoints will be stored
        --tokenizer_path: Path of the extended tokenizer
        --tp: Number of shards (must match the value used for sharding)
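
A sketch of a pretraining launch with placeholder paths; --tp must match the number of shards created above, and the exact flag syntax is an assumption rather than taken from the script:

bash multilinguality_megatron/continue_pretraining.sh \
        --data_path="1 data/data_text_document" \
        --megatron_model=/path/to/sharded_model \
        --model_dir=/path/to/checkpoints \
        --tokenizer_path=/path/to/extended_tokenizer \
        --tp=4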