Model Card for traclm-v2-7b-base
A Llama-2-7b finetune that has undergone additional pretraining on a dataset of unclassified, publicly available U.S. Army doctrine.
Model Details
Model Description
This model is a research project exploring whether a pretrained LLM can acquire tangible, domain-specific knowledge of the U.S. Army domain through continued pretraining on Army doctrine.
- Developed by: The Research and Analysis Center - Monterey, Army Futures Command
- License: Llama-2 Community License
- Model Type: LlamaForCausalLM
- Finetuned from model: Llama-2-7b
Model Sources
- Paper: TBP
- Demo: TBP
Downstream Use
This is a raw language model that has not undergone instruction-based finetuning. As a result, its output is unreliable and unsuitable for direct downstream application. Additional finetuning is strongly recommended before use.
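Because the checkpoint is a standard LlamaForCausalLM, it can be loaded with the Hugging Face transformers library for inspection or further finetuning. The sketch below is illustrative only; the model path, precision setting, and prompt are assumptions, and raw completions from the base model should not be relied upon.

```python
# Minimal sketch of loading the checkpoint for inspection or further finetuning.
# The model path below is an assumption; substitute the actual repository id or local path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/traclm-v2-7b-base"  # assumed path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bf16 training precision
    device_map="auto",
)

# Raw base-model completion; output is unreliable without instruction tuning.
inputs = tokenizer(
    "The purpose of the military decision-making process is",  # illustrative prompt
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```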
Out-of-Scope Use
The creation of this model constitutes academic research in partnership with the Naval Postgraduate School. The purpose of this research is to inform future DoD experimentation regarding the development and application of domain-specific language models. Direct application to downstream military tasks is out of scope.
Training Details
Training Data
Link to Dataset Card TBP.
In addition to Llama-2's original pretraining data, this model was further trained for 5 epochs on 90M tokens sourced from unclassified U.S. Army doctrine. See below for additional details on training.
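As a rough illustration of how a corpus token count of this kind can be measured, the sketch below runs the Llama-2 tokenizer over a directory of plain-text documents. The directory name and file layout are assumptions for illustration and do not reflect the actual dataset.

```python
# Illustrative sketch: estimate a corpus token count with the Llama-2 tokenizer.
# The "army_doctrine_corpus" directory is an assumed layout, not the released dataset.
from pathlib import Path
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

total_tokens = 0
for doc in Path("army_doctrine_corpus").glob("*.txt"):
    text = doc.read_text(encoding="utf-8")
    total_tokens += len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(f"Corpus size: ~{total_tokens / 1e6:.1f}M tokens")
```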
Training Procedure
The model was trained using Open Access AI Collective's Axolotl framework and Microsoft's DeepSpeed framework for model/data parallelism.
Training Hardware
Training was conducted on a single compute node of NPS's Hamming HPC Center. The compute node contained 8x NVIDIA A40 GPUs.
Training Hyperparameters
- base_model: meta-llama/Llama-2-7b-hf
- base_model_config: meta-llama/Llama-2-7b-hf
- model_type: LlamaForCausalLM
- tokenizer_type: LlamaTokenizer
- sequence_len: 4096
- pad_to_sequence_len: true
- gradient_accumulation_steps: 1
- micro_batch_size: 4
- eval_batch_size: 4
- num_epochs: 5
- lr_scheduler: cosine
- learning_rate: 0.00003
- bf16: true
- gradient_checkpointing: true
- flash_attention: true
- warmup_steps: 50
- lr_quadratic_warmup: true
- special_tokens: {bos_token: "<s>", eos_token: "</s>", unk_token: "<unk>"}
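For illustration, the sketch below shows roughly how the optimization settings listed above map onto a standard PyTorch/transformers setup. It is an approximation only: training was actually driven through Axolotl and DeepSpeed, the lr_quadratic_warmup option is Axolotl-specific (the standard transformers cosine scheduler warms up linearly), and the stand-in model and total step count are placeholders.

```python
# Approximate sketch of the optimizer and schedule implied by the hyperparameters above.
# Not the actual training script, which was generated and run by Axolotl with DeepSpeed.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the loaded LlamaForCausalLM

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-5,             # learning_rate: 0.00003
    betas=(0.9, 0.999),
    eps=1e-8,
)

# Effective batch: 4 (micro_batch_size) x 8 GPUs x 1 (gradient_accumulation_steps)
# = 32 sequences per step, i.e. 32 x 4096 = 131,072 tokens per optimizer step.
total_steps = 1000  # placeholder; in practice derived from dataset size and 5 epochs
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=50,          # warmup_steps: 50
    num_training_steps=total_steps,
)
```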
DeepSpeed Configuration
```json
{
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    },
    "contiguous_gradients": true,
    "overlap_comm": true
  },
  "bf16": {
    "enabled": "auto"
  },
  "fp16": {
    "enabled": "auto",
    "auto_cast": false,
    "loss_scale": 0,
    "initial_scale_power": 32,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": [0.9, 0.999],
      "eps": 1e-8,
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupDecayLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto",
      "total_num_steps": "auto"
    }
  },
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "wall_clock_breakdown": false
}
```
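When a ZeRO-2 configuration like the one above is used outside Axolotl, it is typically passed to the Hugging Face Trainer by file path. The sketch below is an illustration under assumptions: the config file name and output directory are placeholders, and the actual run wired DeepSpeed in through Axolotl rather than this way.

```python
# Sketch of wiring a DeepSpeed ZeRO-2 JSON config into a transformers training run.
# File name and output directory are assumed placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="traclm-v2-7b-base-out",   # assumed output directory
    per_device_train_batch_size=4,        # micro_batch_size
    gradient_accumulation_steps=1,
    num_train_epochs=5,
    learning_rate=3e-5,
    warmup_steps=50,
    lr_scheduler_type="cosine",
    bf16=True,
    gradient_checkpointing=True,
    deepspeed="deepspeed_zero2.json",     # path to the JSON config above (assumed filename)
)
```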
Model Card Contact
MAJ Daniel C. Ruiz ([email protected])