Base model: https://huggingface.co/TheBloke/wizardLM-7B-HF/tree/main

Trained on: https://huggingface.co/datasets/squad

Fine-tuned for about 4,500 steps (1 epoch) with a batch size of 8, 2 gradient-accumulation steps, and LoRA adapters applied to all layers.
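
A minimal sketch of this setup with transformers + peft. Only the hyperparameters stated above (batch size 8, 2 accumulation steps, 1 epoch, LoRA on all layers) come from this card; the LoRA rank/alpha/dropout, target module names, and output directory are illustrative assumptions.

```python
# Hypothetical reconstruction of the fine-tuning configuration.
# Stated on this card: batch size 8, 2 accumulation steps, 1 epoch,
# LoRA on all layers. Everything else below is an assumption.
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

base = "TheBloke/wizardLM-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, load_in_8bit=True, device_map="auto"  # 8-bit base, QLoRA-style
)
model = prepare_model_for_kbit_training(model)

# "All layers" is read here as the attention and MLP projections of every
# transformer block; the exact target_modules used are not documented.
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# Trainer arguments matching the stated schedule; pair these with the
# SQuAD dataset tokenized into prompt/answer text for causal LM training.
args = TrainingArguments(
    output_dir="wizard7b-squad-lora",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
)
```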

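To run the model locally, a hedged loading sketch, assuming the repo hosts full weights loadable through transformers with bitsandbytes 8-bit quantization; the SQuAD-style prompt format below is illustrative, not documented by this card:

```python
# Hypothetical local inference; repo layout and prompt format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gmongaras/Wizard_7B_Squad_8bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, load_in_8bit=True, device_map="auto"
)

prompt = (
    "Context: The Eiffel Tower is in Paris.\n"
    "Question: Where is the Eiffel Tower?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```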