LLaMA 2 7B fine-tuned on the Suchinthana/databricks-dolly-15k-sinhala dataset. Training used 3,000 datapoints and ran for 200 steps (~1.01 epochs).
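The step-to-epoch arithmetic implies an effective batch size of roughly 15 examples per step (this figure is not stated in the card; it is inferred from the numbers above):

```python
# Infer the effective batch size from the reported training numbers.
n_datapoints = 3000   # datapoints used for fine-tuning
steps = 200           # optimizer steps run
epochs = 1.01         # reported fraction of the dataset covered

# examples seen = epochs * n_datapoints; divide by steps for examples per step
effective_batch = epochs * n_datapoints / steps
print(effective_batch)  # ≈ 15.15, i.e. about 15 examples per optimizer step
```

In practice this could be, for example, a per-device batch of 1 with gradient accumulation, but the card does not specify the split.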
