This model is a fine-tuned version of the ChatGLM3 base model, trained on the Stanford Alpaca dataset. The fine-tuning uses the scripts in the ChatGLM3/finetune_basemodel_demo directory.

Steps to reproduce fine-tuning:

  1. Download alpaca_data.json from the Stanford Alpaca repository (https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json).
  2. Convert alpaca_data.json to JSON Lines (alpaca_data.jsonl) using the format_alpaca2jsonl.py script in the ChatGLM3/finetune_basemodel_demo/scripts directory, making sure the input and output paths are specified correctly (a rough sketch of this conversion is shown after the list).
  3. Run the finetune_lora.sh script in the ChatGLM3/finetune_basemodel_demo/scripts directory, setting the DATASET_PATH variable to the location of your formatted dataset.
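
For orientation, the sketch below shows the kind of conversion step 2 performs: alpaca_data.json is a single JSON array, while the fine-tuning demo expects JSON Lines, i.e. one JSON object per line. The file paths and the assumption that each record's fields are written through unchanged are illustrative only; the repository's format_alpaca2jsonl.py is the authoritative script.

```python
import json

# Illustrative sketch only: use the repository's
# ChatGLM3/finetune_basemodel_demo/scripts/format_alpaca2jsonl.py for the
# actual conversion. The paths below and the pass-through of fields are
# assumptions made for this example.
INPUT_PATH = "alpaca_data.json"    # JSON array of Alpaca records (assumed local path)
OUTPUT_PATH = "alpaca_data.jsonl"  # one JSON object per line

with open(INPUT_PATH, "r", encoding="utf-8") as f:
    records = json.load(f)  # list of dicts with "instruction", "input", "output" keys

with open(OUTPUT_PATH, "w", encoding="utf-8") as out:
    for record in records:
        # Write each record on its own line. The repository script may also
        # rename or merge fields to match what finetune_lora.sh expects.
        out.write(json.dumps(record, ensure_ascii=False) + "\n")
```

JSON Lines keeps one training example per line, which lets data loaders stream the file without parsing the whole array at once.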

Please adhere to the licensing agreements of the Stanford Alpaca Dataset when using this model.
