
Information

Alpaca 30B quantized to 4-bit with GPTQ, compatible with the GPTQ versions used in Oobabooga's Text Generation WebUI and KoboldAI.

Quantized using the --true-sequential and --groupsize 128 optimizations.
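To illustrate what --groupsize 128 means, here is a minimal toy sketch of group-wise 4-bit quantization: every group of 128 weights gets its own scale and zero point, which keeps quantization error lower than a single per-tensor scale. This is illustrative only; real GPTQ additionally uses second-order (Hessian-based) error correction, which is not shown here.

```python
import numpy as np

def quantize_groupwise(weights, bits=4, groupsize=128):
    """Toy round-to-nearest quantization with one scale/zero per group,
    mimicking the effect of GPTQ's --groupsize 128 (no error correction)."""
    qmax = 2**bits - 1
    w = weights.reshape(-1, groupsize)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax          # per-group step size
    scale[scale == 0] = 1.0               # guard against constant groups
    q = np.round((w - wmin) / scale).clip(0, qmax)   # 4-bit integer codes
    return (q * scale + wmin).reshape(weights.shape) # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(size=(1024,)).astype(np.float32)
w_hat = quantize_groupwise(w)
```

Because each group's scale adapts to its own range, the worst-case error per weight is half a group's step size, which is why grouped quantization evaluates better than per-tensor 4-bit at the cost of storing extra scales (and hence a bit more VRAM).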

This was made using Chansung's 30B Alpaca LoRA: https://huggingface.co/chansung/alpaca-lora-30b

Update 04.06.2023

This is a more recent merge of Chansung's Alpaca LoRA, which was retrained on the cleaned Alpaca dataset as of 04/06/2023 with refined training parameters.

Training Parameters

  • num_epochs=10
  • cutoff_len=512
  • group_by_length
  • lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'
  • lora_r=16
  • micro_batch_size=8

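The parameters above mean each targeted attention projection (q_proj, k_proj, v_proj, o_proj) receives a rank-16 low-rank update, which is merged back into the base weights to produce this model. A minimal numpy sketch of that merge, with illustrative shapes (not the real 30B dimensions, and with random matrices standing in for the trained adapter):

```python
import numpy as np

d_model, r = 64, 16   # d_model is illustrative; lora_r=16 is from the card

rng = np.random.default_rng(0)
W = rng.normal(size=(d_model, d_model))   # frozen base weight, e.g. q_proj
A = rng.normal(size=(r, d_model)) * 0.01  # trained down-projection (rank r)
B = rng.normal(size=(d_model, r)) * 0.01  # trained up-projection

delta = B @ A              # low-rank update; rank is at most r=16
W_merged = W + delta       # merging bakes the LoRA delta into the base weight
```

Only A and B are trained during fine-tuning; merging them into W means the released model needs no adapter files at inference time.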
Benchmarks

Perplexity (lower is better):

  • Wikitext2: 4.4740
  • Ptb-New: 8.6826
  • C4-New: 6.5172

Note: This version uses --groupsize 128, resulting in better evaluation scores (lower perplexity). However, it consumes more VRAM.
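The scores above are perplexities: the exponential of the average per-token negative log-likelihood over the evaluation set. A minimal sketch of that computation, using made-up per-token losses rather than the actual eval data:

```python
import math

# Hypothetical per-token negative log-likelihoods (in nats) on an eval set.
nlls = [1.2, 1.6, 1.4, 1.5, 1.3]

# Perplexity = exp(mean NLL); lower means the model is less "surprised".
perplexity = math.exp(sum(nlls) / len(nlls))
```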
