
Model Card for ace2105/llama2-api-test

A Llama-2 model fine-tuned on different APIs for generating test cases.

Model Details

Model Description

Testing is a major concern for developers. With this in mind, this model fine-tunes LLaMA-2-7b on different APIs: given your API details, it generates test-case scenarios.

  • Developed by: Anish Vantagodi
  • Funded by: Kusho
  • Shared by: Anish Vantagodi
  • Model type: LLaMA-2-7b, PEFT fine-tuned
  • Language(s) (NLP): English
  • License: MIT
  • Finetuned from model: meta-llama/Llama-2-7b-hf

Uses

Used for generating test-case scenarios for APIs.
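
The snippet below is a minimal inference sketch, not the author's exact pipeline. It assumes transformers, peft, and bitsandbytes are installed and that you have access to the gated meta-llama/Llama-2-7b-hf base weights; the prompt format shown is an illustrative assumption, not necessarily the format used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "ace2105/llama2-api-test"

# Load the base model in 4-bit NF4, matching the training-time quantization
# listed under "Training procedure" below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Describe your API and ask for test-case scenarios (hypothetical prompt).
prompt = (
    "Generate test case scenarios for the following API:\n"
    "POST /users with body {name, email}"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```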

Model Card Authors

Anish Vantagodi: https://github.com/anish2105

Training procedure

The following bitsandbytes quantization config was used during training:

  • quant_method: bitsandbytes
  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float16
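
For reference, the sketch below reconstructs this quantization setup as a BitsAndBytesConfig (the bitsandbytes integration in transformers), using the values listed above; it is an illustration, not the author's original training script.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with float16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```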

Framework versions

  • PEFT 0.6.2