Finetuning Overview:

Model Used: tiiuae/falcon-180B
Dataset: Databricks-dolly-15k

Dataset Insights:

The Databricks-dolly-15k dataset comprises over 15,000 records curated through the collective efforts of numerous Databricks professionals. It is designed to:

  • Enable instruction-tuned models to exhibit ChatGPT-like interactivity.
  • Offer prompt/response pairs across eight instruction categories: the seven categories from the InstructGPT paper plus an added open-ended category.
  • Ensure authenticity: contributors were restricted from sourcing content online (with the exception of Wikipedia for some categories) and from using generative AI to craft responses.

During the dataset's creation, contributors answered questions posed by their peers. They were asked to rephrase the original queries and to focus on giving accurate responses. Certain data subsets also incorporate Wikipedia references, identifiable by bracketed citation numbers like [42].
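Because those bracketed citation numbers are layout artifacts rather than content, it is common to strip them before training. A minimal sketch (the helper name is hypothetical, not part of any dolly tooling):

```python
import re

def strip_citations(text: str) -> str:
    """Remove bracketed Wikipedia citation markers such as [42] from a passage."""
    return re.sub(r"\[\d+\]", "", text)

sample = "Paris is the capital of France.[42] It lies on the Seine.[7]"
print(strip_citations(sample))
# "Paris is the capital of France. It lies on the Seine."
```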

Finetuning Details:

The finetuning was performed with MonsterAPI's no-code LLM finetuner:

  • Duration: The session spanned 41.7 hours.
  • Cost: The entire process cost $184.31.
  • Hardware Utilized: 2x A100 80GB GPUs.

Hyperparameters & Additional Details:

  • Model Path: tiiuae/falcon-180B
  • Learning Rate: 0.0002
  • Epochs: 1
  • Data Split: Training 90% / Validation 10%
  • Gradient Accumulation Steps: 1

Prompt Used:

### INSTRUCTION:
[instruction]

[context]

### RESPONSE:
[response]
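A dolly-15k record can be rendered into the template above with a small formatting helper (a sketch with a hypothetical function name; the context field is optional in dolly-15k, so it is omitted when empty):

```python
def build_prompt(instruction: str, context: str = "", response: str = "") -> str:
    """Render an instruction/context/response triple into the prompt template."""
    parts = ["### INSTRUCTION:", instruction]
    if context:  # context is optional in dolly-15k records
        parts += ["", context]
    parts += ["", "### RESPONSE:", response]
    return "\n".join(parts)

print(build_prompt("Name the tallest mountain.", response="Mount Everest."))
```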

Loss Metrics:

Training loss: [training loss curve]


License: Apache 2.0
