This model is a fine-tuned version of Meta's Llama 3.2 1B Instruct model, specifically tailored for Laravel 11 documentation and related queries. Compared with the base model, it gives more accurate and concise answers to Laravel 11 questions and provides step-by-step instructions for complex queries.

Model Details

Model Description

  • Fine-tuned by: Ryan Yannelli
  • Model type: Language model fine-tuned for Laravel 11 documentation
  • Language(s) (NLP): English
  • License: Llama 3.2 Community License
  • Finetuned from model: meta-llama/Llama-3.2-1B-Instruct

Uses

Direct Use

This model is designed to assist developers with Laravel 11-related queries. It can answer simple questions about Laravel 11 in a few sentences. For more complex questions, it offers step-by-step instructions and may ask follow-up questions for clarity.

Out-of-Scope Use

This model is specifically trained for Laravel 11 documentation and may not perform well on queries outside this domain. It should not be used for general-purpose language tasks or for documentation of other PHP frameworks or Laravel versions.

Bias, Risks, and Limitations

  • The model's knowledge is limited to Laravel 11 documentation up to October 3rd, 2024.
  • Because of its small 1B-parameter size, the model's attention degrades over longer contexts.
  • The model may not perform well on tasks outside of Laravel 11 documentation.

Recommendations

Users should verify important information or code snippets with official Laravel 11 documentation. The model should be used as an assistant rather than a definitive source of information.

How to Get Started with the Model

To get started with the model locally, you can use one of the following tools:

  • LM Studio
  • Jan
  • vLLM
  • llama.cpp

These tools allow you to run the model on your local machine. Choose the one that best fits your system requirements and preferences.
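As a minimal sketch, the model can also be loaded programmatically with the llama-cpp-python bindings. The repo_id below matches this model card, but the quantization filename and context size are assumptions; use the GGUF file you actually download:

    # Sketch: load the GGUF model with llama-cpp-python
    # (pip install llama-cpp-python huggingface_hub)
    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id="yannelli/Laravel-11-Llama-3.2-1B-Instruct-GGUF",
        filename="*Q4_K_M.gguf",  # assumed quantization; pick any available GGUF file
        n_ctx=4096,               # assumed context window; adjust to your hardware
    )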

Settings

Best outputs were observed with the following settings:

  • Temperature: 0.5
  • Top K Sampling: 40
  • Repeat penalty: 1.1
  • Min P Sampling: 0.05
  • Top P Sampling: 0.95
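As a hedged example of applying these settings, the snippet below uses the llama-cpp-python chat completion API with the `llm` object from the sketch above; the system prompt, user question, and token budget are assumptions for illustration:

    # Sketch: chat completion using the recommended sampling settings
    messages = [
        {"role": "system", "content": "You are a Laravel 11 documentation assistant."},
        {"role": "user", "content": "How do I define a route with a required parameter in Laravel 11?"},
    ]

    response = llm.create_chat_completion(
        messages=messages,
        temperature=0.5,     # Temperature
        top_k=40,            # Top K Sampling
        top_p=0.95,          # Top P Sampling
        min_p=0.05,          # Min P Sampling
        repeat_penalty=1.1,  # Repeat penalty
        max_tokens=512,      # assumed output budget
    )
    print(response["choices"][0]["message"]["content"])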

Training Details

Training Data

The model was trained on three custom datasets:

  • yannelli/laravel-11-qa
  • yannelli/laravel-11-qa-long-form
  • yannelli/laravel-11-code-samples (private)

These datasets contain Laravel 11 documentation and related question-answer pairs.
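The two public datasets can be pulled directly from the Hugging Face Hub; a minimal sketch (column names depend on the datasets and are not assumed here):

    # Sketch: load the public training datasets with the `datasets` library
    from datasets import load_dataset

    qa = load_dataset("yannelli/laravel-11-qa")
    qa_long = load_dataset("yannelli/laravel-11-qa-long-form")
    # yannelli/laravel-11-code-samples is private and cannot be loaded without access.
    print(qa)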

Training Procedure

The model was trained in multiple stages, using both private (offline) data and the public datasets listed above.

Training Hyperparameters

  • Training regime: Supervised fine-tuning (SFTTrainer)
  • Optimizer: AdamW 8-Bit
  • Learning Rate Scheduler Type: Cosine
  • Learning Rate: 0.000095
  • GPU: NVIDIA 3070 Ti
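For orientation, a comparable setup with TRL's SFTTrainer might look like the sketch below. Only the optimizer, scheduler, and learning rate come from this card; the batch size, epoch count, and text column name are assumptions, not the actual training configuration:

    # Sketch of an SFT setup mirroring the listed hyperparameters (assumptions noted inline)
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset("yannelli/laravel-11-qa", split="train")

    config = SFTConfig(
        output_dir="laravel-11-llama-3.2-1b",
        optim="adamw_bnb_8bit",         # AdamW 8-Bit
        lr_scheduler_type="cosine",     # Cosine schedule
        learning_rate=9.5e-5,           # 0.000095
        per_device_train_batch_size=2,  # assumed; not stated in the card
        num_train_epochs=1,             # assumed; not stated in the card
        dataset_text_field="text",      # assumed column name
    )

    trainer = SFTTrainer(
        model="meta-llama/Llama-3.2-1B-Instruct",
        args=config,
        train_dataset=dataset,
    )
    trainer.train()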

Environmental Impact

  • Hardware Type: NVIDIA 3070 Ti GPU
  • Hours used: 121
  • Infrastructure: Private
  • Carbon Emitted: 18.14 kg CO2 eq.

Carbon emissions were calculated using the Machine Learning Impact calculator.
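The calculator's estimate is roughly GPU power draw × runtime × grid carbon intensity. As a back-of-the-envelope check, the TDP and carbon-intensity values below are assumptions used only to show the reported figure is plausible:

    # Rough sanity check of the reported emissions (input values are assumptions)
    tdp_kw = 0.29            # assumed ~290 W TDP for an RTX 3070 Ti
    hours = 121              # training time from this card
    carbon_intensity = 0.52  # assumed kg CO2 eq. per kWh for the local grid

    emissions_kg = tdp_kw * hours * carbon_intensity
    print(f"{emissions_kg:.2f} kg CO2 eq.")  # ~18.2 kg, in line with the reported 18.14 kg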

Technical Specifications

Model Architecture and Objective

The model is based on the Meta Llama 3.2 1B Instruct architecture and is fine-tuned for Laravel 11 documentation tasks.

Compute Infrastructure

Hardware

NVIDIA 3070 Ti GPU

Model Card Authors

Ryan Yannelli

GGUF

  • Model size: 1.24B params
  • Architecture: llama
  • Available quantizations: 2-bit, 4-bit, 5-bit, 8-bit, 16-bit
