
ScikitLLM is an LLM fine-tuned to write referenced answers and code for the Scikit-Learn documentation.

Features of ScikitLLM include:

  • Support for retrieval-augmented generation (RAG) over three retrieved chunks
  • Sources and quotations using a modified version of the wiki syntax ("")
  • Code samples and examples based on the code quoted in the chunks
  • Expanded knowledge of Scikit-Learn concepts and documentation
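To illustrate how three retrieved chunks might be assembled into a context for the model, here is a minimal sketch. The chunk delimiters, the `build_rag_context` helper, and the exact placement of the `""` quotation markers are assumptions for illustration, not the card's documented interface:

```python
# Hypothetical sketch: assembling a three-chunk RAG context for ScikitLLM.
# The "[chunk N]" labels and the "" quote markers are assumptions based on
# the card's mention of a modified wiki quotation syntax.

def build_rag_context(chunks):
    """Join up to three retrieved documentation chunks into one context block."""
    if len(chunks) > 3:
        raise ValueError("this RAG setup uses at most three chunks")
    parts = []
    for i, chunk in enumerate(chunks, start=1):
        # Wrap each chunk so the model can later quote it with "" markers.
        parts.append(f'[chunk {i}] ""{chunk.strip()}""')
    return "\n\n".join(parts)

chunks = [
    "LinearRegression fits a linear model minimizing the residual sum of squares.",
    "fit(X, y) estimates the coefficients from the training data.",
    "predict(X) returns predicted values for the samples in X.",
]
print(build_rag_context(chunks))
```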

Training

ScikitLLM is based on Mistral-OpenHermes 7B, a pre-existing fine-tune of Mistral 7B. OpenHermes already includes many of the capabilities desired for the end use, including instruction tuning, source analysis, and native support for the ChatML syntax.
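ChatML frames a conversation as role-tagged turns between `<|im_start|>` and `<|im_end|>` delimiters. A minimal sketch of that framing follows; the delimiters are standard ChatML, but the system prompt wording is an assumption for illustration:

```python
def to_chatml(system, user):
    """Format a system prompt and a user question as ChatML turns,
    ending with an open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Assumed system prompt wording, shown only to illustrate the format.
prompt = to_chatml(
    "You answer questions about the Scikit-Learn documentation.",
    "How do I fit a LinearRegression model?",
)
print(prompt)
```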

As a fine-tune of a fine-tune, ScikitLLM has been trained with a lower learning rate than is commonly used in fine-tuning projects.
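As a sketch of what such a configuration might look like: the card states only that a lower-than-usual learning rate was used, so every value and name below is a hypothetical placeholder, not the actual training recipe:

```python
# Hypothetical hyperparameters for a fine-tune of a fine-tune.
# All figures are illustrative placeholders; none are published on the card.
training_config = {
    "base_model": "Mistral-OpenHermes 7B",  # per the card; exact checkpoint id assumed
    "learning_rate": 5e-6,   # lower than the ~2e-5 often used for a first fine-tune
    "lr_scheduler": "cosine",
    "warmup_ratio": 0.03,
    "bf16": True,            # matches the released BF16 tensors
}
```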

Model size: 7.24B params (Safetensors)
Tensor type: BF16

Model tree for probabl-ai/ScikitLLM-Model
