
Model Card for bytebarde/TinyLlama-sft-lora-alpaca

A LoRA adapter for TinyLlama/TinyLlama-1.1B-Chat-v1.0, produced by supervised fine-tuning (SFT) on the Alpaca dataset.

Model Details

  • Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Fine-tuning method: Supervised fine-tuning (SFT) with LoRA, via PEFT
  • Training dataset: Alpaca
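
The adapter's metadata can be inspected directly from the Hub. A minimal sketch using the PEFT API; the values shown in comments are expectations based on this card, not verified output:

```python
from peft import PeftConfig

# Downloads and parses adapter_config.json from the Hub.
config = PeftConfig.from_pretrained("bytebarde/TinyLlama-sft-lora-alpaca")
print(config.base_model_name_or_path)  # expected: TinyLlama/TinyLlama-1.1B-Chat-v1.0
print(config.peft_type, config.task_type)
```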

Training Details

Training Procedure

Training Hyperparameters

  • Training regime: fp16 mixed precision
  • Per-device train batch size: 4
  • Epochs: 10
  • Training loss: 0.9044
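
For reference, here is a minimal sketch of how these hyperparameters could map onto a PEFT + TRL training script (assuming a TRL version contemporary with PEFT 0.7.1). The trainer choice, the LoRA rank and alpha, the `tatsu-lab/alpaca` dataset id, and the `text` column name are assumptions; only the batch size, epoch count, and fp16 setting come from this card.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# LoRA config: r and lora_alpha are illustrative defaults, not from this card.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)

args = TrainingArguments(
    output_dir="tinyllama-sft-lora-alpaca",
    per_device_train_batch_size=4,  # from this card
    num_train_epochs=10,            # from this card
    fp16=True,                      # fp16 mixed precision, from this card
)

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    args=args,
    train_dataset=load_dataset("tatsu-lab/alpaca", split="train"),  # assumed dataset id
    dataset_text_field="text",  # assumed column name
    peft_config=peft_config,
)
trainer.train()
```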

Framework versions

  • PEFT 0.7.1
Inference Examples
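
A minimal loading and generation sketch with PEFT and Transformers; the Alpaca-style prompt template is an assumption, so adjust it to whatever format was used during training:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "bytebarde/TinyLlama-sft-lora-alpaca",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Alpaca-style instruction prompt (assumed template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain LoRA fine-tuning in one sentence.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```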

Dataset used to train bytebarde/TinyLlama-sft-lora-alpaca

  • Alpaca