---
base_model: tiiuae/Falcon3-10B-Instruct
tags:
- fluently-lm
- fluently-sets
- demo
- reasoning
- thinking
- text-generation-inference
- transformers
- unsloth
- falcon3
- falcon
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- fluently-sets/ultrathink
pipeline_tag: text-generation
---
# FalconThink3-10B Demo (Finetune of Falcon3-10B-IT on Ultrathink dataset)
A Q4_K_M GGUF quant is available here.
This is an SFT finetune of Falcon3-10B-IT on the Ultrathink dataset. It is far from a perfect model; its main purpose is to demonstrate how the dataset can be used.
- Base model: tiiuae/Falcon3-10B-Instruct
- Model type: LlamaForCausalLM
- Number of parameters: 10.3B
- Precision: FP16
- Training method: SFT
- Training dataset: fluently-sets/ultrathink
- Languages: English (mostly)
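Since the model is a standard `LlamaForCausalLM` checkpoint, it can be loaded with `transformers`. Below is a minimal sketch; the repo id `fluently-sets/FalconThink3-10B` is an assumption for illustration — substitute the actual Hugging Face id of this fine-tune.

```python
# Minimal usage sketch for a chat-style instruct model via transformers.
# MODEL_ID is a hypothetical placeholder, not a confirmed repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fluently-sets/FalconThink3-10B"  # assumption: replace with real id


def build_messages(prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat format expected by instruct models."""
    return [{"role": "user", "content": prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    # apply_chat_template inserts the model's chat special tokens for us
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Explain step by step why the sky appears blue."))
```

The FP16 weights of a 10.3B-parameter model need roughly 20 GB of memory, so a GPU with sufficient VRAM (or the GGUF quant mentioned above) is recommended.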
Trained by the Fluently Team (@ehristoforu) with Unsloth AI, with love 🥰