---
datasets:
- mahiatlinux/Reflection-Dataset-ShareGPT-v2
base_model: microsoft/Phi-3.5-mini-instruct
---

# Shiny-Phi3.5

**Shiny-Phi3.5** is a reflection fine-tune of Phi-3.5 using mahiatlinux's dataset.

Recently, "Reflection 70B" drew a lot of attention after claiming massive performance gains via reflection tuning. However, independent testing has been unable to reproduce those results. I was curious to try it myself, so I made this model. If you'd like to try a smaller reflection model for yourself, or just one that's not associated with the original, then here you go!

**What is reflection?**

Reflection fine-tuning guides the model to generate a plan, then reflect on that plan before proceeding to the final output. A similar approach has been used with Claude: instructing the model to plan and reflect via system prompts. Reflection tuning "bakes in" that behavior instead.
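
**Usage**

Here is a minimal inference sketch with 🤗 Transformers if you want to try the model. The repo ID below is a placeholder (swap in this repo's actual ID), and the assumption that the model emits its plan, reflection, and answer in a single generation follows from how reflection tuning is described above:

```python
# Minimal sketch: the repo ID is a placeholder, and we assume the fine-tune
# produces its plan, reflection, and final answer in one generation pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Shiny-Phi3.5"  # placeholder; use this repo's ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "How many r's are in the word 'strawberry'?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Greedy decoding; leave headroom for the plan and reflection before the answer.
output = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the reflection behavior is baked in by fine-tuning, no special system prompt should be needed; a plain user question should trigger the plan-reflect-answer pattern on its own.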