Train almost any model on a variety of tasks such as LLM finetuning, text classification/regression, summarization, question answering, image classification/regression, object detection, tabular data, etc. for FREE using AutoTrain locally. 🔥 https://github.com/huggingface/autotrain-advanced
INTRODUCING Hugging Face AutoTrain Client 🔥 Fine-tuning models just got even easier! Now you can fine-tune SOTA models on all compatible dataset-model pairs on the Hugging Face Hub using Python, running on Hugging Face servers. Choose from a number of GPU flavors, millions of model and dataset pairs, and 10+ tasks 🤗
To try it, install autotrain-advanced using pip. You can also skip automatic dependency resolution by installing with --no-deps, but then you'll need to install some dependencies by hand.
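For reference, a minimal install sketch (the package name comes from the repo linked above; --no-deps is a standard pip flag, and the hand-installed packages shown are just examples of likely dependencies, not an exhaustive list):

```bash
# Standard install: pip resolves and installs all dependencies.
pip install autotrain-advanced

# Alternative: skip automatic dependency resolution entirely...
pip install --no-deps autotrain-advanced
# ...and then install the dependencies you need by hand, e.g.:
pip install torch transformers datasets
```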
🚨 NEW TASK ALERT 🚨 Extractive Question Answering: because sometimes generative is not all you need. AutoTrain is the only open-source, no-code solution to offer so many tasks across different modalities. Current task count: 23. Check out the blog post on getting started with this task: https://huggingface.co/blog/abhishek/extractive-qa-autotrain
🚨 NEW TASK ALERT 🚨 AutoTrain now supports Object Detection! Transform your projects with these powerful new features:
🔹 Fine-tune any supported model from the Hugging Face Hub
🔹 Seamless logging with TensorBoard or W&B
🔹 Support for local and Hub datasets
🔹 Configurable training for tailored results
🔹 Train locally or leverage Hugging Face Spaces
🔹 Deployment-ready with API inference or Hugging Face endpoints
AutoTrain: https://hf.co/autotrain
The first open model with a Stable Diffusion 3-like architecture is JUST out 📣 - but it is not SD3! 🤔
It is Tencent-Hunyuan/HunyuanDiT by Tencent, a 1.5B-parameter DiT (diffusion transformer) text-to-image model 🖼️✨, trained with multilingual CLIP + multilingual T5 text encoders for English 🤝 Chinese understanding
Introducing AutoTrain Configs! Now you can train models using YAML config files! 🔥 These configs are easy to understand and not at all overwhelming, so even a person with almost zero knowledge of machine learning can train state-of-the-art models without writing any code. Check out the example configs in the config directory of the autotrain-advanced GitHub repo, and feel free to share your own configs by creating a pull request 🤗 GitHub repo: https://github.com/huggingface/autotrain-advanced
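To give a flavor, here is a config sketch modeled on the examples in that config directory; field names and defaults may differ between AutoTrain versions, and the project name, dataset, and hyperparameters below are illustrative placeholders:

```yaml
task: llm-sft                      # supervised fine-tuning of an LLM
base_model: meta-llama/Meta-Llama-3-8B-Instruct
project_name: my-llama3-finetune   # placeholder project/output name
log: tensorboard
backend: local                     # train on your own machine

data:
  path: HuggingFaceH4/no_robots    # a Hub dataset; local paths also work
  train_split: train
  chat_template: tokenizer         # apply the tokenizer's chat template
  column_mapping:
    text_column: messages

params:
  block_size: 1024
  epochs: 1
  batch_size: 2
  lr: 2e-5
  peft: true                       # parameter-efficient (LoRA-style) tuning
  quantization: int4
  mixed_precision: bf16

hub:
  username: ${HF_USERNAME}         # read from environment variables
  token: ${HF_TOKEN}
  push_to_hub: true
```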
Trained another version of llama3-8b-instruct which beats the base model, this time without losing too many points on the GSM8K benchmark. Again, using AutoTrain 🔥 pip install autotrain-advanced Trained model: abhishek/autotrain-llama3-orpo-v2
With AutoTrain, you can already finetune the latest Llama 3 models without writing a single line of code. Here's an example finetune of the Llama 3 8B model: abhishek/autotrain-llama3-no-robots
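Runs like these are typically launched from a config file with the autotrain CLI; a minimal sketch, assuming a config along the lines of the one shown in the configs post above (the file name is a placeholder):

```bash
# Launch a training run defined in a YAML config file.
# llama3_orpo.yml is a placeholder; point this at your own config.
autotrain --config llama3_orpo.yml
```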
3 text encoders: two CLIPs and one T5-XXL; they are plug-and-play: removing the largest one still maintains competitive results
The dataset was deduplicated with SSCD, which helped reduce memorization (no further details about the dataset, though)
Variants:
- A DPO fine-tuned model showed great improvement in prompt understanding and aesthetics
✏️ An Instruct Edit 2B model was trained and learned how to do text replacement
Results:
✅ State of the art in automated evals for composition and prompt understanding
✅ Best win rate in human preference evaluation for prompt understanding, aesthetics, and typography (missing some details on how many participants and the design of the experiment)