David Korn

DaveK23

Recent Activity

- Liked a Space 22 days ago: ginigen/Multi-LoRAgen
- Updated a collection about 1 month ago: Audiovisual

DaveK23's activity

New activity in Yntec/blitz_diffusion 26 days ago:
discussion "Broken models [SOLVED]" (6 comments), #5 opened 27 days ago by rockguard
Reacted to tegridydev's post with ❤️ about 2 months ago:
WTF is Fine-Tuning? (intro4devs)

Fine-tuning your LLM is like min-maxing your ARPG hero so you can push high-level dungeons and get the most out of your build/gear... Makes sense, right? 😃

Here's a cheat sheet for devs (but open to anyone!)

---

TL;DR

- Full fine-tuning: maximum performance and reliability, but the highest compute and memory cost.
- PEFT (parameter-efficient fine-tuning): efficient, cost-effective, and mainstream, increasingly enhanced by AutoML.
- Instruction fine-tuning: ideal for command-following AI, often combined with RLHF and chain-of-thought (CoT) data.
- RAFT (retrieval-augmented fine-tuning): best for fact-grounded models backed by dynamic retrieval.
- RLHF (reinforcement learning from human feedback): produces ethical, high-quality conversational AI, but is expensive.

Choose wisely and match your approach to your task, budget, and deployment constraints.
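To make the PEFT row concrete, here's a minimal NumPy sketch of the low-rank adapter idea (LoRA-style). The post doesn't prescribe a specific PEFT method, so the shapes, rank, and names below are illustrative assumptions, not the article's implementation:

```python
# LoRA-style PEFT sketch (illustrative): instead of updating the full weight
# matrix W, train a low-rank delta B @ A so the effective weight is W + B @ A.
import numpy as np

d_out, d_in, rank = 1024, 1024, 8  # assumed layer size and adapter rank

# Frozen pretrained weight: d_out * d_in parameters, never updated.
W = np.random.randn(d_out, d_in) * 0.02

# Trainable adapters: only rank * (d_in + d_out) parameters.
A = np.random.randn(rank, d_in) * 0.02  # down-projection
B = np.zeros((d_out, rank))             # up-projection, zero-init so the
                                        # adapted model starts identical to W

def forward(x):
    # Base output plus the low-rank correction.
    return W @ x + B @ (A @ x)

full_params = W.size            # what full fine-tuning would train
lora_params = A.size + B.size   # what PEFT actually trains
print(f"full fine-tuning: {full_params:,} params; "
      f"LoRA: {lora_params:,} ({100 * lora_params / full_params:.1f}%)")
```

Here the adapter trains roughly 1.6% of the layer's parameters, which is why PEFT is the cost-effective mainstream choice the TL;DR describes.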

I just posted the full extended article here if you want to continue reading >>>

https://huggingface.co/blog/tegridydev/fine-tuning-dev-intro-2025