Zaid Ahmad Awan (zaidawan)
AI & ML interests: None yet
Recent Activity
updated a model 16 days ago: zaidawan/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Adapter
replied to smangrul's post 16 days ago:
🚨 New release of 🤗 PEFT!
1. New methods for merging LoRA weights. Refer to this HF post for more details: https://huggingface.co/posts/smangrul/850816632583824
2. AWQ and AQLM support for LoRA. You can now:
   - Train adapters on top of 2-bit quantized models with AQLM
   - Train adapters on top of powerful AWQ quantized models
   Note that for inference you can't merge the LoRA weights into the base model!
3. DoRA support: enabling DoRA is as easy as adding `use_dora=True` to your `LoraConfig`. Find out more about this method here: https://arxiv.org/abs/2402.09353
4. Improved documentation, particularly docs regarding PEFT LoRA + DeepSpeed and PEFT LoRA + FSDP! 📄 Check out the docs at https://huggingface.co/docs/peft/index.
5. Full release notes: https://github.com/huggingface/peft/releases/tag/v0.9.0
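The `use_dora=True` switch mentioned in the post is the only change needed relative to a standard LoRA setup. A minimal sketch, assuming a Llama 3.1 8B Instruct base model and illustrative LoRA hyperparameters (rank, alpha, and target modules are not specified in the post):

```python
# Minimal sketch of enabling DoRA through PEFT's LoraConfig, per the release post.
# The base model and LoRA hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

config = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections to adapt
    use_dora=True,                        # the DoRA switch introduced in PEFT v0.9.0
)

model = get_peft_model(base, config)
model.print_trainable_parameters()        # prints the adapter's trainable parameter count
```

The same `LoraConfig` can be used when training adapters on top of an AQLM- or AWQ-quantized base model; per the post, those adapters cannot be merged into the quantized base for inference.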
new activity 16 days ago on hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4: Finetuning AWQ model
Organizations: None yet
spaces (1)
✍ My Argilla Public (pinned, Sleeping)
models (2)
zaidawan/unsloth_Meta-Llama-3.1-8B-Instruct-bnb-4bit_Adapter (updated 16 days ago)
zaidawan/unsloth_test_llama3_8B_finetuned_adapter (updated 20 days ago)
datasets (2)
zaidawan/synthetic_agents (Viewer • updated 19 days ago • 27.2k • 14)
zaidawan/dataset (updated 19 days ago • 2)