Samuel Lima Braz PRO

samuellimabraz

AI & ML interests

None yet

Recent Activity

liked a Space about 18 hours ago
h2oai/h2ovl-mississippi
upvoted a collection 3 days ago
H2OVL Mississippi
updated a model 3 days ago
tech4humans/yolov8s-signature-detector

Organizations

Tech4Humans · Hugging Face Discord Community

Posts 1

I wrote an article on Parameter-Efficient Fine-Tuning (PEFT), exploring techniques for efficiently fine-tuning LLMs, their implementations, and variations.

The study is based on the article "Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning" and the PEFT library integrated with Hugging Face's Transformers.

Article: https://huggingface.co/blog/samuellimabraz/peft-methods
Notebook: https://colab.research.google.com/drive/1B9RsKLMa8SwTxLsxRT8g9OedK10zfBEP?usp=sharing
Collection: samuellimabraz/service-summary-6793ccfe774073328ea9f8df
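
For anyone new to the library, here is a minimal sketch of the core PEFT workflow the article builds on: wrap a frozen base model with a LoRA adapter and train only the injected low-rank matrices. The base model ("gpt2") and the hyperparameters are just example choices, not values from the article.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; any causal LM from the Hub works the same way.
base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                         # rank of the low-rank update matrices
    lora_alpha=16,               # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused QKV attention projection
)

# Freezes the base weights and injects trainable LoRA matrices.
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of parameters
```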

Analyzed methods (see the soft-prompt sketch after this list):
- Adapters: Soft Prompts (Prompt Tuning, Prefix Tuning, P-tuning), IA³.
- Reparameterization: LoRA, QLoRA, LoHa, LoKr, X-LoRA, Intrinsic SAID, and variations of initializations (PiSSA, OLoRA, rsLoRA, DoRA).
- Selective Tuning: BitFit, DiffPruning, FAR, FishMask.
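
To make the contrast with reparameterization methods like LoRA concrete, here is a minimal sketch of a soft-prompt configuration (Prompt Tuning) from the same library; the initialization text and virtual-token count are illustrative assumptions, not values from the article.

```python
from peft import PromptTuningConfig, PromptTuningInit, TaskType

prompt_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=16,                     # learned "soft" tokens prepended to the input
    prompt_tuning_init=PromptTuningInit.TEXT,  # initialize from real token embeddings
    prompt_tuning_init_text="Classify the sentiment of this review:",  # illustrative
    tokenizer_name_or_path="gpt2",             # illustrative tokenizer choice
)

# get_peft_model(base, prompt_config) would freeze the base model and
# train only the virtual-token embeddings.
```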

I'm just starting out in generative AI; I have more experience with computer vision and robotics. Just sharing here 🤗

Articles 1


PEFT: Parameter-Efficient Fine-Tuning Methods for LLMs

Datasets

None public yet