LoRA-Learns-Less-and-Forgets-Less (Hugging Face community organization LoRA-TMLR-2024)

Full finetuning and LoRA adapters for Llama-2-7B finetuned on Magicoder-Evol-Instruct-110K

AI & ML interests: None defined yet.

Organization Card
These are the model weights associated with the TMLR 2024 publication LoRA Learns Less and Forgets Less (Biderman et al. 2024). This work was done in collaboration with Databricks Mosaic AI Research.
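For reference, a minimal loading sketch (not part of the original card): it assumes the LoRA repositories contain standard PEFT adapters trained on top of meta-llama/Llama-2-7b-hf, and that the full-finetuning repositories contain complete model weights.

```python
# Minimal sketch, assuming standard PEFT adapters over meta-llama/Llama-2-7b-hf
# and complete weights in the full-finetuning repos.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"  # assumed base model (Llama-2-7B)

# Full finetuning: the repo holds a complete model, so load it directly.
full_model = AutoModelForCausalLM.from_pretrained(
    "LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# LoRA: load the base model, then attach the adapter weights.
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
lora_model = PeftModel.from_pretrained(
    base_model, "LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32"
)
# Optionally fold the adapter into the base weights for faster inference.
lora_model = lora_model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(BASE)
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(lora_model.device)
print(tokenizer.decode(lora_model.generate(**inputs, max_new_tokens=64)[0]))
```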
Models (17)

- LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32
- LoRA-TMLR-2024/starcoder-full-finetuning-lr-1e-05-20B-token
- LoRA-TMLR-2024/starcoder-lora-rank-256-20B-tokens
- LoRA-TMLR-2024/openwebmath-full-finetuning-lr-1e-05-20B-tokens
- LoRA-TMLR-2024/openwebmath-lora-rank-256-20B-tokens
- LoRA-TMLR-2024/openwebmath-lora-rank-16-20B-tokens
- LoRA-TMLR-2024/starcoder-lora-rank-16-20B-tokens
- LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128
- LoRA-TMLR-2024/magicoder-lora-rank-256-alpha-512
- LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05
Datasets: None public yet.