| Commit message | Author | Age | Files | Lines |
|---|---|---|---|---|
| Improved automation caps | Volpeon | 2023-04-16 | 3 | -6/+25 |
| Fix | Volpeon | 2023-04-15 | 1 | -19/+0 |
| TI via LoRA | Volpeon | 2023-04-15 | 2 | -7/+6 |
| Update | Volpeon | 2023-04-13 | 3 | -11/+5 |
| Update | Volpeon | 2023-04-10 | 3 | -3/+3 |
| Fix sample gen: models sometimes weren't in eval mode | Volpeon | 2023-04-10 | 3 | -28/+26 |
| Update | Volpeon | 2023-04-10 | 3 | -3/+3 |
| Update | Volpeon | 2023-04-09 | 1 | -1/+1 |
| Update | Volpeon | 2023-04-08 | 2 | -7/+7 |
| Fix TI | Volpeon | 2023-04-08 | 1 | -1/+2 |
| Fix | Volpeon | 2023-04-08 | 1 | -3/+2 |
| Update | Volpeon | 2023-04-08 | 3 | -5/+11 |
| Fixed Lora PTI | Volpeon | 2023-04-07 | 1 | -16/+19 |
| Fix | Volpeon | 2023-04-07 | 3 | -13/+8 |
| Fix | Volpeon | 2023-04-07 | 2 | -6/+8 |
| Update | Volpeon | 2023-04-07 | 1 | -1/+36 |
| TI: Bring back old embedding decay | Volpeon | 2023-04-04 | 1 | -1/+21 |
| Improved sparse embeddings | Volpeon | 2023-04-03 | 1 | -4/+4 |
| TI: Delta learning | Volpeon | 2023-04-03 | 1 | -23/+0 |
| Lora: Only register params with grad to optimizer | Volpeon | 2023-04-02 | 2 | -5/+0 |
| Revert | Volpeon | 2023-04-01 | 1 | -19/+81 |
| Fix | Volpeon | 2023-04-01 | 1 | -1/+3 |
| Combined TI with embedding and LoRA | Volpeon | 2023-04-01 | 1 | -58/+18 |
| Experimental: TI via LoRA | Volpeon | 2023-04-01 | 1 | -26/+4 |
| Fix TI | Volpeon | 2023-03-27 | 1 | -8/+10 |
| Sparse TI embeddings without sparse tensors | Volpeon | 2023-03-27 | 1 | -10/+8 |
| Improved TI embeddings | Volpeon | 2023-03-26 | 1 | -2/+1 |
| Fixed Lora training perf issue | Volpeon | 2023-03-24 | 1 | -7/+8 |
| Lora fix: Save config JSON, too | Volpeon | 2023-03-24 | 1 | -0/+3 |
| Refactoring, fixed Lora training | Volpeon | 2023-03-24 | 3 | -58/+30 |
| Update | Volpeon | 2023-03-23 | 3 | -12/+12 |
| Fixed SNR weighting, re-enabled xformers | Volpeon | 2023-03-21 | 1 | -11/+59 |
| Update | Volpeon | 2023-03-07 | 1 | -14/+11 |
| Update | Volpeon | 2023-03-01 | 2 | -4/+4 |
| Fixed TI normalization order | Volpeon | 2023-02-21 | 2 | -11/+15 |
| Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -6/+0 |
| Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -4/+11 |
| Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 3 | -7/+7 |
| Update | Volpeon | 2023-02-13 | 2 | -2/+2 |
| Fixed Lora training | Volpeon | 2023-02-08 | 1 | -18/+5 |
| Fix Lora memory usage | Volpeon | 2023-02-07 | 3 | -7/+1 |
| Add Lora | Volpeon | 2023-02-07 | 3 | -17/+203 |
| Restored LR finder | Volpeon | 2023-01-20 | 2 | -6/+3 |
| Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 2 | -2/+34 |
| Update | Volpeon | 2023-01-17 | 2 | -9/+10 |
| Fix | Volpeon | 2023-01-17 | 1 | -4/+5 |
| Fix | Volpeon | 2023-01-17 | 1 | -1/+0 |
| Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 1 | -9/+5 |
| Update | Volpeon | 2023-01-17 | 2 | -8/+8 |
| Update | Volpeon | 2023-01-17 | 2 | -8/+21 |