| | Commit message | Author | Age | Files | Lines |
|---|---|---|---|---|---|
| | ... | | | | |
| * | Update | Volpeon | 2023-03-31 | 1 | -0/+7 |
| * | Fix | Volpeon | 2023-03-31 | 1 | -1/+2 |
| * | Fix | Volpeon | 2023-03-31 | 1 | -1/+1 |
| * | Support Dadaptation d0, adjust sample freq when steps instead of epochs are used | Volpeon | 2023-03-31 | 1 | -4/+11 |
| * | Fix | Volpeon | 2023-03-31 | 1 | -1/+2 |
| * | Fix | Volpeon | 2023-03-28 | 1 | -1/+1 |
| * | Support num_train_steps arg again | Volpeon | 2023-03-28 | 1 | -9/+21 |
| * | Fix TI | Volpeon | 2023-03-27 | 1 | -8/+8 |
| * | Fix TI | Volpeon | 2023-03-27 | 1 | -1/+10 |
| * | Fix TI | Volpeon | 2023-03-27 | 1 | -1/+1 |
| * | Improved inverted tokens | Volpeon | 2023-03-26 | 1 | -1/+15 |
| * | Update | Volpeon | 2023-03-25 | 1 | -4/+10 |
| * | Update | Volpeon | 2023-03-24 | 1 | -3/+3 |
| * | Bring back Perlin offset noise | Volpeon | 2023-03-23 | 1 | -0/+7 |
| * | Update | Volpeon | 2023-03-23 | 1 | -1/+1 |
| * | Fix | Volpeon | 2023-03-22 | 1 | -4/+0 |
| * | Log DAdam/DAdan d | Volpeon | 2023-03-21 | 1 | -2/+2 |
| * | Added dadaptation | Volpeon | 2023-03-21 | 1 | -0/+28 |
| * | Fixed SNR weighting, re-enabled xformers | Volpeon | 2023-03-21 | 1 | -2/+2 |
| * | Test: https://arxiv.org/pdf/2303.09556.pdf | Volpeon | 2023-03-17 | 1 | -12/+26 |
| * | Update | Volpeon | 2023-03-07 | 1 | -4/+4 |
| * | Pipeline: Perlin noise for init image | Volpeon | 2023-03-04 | 1 | -1/+1 |
| * | Removed offset noise from training, added init offset to pipeline | Volpeon | 2023-03-03 | 1 | -1/+0 |
| * | Implemented different noise offset | Volpeon | 2023-03-03 | 1 | -2/+2 |
| * | Update | Volpeon | 2023-03-01 | 1 | -1/+1 |
| * | Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -1/+1 |
| * | Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -11/+10 |
| * | Update | Volpeon | 2023-02-18 | 1 | -2/+4 |
| * | Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -11/+27 |
| * | Back to xformers | Volpeon | 2023-02-17 | 1 | -3/+2 |
| * | Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 1 | -2/+4 |
| * | Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -1/+2 |
| * | Update | Volpeon | 2023-02-15 | 1 | -0/+1 |
| * | Update | Volpeon | 2023-02-13 | 1 | -8/+11 |
| * | Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 1 | -1/+1 |
| * | Fixed Lora training | Volpeon | 2023-02-08 | 1 | -6/+6 |
| * | Add Lora | Volpeon | 2023-02-07 | 1 | -4/+6 |
| * | Restored LR finder | Volpeon | 2023-01-20 | 1 | -2/+19 |
| * | Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 1 | -3/+3 |
| * | Smaller emb decay | Volpeon | 2023-01-17 | 1 | -1/+1 |
| * | Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 1 | -12/+4 |
| * | Update | Volpeon | 2023-01-17 | 1 | -47/+62 |
| * | Training update | Volpeon | 2023-01-16 | 1 | -3/+5 |
| * | If valid set size is 0, re-use one image from train set | Volpeon | 2023-01-16 | 1 | -5/+1 |
| * | Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 1 | -106/+113 |
| * | More training adjustments | Volpeon | 2023-01-16 | 1 | -5/+12 |
| * | Handle empty validation dataset | Volpeon | 2023-01-16 | 1 | -6/+3 |
| * | Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 1 | -40/+22 |
| * | Added Dreambooth strategy | Volpeon | 2023-01-15 | 1 | -23/+23 |
| * | Restored functional trainer | Volpeon | 2023-01-15 | 1 | -61/+21 |
