| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| ... | | | | |
| Added Perlin noise to training | Volpeon | 2023-03-04 | 1 | -0/+17 |
| Removed offset noise from training, added init offset to pipeline | Volpeon | 2023-03-03 | 1 | -10/+2 |
| Implemented different noise offset | Volpeon | 2023-03-03 | 1 | -21/+10 |
| Low freq noise with randomized strength | Volpeon | 2023-03-03 | 1 | -1/+8 |
| Better low freq noise | Volpeon | 2023-03-02 | 1 | -1/+1 |
| Changed low freq noise | Volpeon | 2023-03-01 | 1 | -23/+10 |
| Update | Volpeon | 2023-03-01 | 1 | -23/+27 |
| Fixed TI normalization order | Volpeon | 2023-02-21 | 1 | -4/+4 |
| Fix | Volpeon | 2023-02-21 | 1 | -6/+3 |
| Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -24/+29 |
| Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -2/+5 |
| Update | Volpeon | 2023-02-18 | 1 | -7/+14 |
| Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -4/+5 |
| Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 1 | -2/+1 |
| Fix | Volpeon | 2023-02-16 | 1 | -4/+2 |
| Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -8/+22 |
| Update | Volpeon | 2023-02-15 | 1 | -1/+1 |
| Made low-freq noise configurable | Volpeon | 2023-02-14 | 1 | -6/+11 |
| Better noise generation during training: https://www.crosslabs.org/blog/diffusion-with-offset-noise | Volpeon | 2023-02-13 | 1 | -0/+7 |
| Update | Volpeon | 2023-02-13 | 1 | -1/+1 |
| Fix Lora memory usage | Volpeon | 2023-02-07 | 1 | -2/+2 |
| Add Lora | Volpeon | 2023-02-07 | 1 | -20/+11 |
| Restored LR finder | Volpeon | 2023-01-20 | 1 | -10/+25 |
| Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 1 | -14/+14 |
| Update | Volpeon | 2023-01-17 | 1 | -4/+8 |
| Update | Volpeon | 2023-01-17 | 1 | -5/+14 |
| Training update | Volpeon | 2023-01-16 | 1 | -3/+1 |
| Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 1 | -3/+14 |
| More training adjustments | Volpeon | 2023-01-16 | 1 | -2/+3 |
| Handle empty validation dataset | Volpeon | 2023-01-16 | 1 | -45/+56 |
| Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 1 | -0/+1 |
| Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 1 | -3/+4 |
| Restored functional trainer | Volpeon | 2023-01-15 | 1 | -24/+78 |
| Update | Volpeon | 2023-01-15 | 1 | -77/+23 |
| Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 1 | -34/+29 |
| Added functional TI strategy | Volpeon | 2023-01-15 | 1 | -0/+118 |
| Added functional trainer | Volpeon | 2023-01-15 | 1 | -1/+74 |
| Update | Volpeon | 2023-01-14 | 1 | -10/+24 |
| Update | Volpeon | 2023-01-14 | 1 | -10/+9 |
| WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 1 | -0/+365 |
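For context on the noise-related entries above: the 2023-02-13 commit links the CrossLabs post on offset noise, and the later "low freq noise" and "Perlin noise" entries iterate on the same theme of adding low-frequency structure to the training noise. A minimal sketch of the general offset-noise idea follows; it is illustrative only and not this repository's actual training code, and names such as `latents` and `offset_strength` are placeholders.

```python
import torch


def offset_noise(latents: torch.Tensor, offset_strength: float = 0.1) -> torch.Tensor:
    """Sample training noise with a per-channel constant offset.

    Standard diffusion training draws noise = torch.randn_like(latents).
    Offset noise (per the linked CrossLabs post) adds a low-frequency
    component: one random value per sample and channel, broadcast over
    the spatial dimensions.
    """
    noise = torch.randn_like(latents)
    # One scalar per (batch, channel), broadcast across height and width.
    offset = torch.randn(
        latents.shape[0], latents.shape[1], 1, 1,
        device=latents.device, dtype=latents.dtype,
    )
    return noise + offset_strength * offset
```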