| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 4 | -9/+8 |
| Fix | Volpeon | 2023-02-16 | 1 | -4/+2 |
| Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -8/+22 |
| Update | Volpeon | 2023-02-15 | 1 | -1/+1 |
| Made low-freq noise configurable | Volpeon | 2023-02-14 | 1 | -6/+11 |
| Better noise generation during training: https://www.crosslabs.org/blog/diffusion-with-offset-noise | Volpeon | 2023-02-13 | 1 | -0/+7 |
| Update | Volpeon | 2023-02-13 | 3 | -3/+3 |
| Fixed Lora training | Volpeon | 2023-02-08 | 1 | -18/+5 |
| Fix Lora memory usage | Volpeon | 2023-02-07 | 4 | -9/+3 |
| Add Lora | Volpeon | 2023-02-07 | 4 | -37/+214 |
| Restored LR finder | Volpeon | 2023-01-20 | 6 | -393/+82 |
| Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 3 | -16/+48 |
| Update | Volpeon | 2023-01-17 | 4 | -14/+19 |
| Fix | Volpeon | 2023-01-17 | 1 | -4/+5 |
| Fix | Volpeon | 2023-01-17 | 1 | -1/+0 |
| Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 1 | -9/+5 |
| Update | Volpeon | 2023-01-17 | 2 | -8/+8 |
| Update | Volpeon | 2023-01-17 | 4 | -21/+38 |
| Training update | Volpeon | 2023-01-16 | 3 | -12/+15 |
| Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 1 | -3/+14 |
| More training adjustments | Volpeon | 2023-01-16 | 3 | -8/+9 |
| Handle empty validation dataset | Volpeon | 2023-01-16 | 3 | -47/+58 |
| Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 1 | -0/+1 |
| Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 1 | -3/+4 |
| Added Dreambooth strategy | Volpeon | 2023-01-15 | 1 | -0/+183 |
| Restored functional trainer | Volpeon | 2023-01-15 | 2 | -27/+83 |
| Update | Volpeon | 2023-01-15 | 3 | -119/+64 |
| Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 2 | -44/+39 |
| Added functional TI strategy | Volpeon | 2023-01-15 | 2 | -0/+282 |
| Added functional trainer | Volpeon | 2023-01-15 | 1 | -1/+74 |
| Update | Volpeon | 2023-01-14 | 2 | -122/+24 |
| Update | Volpeon | 2023-01-14 | 1 | -10/+9 |
| WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 3 | -220/+130 |
| TI: Prepare UNet with Accelerate as well | Volpeon | 2023-01-14 | 2 | -27/+26 |
| Fix | Volpeon | 2023-01-14 | 1 | -1/+1 |
| Cleanup | Volpeon | 2023-01-14 | 2 | -60/+63 |
| Reverted modularization mostly | Volpeon | 2023-01-13 | 5 | -531/+70 |
| More modularization | Volpeon | 2023-01-13 | 6 | -32/+541 |
| Simplified step calculations | Volpeon | 2023-01-13 | 1 | -20/+22 |
| Removed PromptProcessor, modularized training loop | Volpeon | 2023-01-13 | 2 | -10/+208 |
| Code deduplication | Volpeon | 2023-01-13 | 1 | -0/+55 |
| Update | Volpeon | 2023-01-12 | 1 | -4/+7 |
| Fix | Volpeon | 2023-01-11 | 1 | -2/+2 |
| TI: Use grad clipping from LoRA #104 | Volpeon | 2023-01-11 | 1 | -1/+1 |
| Added arg to disable tag shuffling | Volpeon | 2023-01-10 | 1 | -10/+10 |
| Fixed aspect ratio bucketing; allow passing token IDs to pipeline | Volpeon | 2023-01-08 | 1 | -6/+8 |
| Improved aspect ratio bucketing | Volpeon | 2023-01-08 | 1 | -1/+1 |
| Cleanup | Volpeon | 2023-01-07 | 1 | -0/+54 |
| Made aspect ratio bucketing configurable | Volpeon | 2023-01-07 | 1 | -7/+2 |
| Added progressive aspect ratio bucketing | Volpeon | 2023-01-07 | 1 | -2/+2 |
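The "Better noise generation during training" commit above links to the Cross Labs post on offset noise. The repository's actual implementation isn't shown here, so as a rough sketch only: the technique adds a per-channel constant ("low-frequency") noise term on top of the usual per-pixel Gaussian noise, with a strength factor that the later "Made low-freq noise configurable" commit presumably exposes. The helper name `offset_noise` and the `offset` parameter are hypothetical.

```python
import torch


def offset_noise(latents: torch.Tensor, offset: float = 0.1) -> torch.Tensor:
    """Sample training noise with a low-frequency per-channel offset.

    Hypothetical helper; the `offset` strength is an assumption, not a value
    taken from this repository.
    """
    # Standard per-pixel Gaussian noise, same shape as the latents.
    noise = torch.randn_like(latents)
    # One extra Gaussian sample per (batch, channel), broadcast over H and W,
    # which shifts whole channels at once and lets the model learn to move
    # the overall brightness of an image away from the mean.
    low_freq = torch.randn(
        latents.shape[0], latents.shape[1], 1, 1,
        device=latents.device, dtype=latents.dtype,
    )
    return noise + offset * low_freq
```

With `offset=0` this reduces to plain `torch.randn_like`, so the configurable factor interpolates between standard and offset noise.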