| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -11/+27 |
| Back to xformers | Volpeon | 2023-02-17 | 1 | -3/+2 |
| Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 1 | -2/+4 |
| Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -1/+2 |
| Update | Volpeon | 2023-02-15 | 1 | -0/+1 |
| Update | Volpeon | 2023-02-13 | 1 | -8/+11 |
| Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 1 | -1/+1 |
| Fixed Lora training | Volpeon | 2023-02-08 | 1 | -6/+6 |
| Add Lora | Volpeon | 2023-02-07 | 1 | -4/+6 |
| Restored LR finder | Volpeon | 2023-01-20 | 1 | -2/+19 |
| Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 1 | -3/+3 |
| Smaller emb decay | Volpeon | 2023-01-17 | 1 | -1/+1 |
| Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 1 | -12/+4 |
| Update | Volpeon | 2023-01-17 | 1 | -49/+64 |
| Training update | Volpeon | 2023-01-16 | 1 | -3/+5 |
| If valid set size is 0, re-use one image from train set | Volpeon | 2023-01-16 | 1 | -5/+1 |
| Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 1 | -107/+114 |
| More training adjustments | Volpeon | 2023-01-16 | 1 | -5/+12 |
| Handle empty validation dataset | Volpeon | 2023-01-16 | 1 | -6/+3 |
| Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 1 | -40/+22 |
| Added Dreambooth strategy | Volpeon | 2023-01-15 | 1 | -23/+23 |
| Restored functional trainer | Volpeon | 2023-01-15 | 1 | -61/+21 |
| Update | Volpeon | 2023-01-15 | 1 | -36/+38 |
| Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 1 | -48/+1 |
| Added functional TI strategy | Volpeon | 2023-01-15 | 1 | -78/+30 |
| Added functional trainer | Volpeon | 2023-01-15 | 1 | -26/+23 |
| Update | Volpeon | 2023-01-14 | 1 | -5/+5 |
| Update | Volpeon | 2023-01-14 | 1 | -3/+4 |
| WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 1 | -59/+15 |
| TI: Prepare UNet with Accelerate as well | Volpeon | 2023-01-14 | 1 | -12/+15 |
| Fix | Volpeon | 2023-01-14 | 1 | -2/+2 |
| Cleanup | Volpeon | 2023-01-14 | 1 | -21/+12 |
| Unified training script structure | Volpeon | 2023-01-13 | 1 | -3/+6 |
| Reverted modularization mostly | Volpeon | 2023-01-13 | 1 | -81/+386 |
| More modularization | Volpeon | 2023-01-13 | 1 | -409/+70 |
| Simplified step calculations | Volpeon | 2023-01-13 | 1 | -13/+11 |
| Removed PromptProcessor, modularized training loop | Volpeon | 2023-01-13 | 1 | -215/+53 |
| Added TI decay start offset | Volpeon | 2023-01-13 | 1 | -2/+8 |
| Code deduplication | Volpeon | 2023-01-13 | 1 | -60/+26 |
| Update | Volpeon | 2023-01-12 | 1 | -16/+14 |
| Fixed TI decay | Volpeon | 2023-01-12 | 1 | -8/+3 |
| Disable Adam weight decay | Volpeon | 2023-01-12 | 1 | -1/+1 |
| Fix | Volpeon | 2023-01-11 | 1 | -3/+3 |
| Heck | Volpeon | 2023-01-11 | 1 | -1/+1 |
| TI: Use grad clipping from LoRA #104 | Volpeon | 2023-01-11 | 1 | -8/+11 |
| Better defaults | Volpeon | 2023-01-10 | 1 | -4/+4 |
| Fix | Volpeon | 2023-01-10 | 1 | -1/+1 |
| Added arg to disable tag shuffling | Volpeon | 2023-01-10 | 1 | -6/+16 |
| Enable buckets for validation, fixed validation repeat arg | Volpeon | 2023-01-09 | 1 | -4/+1 |
| Add --valid_set_repeat | Volpeon | 2023-01-09 | 1 | -0/+22 |