| | Commit message | Author | Age | Files | Lines | |
|---|---|---|---|---|---|---|
| ... | | | | | | |
* | Fix | Volpeon | 2023-01-17 | 1 | -1/+0 | |
* | Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 2 | -21/+9 | |
* | Update | Volpeon | 2023-01-17 | 4 | -9/+12 | |
* | Update | Volpeon | 2023-01-17 | 6 | -73/+104 | |
* | Training update | Volpeon | 2023-01-16 | 5 | -16/+25 | |
* | If valid set size is 0, re-use one image from train set | Volpeon | 2023-01-16 | 2 | -6/+2 | |
* | Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 4 | -244/+131 | |
* | More training adjustments | Volpeon | 2023-01-16 | 6 | -43/+101 | |
* | Pad dataset if len(items) < batch_size | Volpeon | 2023-01-16 | 2 | -20/+23 | |
* | Handle empty validation dataset | Volpeon | 2023-01-16 | 6 | -76/+91 | |
* | Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 3 | -71/+84 | |
* | Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 4 | -372/+200 | |
* | Added Dreambooth strategy | Volpeon | 2023-01-15 | 2 | -23/+206 | |
* | Restored functional trainer | Volpeon | 2023-01-15 | 5 | -104/+112 | |
* | Fixed Conda env | Volpeon | 2023-01-15 | 1 | -2/+4 | |
* | Update | Volpeon | 2023-01-15 | 5 | -162/+106 | |
* | Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 7 | -1470/+40 | |
* | Added functional TI strategy | Volpeon | 2023-01-15 | 3 | -78/+312 | |
* | Added functional trainer | Volpeon | 2023-01-15 | 3 | -37/+101 | |
* | Update | Volpeon | 2023-01-14 | 6 | -127/+33 | |
* | Update | Volpeon | 2023-01-14 | 4 | -15/+15 | |
* | WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 11 | -286/+1541 | |
* | TI: Prepare UNet with Accelerate as well | Volpeon | 2023-01-14 | 3 | -39/+41 | |
* | Fix | Volpeon | 2023-01-14 | 4 | -6/+6 | |
* | Cleanup | Volpeon | 2023-01-14 | 7 | -131/+103 | |
* | Unified training script structure | Volpeon | 2023-01-13 | 2 | -130/+84 | |
* | Reverted modularization mostly | Volpeon | 2023-01-13 | 7 | -613/+458 | |
* | More modularization | Volpeon | 2023-01-13 | 9 | -653/+677 | |
* | Simplified step calculations | Volpeon | 2023-01-13 | 2 | -33/+33 | |
* | Removed PromptProcessor, modularized training loop | Volpeon | 2023-01-13 | 9 | -293/+334 | |
* | Added TI decay start offset | Volpeon | 2023-01-13 | 2 | -3/+9 | |
* | Code deduplication | Volpeon | 2023-01-13 | 6 | -146/+149 | |
* | Update | Volpeon | 2023-01-12 | 3 | -34/+63 | |
* | Fixed TI decay | Volpeon | 2023-01-12 | 2 | -9/+12 | |
* | Disable Adam weight decay | Volpeon | 2023-01-12 | 1 | -1/+1 | |
* | Fix | Volpeon | 2023-01-11 | 2 | -5/+5 | |
* | Heck | Volpeon | 2023-01-11 | 1 | -1/+1 | |
* | TI: Use grad clipping from LoRA #104 | Volpeon | 2023-01-11 | 4 | -12/+15 | |
* | Better defaults | Volpeon | 2023-01-10 | 2 | -7/+6 | |
* | Fix | Volpeon | 2023-01-10 | 2 | -2/+2 | |
* | Added arg to disable tag shuffling | Volpeon | 2023-01-10 | 4 | -18/+37 | |
* | Enable buckets for validation, fixed validation repeat arg | Volpeon | 2023-01-09 | 3 | -10/+5 | |
* | Add --valid_set_repeat | Volpeon | 2023-01-09 | 3 | -1/+37 | |
* | No cache after all | Volpeon | 2023-01-08 | 1 | -17/+7 | |
* | Cache token IDs in dataset | Volpeon | 2023-01-08 | 1 | -8/+20 | |
* | Fix | Volpeon | 2023-01-08 | 2 | -6/+6 | |
* | Improved aspect ratio bucketing | Volpeon | 2023-01-08 | 3 | -4/+61 | |
* | Fixed aspect ratio bucketing | Volpeon | 2023-01-08 | 1 | -3/+5 | |
* | Cleanup | Volpeon | 2023-01-08 | 1 | -15/+25 | |
* | Fixed aspect ratio bucketing; allow passing token IDs to pipeline | Volpeon | 2023-01-08 | 5 | -68/+102 | |