Commit message | Author | Date | Files | Lines (-/+)
---|---|---|---|---
Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 4 | -17/+164
Fixed Lora training | Volpeon | 2023-02-08 | 4 | -37/+35
Fix Lora memory usage | Volpeon | 2023-02-07 | 5 | -11/+5
Add Lora | Volpeon | 2023-02-07 | 10 | -93/+819
Restored LR finder | Volpeon | 2023-01-20 | 9 | -397/+111
Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 4 | -19/+51
Update | Volpeon | 2023-01-17 | 5 | -22/+25
Optimized embedding normalization | Volpeon | 2023-01-17 | 1 | -5/+2
Smaller emb decay | Volpeon | 2023-01-17 | 1 | -1/+1
Fix | Volpeon | 2023-01-17 | 1 | -4/+5
Fix | Volpeon | 2023-01-17 | 1 | -1/+0
Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 2 | -21/+9
Update | Volpeon | 2023-01-17 | 4 | -9/+12
Update | Volpeon | 2023-01-17 | 6 | -73/+104
Training update | Volpeon | 2023-01-16 | 5 | -16/+25
If valid set size is 0, re-use one image from train set | Volpeon | 2023-01-16 | 2 | -6/+2
Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 4 | -244/+131
More training adjustments | Volpeon | 2023-01-16 | 6 | -43/+101
Pad dataset if len(items) < batch_size | Volpeon | 2023-01-16 | 2 | -20/+23
Handle empty validation dataset | Volpeon | 2023-01-16 | 6 | -76/+91
Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 3 | -71/+84
Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 4 | -372/+200
Added Dreambooth strategy | Volpeon | 2023-01-15 | 2 | -23/+206
Restored functional trainer | Volpeon | 2023-01-15 | 5 | -104/+112
Fixed Conda env | Volpeon | 2023-01-15 | 1 | -2/+4
Update | Volpeon | 2023-01-15 | 5 | -162/+106
Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 7 | -1470/+40
Added functional TI strategy | Volpeon | 2023-01-15 | 3 | -78/+312
Added functional trainer | Volpeon | 2023-01-15 | 3 | -37/+101
Update | Volpeon | 2023-01-14 | 6 | -127/+33
Update | Volpeon | 2023-01-14 | 4 | -15/+15
WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 11 | -286/+1541
TI: Prepare UNet with Accelerate as well | Volpeon | 2023-01-14 | 3 | -39/+41
Fix | Volpeon | 2023-01-14 | 4 | -6/+6
Cleanup | Volpeon | 2023-01-14 | 7 | -131/+103
Unified training script structure | Volpeon | 2023-01-13 | 2 | -130/+84
Reverted modularization mostly | Volpeon | 2023-01-13 | 7 | -613/+458
More modularization | Volpeon | 2023-01-13 | 9 | -653/+677
Simplified step calculations | Volpeon | 2023-01-13 | 2 | -33/+33
Removed PromptProcessor, modularized training loop | Volpeon | 2023-01-13 | 9 | -293/+334
Added TI decay start offset | Volpeon | 2023-01-13 | 2 | -3/+9
Code deduplication | Volpeon | 2023-01-13 | 6 | -146/+149
Update | Volpeon | 2023-01-12 | 3 | -34/+63
Fixed TI decay | Volpeon | 2023-01-12 | 2 | -9/+12
Disable Adam weight decay | Volpeon | 2023-01-12 | 1 | -1/+1
Fix | Volpeon | 2023-01-11 | 2 | -5/+5
Heck | Volpeon | 2023-01-11 | 1 | -1/+1
TI: Use grad clipping from LoRA #104 | Volpeon | 2023-01-11 | 4 | -12/+15
Better defaults | Volpeon | 2023-01-10 | 2 | -7/+6
Fix | Volpeon | 2023-01-10 | 2 | -2/+2