| Commit message | Author | Date | Files | Lines (-/+) |
|---|---|---|---|---|
| ... | | | | |
| Update | Volpeon | 2023-02-15 | 3 | -1/+3 |
| Improved batch padding | Volpeon | 2023-02-15 | 1 | -29/+26 |
| Better batch filling | Volpeon | 2023-02-15 | 1 | -3/+6 |
| Better batch filling behavior | Volpeon | 2023-02-15 | 1 | -3/+7 |
| Dataset: Repeat data to fill batch to batch_size | Volpeon | 2023-02-15 | 1 | -0/+3 |
| Made low-freq noise configurable | Volpeon | 2023-02-14 | 1 | -6/+11 |
| Better noise generation during training: https://www.crosslabs.org/blog/diffu... | Volpeon | 2023-02-13 | 1 | -0/+7 |
| Update | Volpeon | 2023-02-13 | 10 | -65/+73 |
| Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 4 | -17/+164 |
| Fixed Lora training | Volpeon | 2023-02-08 | 4 | -37/+35 |
| Fix Lora memory usage | Volpeon | 2023-02-07 | 5 | -11/+5 |
| Add Lora | Volpeon | 2023-02-07 | 10 | -93/+819 |
| Restored LR finder | Volpeon | 2023-01-20 | 9 | -392/+106 |
| Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 4 | -19/+51 |
| Update | Volpeon | 2023-01-17 | 5 | -22/+25 |
| Optimized embedding normalization | Volpeon | 2023-01-17 | 1 | -5/+2 |
| Smaller emb decay | Volpeon | 2023-01-17 | 1 | -1/+1 |
| Fix | Volpeon | 2023-01-17 | 1 | -4/+5 |
| Fix | Volpeon | 2023-01-17 | 1 | -1/+0 |
| Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 2 | -21/+9 |
| Update | Volpeon | 2023-01-17 | 4 | -9/+12 |
| Update | Volpeon | 2023-01-17 | 6 | -71/+102 |
| Training update | Volpeon | 2023-01-16 | 5 | -16/+25 |
| If valid set size is 0, re-use one image from train set | Volpeon | 2023-01-16 | 2 | -6/+2 |
| Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 4 | -243/+130 |
| More training adjustments | Volpeon | 2023-01-16 | 6 | -43/+101 |
| Pad dataset if len(items) < batch_size | Volpeon | 2023-01-16 | 2 | -20/+23 |
| Handle empty validation dataset | Volpeon | 2023-01-16 | 6 | -72/+87 |
| Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 3 | -71/+84 |
| Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 4 | -367/+195 |
| Added Dreambooth strategy | Volpeon | 2023-01-15 | 2 | -23/+206 |
| Restored functional trainer | Volpeon | 2023-01-15 | 5 | -104/+112 |
| Fixed Conda env | Volpeon | 2023-01-15 | 1 | -2/+4 |
| Update | Volpeon | 2023-01-15 | 5 | -162/+106 |
| Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 7 | -1470/+40 |
| Added functional TI strategy | Volpeon | 2023-01-15 | 3 | -78/+312 |
| Added functional trainer | Volpeon | 2023-01-15 | 3 | -37/+101 |
| Update | Volpeon | 2023-01-14 | 6 | -127/+33 |
| Update | Volpeon | 2023-01-14 | 4 | -15/+15 |
| WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 11 | -279/+1534 |
| TI: Prepare UNet with Accelerate as well | Volpeon | 2023-01-14 | 3 | -39/+41 |
| Fix | Volpeon | 2023-01-14 | 4 | -6/+6 |
| Cleanup | Volpeon | 2023-01-14 | 7 | -123/+95 |
| Unified training script structure | Volpeon | 2023-01-13 | 2 | -130/+84 |
| Reverted modularization mostly | Volpeon | 2023-01-13 | 7 | -611/+456 |
| More modularization | Volpeon | 2023-01-13 | 9 | -651/+675 |
| Simplified step calculations | Volpeon | 2023-01-13 | 2 | -33/+33 |
| Removed PromptProcessor, modularized training loop | Volpeon | 2023-01-13 | 9 | -293/+334 |
| Added TI decay start offset | Volpeon | 2023-01-13 | 2 | -3/+9 |
| Code deduplication | Volpeon | 2023-01-13 | 6 | -146/+149 |
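The batch-filling commits ("Pad dataset if len(items) < batch_size", 2023-01-16, and "Dataset: Repeat data to fill batch to batch_size", 2023-02-15) describe repeating items so that a dataset smaller than the batch size still yields a full batch. A minimal sketch of that idea, assuming a plain Python list of items; the function name is hypothetical and not taken from this repository:

```python
import math

def fill_to_batch_size(items: list, batch_size: int) -> list:
    """Repeat items until at least one full batch can be formed.

    Hypothetical helper illustrating the batch-filling commits; the
    actual repository code may differ.
    """
    if not items or len(items) >= batch_size:
        return items
    # Repeat the whole list enough times, then trim to exactly one batch.
    repeats = math.ceil(batch_size / len(items))
    return (items * repeats)[:batch_size]
```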

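The noise-generation commits ("Better noise generation during training", 2023-02-13, and "Made low-freq noise configurable", 2023-02-14) link to the Cross Labs post on offset noise, where a per-channel constant is mixed into the training noise so the model also learns whole-image (low-frequency) shifts such as overall brightness. A minimal sketch of that technique, assuming latents shaped (batch, channels, height, width); the function name and the 0.1 default strength are assumptions, not values from this repository:

```python
import torch

def sample_offset_noise(latents: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Gaussian noise plus a per-channel constant offset ("offset noise").

    Sketch of the technique from the linked Cross Labs post; parameter
    names and the default strength are assumptions.
    """
    noise = torch.randn_like(latents)
    # One random scalar per (sample, channel), broadcast over H and W,
    # so low-frequency shifts also carry gradient signal during training.
    offset = torch.randn(
        latents.shape[0], latents.shape[1], 1, 1,
        device=latents.device, dtype=latents.dtype,
    )
    return noise + strength * offset
```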