Commit message (author, date, files changed, lines removed/added)
...
* Fix (Volpeon, 2023-01-17, 1 file, -1/+0)
* Make embedding decay work like Adam decay (Volpeon, 2023-01-17, 2 files, -21/+9)
* Update (Volpeon, 2023-01-17, 4 files, -9/+12)
* Update (Volpeon, 2023-01-17, 6 files, -73/+104)
* Training update (Volpeon, 2023-01-16, 5 files, -16/+25)
* If valid set size is 0, re-use one image from train set (Volpeon, 2023-01-16, 2 files, -6/+2)
* Moved multi-TI code from Dreambooth to TI script (Volpeon, 2023-01-16, 4 files, -244/+131)
* More training adjustments (Volpeon, 2023-01-16, 6 files, -43/+101)
* Pad dataset if len(items) < batch_size (Volpeon, 2023-01-16, 2 files, -20/+23)
* Handle empty validation dataset (Volpeon, 2023-01-16, 6 files, -76/+91)
* Extended Dreambooth: Train TI tokens separately (Volpeon, 2023-01-16, 3 files, -71/+84)
* Implemented extended Dreambooth training (Volpeon, 2023-01-16, 4 files, -372/+200)
* Added Dreambooth strategy (Volpeon, 2023-01-15, 2 files, -23/+206)
* Restored functional trainer (Volpeon, 2023-01-15, 5 files, -104/+112)
* Fixed Conda env (Volpeon, 2023-01-15, 1 file, -2/+4)
* Update (Volpeon, 2023-01-15, 5 files, -162/+106)
* Removed unused code, put training callbacks in dataclass (Volpeon, 2023-01-15, 7 files, -1470/+40)
* Added functional TI strategy (Volpeon, 2023-01-15, 3 files, -78/+312)
* Added functional trainer (Volpeon, 2023-01-15, 3 files, -37/+101)
* Update (Volpeon, 2023-01-14, 6 files, -127/+33)
* Update (Volpeon, 2023-01-14, 4 files, -15/+15)
* WIP: Modularization ("free(): invalid pointer" my ass) (Volpeon, 2023-01-14, 11 files, -286/+1541)
* TI: Prepare UNet with Accelerate as well (Volpeon, 2023-01-14, 3 files, -39/+41)
* Fix (Volpeon, 2023-01-14, 4 files, -6/+6)
* Cleanup (Volpeon, 2023-01-14, 7 files, -131/+103)
* Unified training script structure (Volpeon, 2023-01-13, 2 files, -130/+84)
* Reverted modularization mostly (Volpeon, 2023-01-13, 7 files, -613/+458)
* More modularization (Volpeon, 2023-01-13, 9 files, -653/+677)
* Simplified step calculations (Volpeon, 2023-01-13, 2 files, -33/+33)
* Removed PromptProcessor, modularized training loop (Volpeon, 2023-01-13, 9 files, -293/+334)
* Added TI decay start offset (Volpeon, 2023-01-13, 2 files, -3/+9)
* Code deduplication (Volpeon, 2023-01-13, 6 files, -146/+149)
* Update (Volpeon, 2023-01-12, 3 files, -34/+63)
* Fixed TI decay (Volpeon, 2023-01-12, 2 files, -9/+12)
* Disable Adam weight decay (Volpeon, 2023-01-12, 1 file, -1/+1)
* Fix (Volpeon, 2023-01-11, 2 files, -5/+5)
* Heck (Volpeon, 2023-01-11, 1 file, -1/+1)
* TI: Use grad clipping from LoRA #104 (Volpeon, 2023-01-11, 4 files, -12/+15)
* Better defaults (Volpeon, 2023-01-10, 2 files, -7/+6)
* Fix (Volpeon, 2023-01-10, 2 files, -2/+2)
* Added arg to disable tag shuffling (Volpeon, 2023-01-10, 4 files, -18/+37)
* Enable buckets for validation, fixed validation repeat arg (Volpeon, 2023-01-09, 3 files, -10/+5)
* Add --valid_set_repeat (Volpeon, 2023-01-09, 3 files, -1/+37)
* No cache after all (Volpeon, 2023-01-08, 1 file, -17/+7)
* Cache token IDs in dataset (Volpeon, 2023-01-08, 1 file, -8/+20)
* Fix (Volpeon, 2023-01-08, 2 files, -6/+6)
* Improved aspect ratio bucketing (Volpeon, 2023-01-08, 3 files, -4/+61)
* Fixed aspect ratio bucketing (Volpeon, 2023-01-08, 1 file, -3/+5)
* Cleanup (Volpeon, 2023-01-08, 1 file, -15/+25)
* Fixed aspect ratio bucketing; allow passing token IDs to pipeline (Volpeon, 2023-01-08, 5 files, -68/+102)