path: root/training
Commit message | Author | Date | Files | Lines
...
* Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 2 | -30/+29
* Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 2 | -6/+16
* Update | Volpeon | 2023-02-18 | 1 | -7/+14
* Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -4/+5
* Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 4 | -9/+8
* Fix | Volpeon | 2023-02-16 | 1 | -4/+2
* Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -8/+22
* Update | Volpeon | 2023-02-15 | 1 | -1/+1
* Made low-freq noise configurable | Volpeon | 2023-02-14 | 1 | -6/+11
* Better noise generation during training: https://www.crosslabs.org/blog/diffu... | Volpeon | 2023-02-13 | 1 | -0/+7
* Update | Volpeon | 2023-02-13 | 3 | -3/+3
* Fixed Lora training | Volpeon | 2023-02-08 | 1 | -18/+5
* Fix Lora memory usage | Volpeon | 2023-02-07 | 4 | -9/+3
* Add Lora | Volpeon | 2023-02-07 | 4 | -37/+214
* Restored LR finder | Volpeon | 2023-01-20 | 6 | -393/+82
* Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 3 | -16/+48
* Update | Volpeon | 2023-01-17 | 4 | -14/+19
* Fix | Volpeon | 2023-01-17 | 1 | -4/+5
* Fix | Volpeon | 2023-01-17 | 1 | -1/+0
* Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 1 | -9/+5
* Update | Volpeon | 2023-01-17 | 2 | -8/+8
* Update | Volpeon | 2023-01-17 | 4 | -21/+38
* Training update | Volpeon | 2023-01-16 | 3 | -12/+15
* Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 1 | -3/+14
* More training adjustments | Volpeon | 2023-01-16 | 3 | -8/+9
* Handle empty validation dataset | Volpeon | 2023-01-16 | 3 | -47/+58
* Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 1 | -0/+1
* Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 1 | -3/+4
* Added Dreambooth strategy | Volpeon | 2023-01-15 | 1 | -0/+183
* Restored functional trainer | Volpeon | 2023-01-15 | 2 | -27/+83
* Update | Volpeon | 2023-01-15 | 3 | -119/+64
* Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 2 | -44/+39
* Added functional TI strategy | Volpeon | 2023-01-15 | 2 | -0/+282
* Added functional trainer | Volpeon | 2023-01-15 | 1 | -1/+74
* Update | Volpeon | 2023-01-14 | 2 | -122/+24
* Update | Volpeon | 2023-01-14 | 1 | -10/+9
* WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 3 | -220/+130
* TI: Prepare UNet with Accelerate as well | Volpeon | 2023-01-14 | 2 | -27/+26
* Fix | Volpeon | 2023-01-14 | 1 | -1/+1
* Cleanup | Volpeon | 2023-01-14 | 2 | -60/+63
* Reverted modularization mostly | Volpeon | 2023-01-13 | 5 | -531/+70
* More modularization | Volpeon | 2023-01-13 | 6 | -32/+541
* Simplified step calculations | Volpeon | 2023-01-13 | 1 | -20/+22
* Removed PromptProcessor, modularized training loop | Volpeon | 2023-01-13 | 2 | -10/+208
* Code deduplication | Volpeon | 2023-01-13 | 1 | -0/+55
* Update | Volpeon | 2023-01-12 | 1 | -4/+7
* Fix | Volpeon | 2023-01-11 | 1 | -2/+2
* TI: Use grad clipping from LoRA #104 | Volpeon | 2023-01-11 | 1 | -1/+1
* Added arg to disable tag shuffling | Volpeon | 2023-01-10 | 1 | -10/+10
* Fixed aspect ratio bucketing; allow passing token IDs to pipeline | Volpeon | 2023-01-08 | 1 | -6/+8