path: root/train_dreambooth.py
Commit message | Author | Age | Files | Lines
* Update | Volpeon | 2023-04-09 | 1 | -0/+1
* Update | Volpeon | 2023-04-08 | 1 | -4/+1
* Add color jitter | Volpeon | 2023-04-05 | 1 | -3/+12
* Fix choice args | Volpeon | 2023-04-04 | 1 | -8/+9
* Bring back Lion optimizer | Volpeon | 2023-04-03 | 1 | -3/+27
* Update | Volpeon | 2023-04-01 | 1 | -1/+0
* Add support for Adafactor, add TI initializer noise | Volpeon | 2023-04-01 | 1 | -1/+15
* Update | Volpeon | 2023-03-31 | 1 | -1/+3
* Update | Volpeon | 2023-03-31 | 1 | -0/+7
* Fix | Volpeon | 2023-03-31 | 1 | -2/+2
* Support Dadaptation d0, adjust sample freq when steps instead of epochs are used | Volpeon | 2023-03-31 | 1 | -4/+11
* Fix | Volpeon | 2023-03-31 | 1 | -1/+2
* Fix | Volpeon | 2023-03-28 | 1 | -1/+1
* Support num_train_steps arg again | Volpeon | 2023-03-28 | 1 | -6/+11
* Improved inverted tokens | Volpeon | 2023-03-26 | 1 | -0/+1
* Update | Volpeon | 2023-03-25 | 1 | -4/+10
* Update | Volpeon | 2023-03-24 | 1 | -0/+7
* Update | Volpeon | 2023-03-23 | 1 | -4/+7
* Log DAdam/DAdan d | Volpeon | 2023-03-21 | 1 | -2/+2
* Added dadaptation | Volpeon | 2023-03-21 | 1 | -0/+28
* Pipeline: Perlin noise for init image | Volpeon | 2023-03-04 | 1 | -1/+1
* Removed offset noise from training, added init offset to pipeline | Volpeon | 2023-03-03 | 1 | -1/+0
* Implemented different noise offset | Volpeon | 2023-03-03 | 1 | -2/+2
* Update | Volpeon | 2023-03-01 | 1 | -3/+3
* Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -1/+1
* Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -9/+2
* Update | Volpeon | 2023-02-18 | 1 | -3/+5
* Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -11/+27
* Back to xformers | Volpeon | 2023-02-17 | 1 | -2/+2
* Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 1 | -2/+2
* Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -1/+2
* Update | Volpeon | 2023-02-13 | 1 | -5/+5
* Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 1 | -1/+1
* Fixed Lora training | Volpeon | 2023-02-08 | 1 | -6/+6
* Add Lora | Volpeon | 2023-02-07 | 1 | -45/+1
* Restored LR finder | Volpeon | 2023-01-20 | 1 | -1/+9
* Update | Volpeon | 2023-01-17 | 1 | -8/+6
* Update | Volpeon | 2023-01-17 | 1 | -3/+2
* Training update | Volpeon | 2023-01-16 | 1 | -1/+5
* Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 1 | -133/+2
* More training adjustments | Volpeon | 2023-01-16 | 1 | -12/+59
* Pad dataset if len(items) < batch_size | Volpeon | 2023-01-16 | 1 | -20/+20
* Handle empty validation dataset | Volpeon | 2023-01-16 | 1 | -3/+3
* Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 1 | -71/+76
* Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 1 | -329/+155
* WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 1 | -2/+1
* Fix | Volpeon | 2023-01-14 | 1 | -2/+2
* Cleanup | Volpeon | 2023-01-14 | 1 | -21/+12
* Unified training script structure | Volpeon | 2023-01-13 | 1 | -127/+78
* Reverted modularization mostly | Volpeon | 2023-01-13 | 1 | -1/+2