path: root/train_ti.py
Commit message                                                                | Author  | Age        | Files | Lines
------------------------------------------------------------------------------|---------|------------|-------|----------
...                                                                           |         |            |       |
Added dadaptation                                                             | Volpeon | 2023-03-21 | 1     | -0/+28
Fixed SNR weighting, re-enabled xformers                                      | Volpeon | 2023-03-21 | 1     | -2/+2
Test: https://arxiv.org/pdf/2303.09556.pdf                                    | Volpeon | 2023-03-17 | 1     | -12/+26
Update                                                                        | Volpeon | 2023-03-07 | 1     | -4/+4
Pipeline: Perlin noise for init image                                         | Volpeon | 2023-03-04 | 1     | -1/+1
Removed offset noise from training, added init offset to pipeline             | Volpeon | 2023-03-03 | 1     | -1/+0
Implemented different noise offset                                            | Volpeon | 2023-03-03 | 1     | -2/+2
Update                                                                        | Volpeon | 2023-03-01 | 1     | -1/+1
Don't rely on Accelerate for gradient accumulation                            | Volpeon | 2023-02-21 | 1     | -1/+1
Embedding normalization: Ignore tensors with grad = 0                         | Volpeon | 2023-02-21 | 1     | -11/+10
Update                                                                        | Volpeon | 2023-02-18 | 1     | -2/+4
Added Lion optimizer                                                          | Volpeon | 2023-02-17 | 1     | -11/+27
Back to xformers                                                              | Volpeon | 2023-02-17 | 1     | -3/+2
Remove xformers, switch to Pytorch Nightly                                    | Volpeon | 2023-02-17 | 1     | -2/+4
Integrated WIP UniPC scheduler                                                | Volpeon | 2023-02-16 | 1     | -1/+2
Update                                                                        | Volpeon | 2023-02-15 | 1     | -0/+1
Update                                                                        | Volpeon | 2023-02-13 | 1     | -8/+11
Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline  | Volpeon | 2023-02-08 | 1     | -1/+1
Fixed Lora training                                                           | Volpeon | 2023-02-08 | 1     | -6/+6
Add Lora                                                                      | Volpeon | 2023-02-07 | 1     | -4/+6
Restored LR finder                                                            | Volpeon | 2023-01-20 | 1     | -2/+19
Move Accelerator preparation into strategy                                    | Volpeon | 2023-01-19 | 1     | -3/+3
Smaller emb decay                                                             | Volpeon | 2023-01-17 | 1     | -1/+1
Make embedding decay work like Adam decay                                     | Volpeon | 2023-01-17 | 1     | -12/+4
Update                                                                        | Volpeon | 2023-01-17 | 1     | -49/+64
Training update                                                               | Volpeon | 2023-01-16 | 1     | -3/+5
If valid set size is 0, re-use one image from train set                       | Volpeon | 2023-01-16 | 1     | -5/+1
Moved multi-TI code from Dreambooth to TI script                              | Volpeon | 2023-01-16 | 1     | -107/+114
More training adjustments                                                     | Volpeon | 2023-01-16 | 1     | -5/+12
Handle empty validation dataset                                               | Volpeon | 2023-01-16 | 1     | -6/+3
Implemented extended Dreambooth training                                      | Volpeon | 2023-01-16 | 1     | -40/+22
Added Dreambooth strategy                                                     | Volpeon | 2023-01-15 | 1     | -23/+23
Restored functional trainer                                                   | Volpeon | 2023-01-15 | 1     | -61/+21
Update                                                                        | Volpeon | 2023-01-15 | 1     | -36/+38
Removed unused code, put training callbacks in dataclass                      | Volpeon | 2023-01-15 | 1     | -48/+1
Added functional TI strategy                                                  | Volpeon | 2023-01-15 | 1     | -78/+30
Added functional trainer                                                      | Volpeon | 2023-01-15 | 1     | -26/+23
Update                                                                        | Volpeon | 2023-01-14 | 1     | -5/+5
Update                                                                        | Volpeon | 2023-01-14 | 1     | -3/+4
WIP: Modularization ("free(): invalid pointer" my ass)                        | Volpeon | 2023-01-14 | 1     | -59/+15
TI: Prepare UNet with Accelerate as well                                      | Volpeon | 2023-01-14 | 1     | -12/+15
Fix                                                                           | Volpeon | 2023-01-14 | 1     | -2/+2
Cleanup                                                                       | Volpeon | 2023-01-14 | 1     | -21/+12
Unified training script structure                                             | Volpeon | 2023-01-13 | 1     | -3/+6
Reverted modularization mostly                                                | Volpeon | 2023-01-13 | 1     | -81/+386
More modularization                                                           | Volpeon | 2023-01-13 | 1     | -409/+70
Simplified step calculations                                                  | Volpeon | 2023-01-13 | 1     | -13/+11
Removed PromptProcessor, modularized training loop                            | Volpeon | 2023-01-13 | 1     | -215/+53
Added TI decay start offset                                                   | Volpeon | 2023-01-13 | 1     | -2/+8
Code deduplication                                                            | Volpeon | 2023-01-13 | 1     | -60/+26