path: root/train_ti.py
Age         Author   Files  Lines      Commit message
2023-04-01  Volpeon  1      -1/+0      Update
2023-04-01  Volpeon  1      -2/+23     Add support for Adafactor, add TI initializer noise
2023-03-31  Volpeon  1      -1/+3      Update
2023-03-31  Volpeon  1      -0/+7      Update
2023-03-31  Volpeon  1      -1/+2      Fix
2023-03-31  Volpeon  1      -1/+1      Fix
2023-03-31  Volpeon  1      -4/+11     Support Dadaptation d0, adjust sample freq when steps instead of epochs are used
2023-03-31  Volpeon  1      -1/+2      Fix
2023-03-28  Volpeon  1      -1/+1      Fix
2023-03-28  Volpeon  1      -9/+21     Support num_train_steps arg again
2023-03-27  Volpeon  1      -8/+8      Fix TI
2023-03-27  Volpeon  1      -1/+10     Fix TI
2023-03-27  Volpeon  1      -1/+1      Fix TI
2023-03-26  Volpeon  1      -1/+15     Improved inverted tokens
2023-03-25  Volpeon  1      -4/+10     Update
2023-03-24  Volpeon  1      -3/+3      Update
2023-03-23  Volpeon  1      -0/+7      Bring back Perlin offset noise
2023-03-23  Volpeon  1      -1/+1      Update
2023-03-22  Volpeon  1      -4/+0      Fix
2023-03-21  Volpeon  1      -2/+2      Log DAdam/DAdan d
2023-03-21  Volpeon  1      -0/+28     Added dadaptation
2023-03-21  Volpeon  1      -2/+2      Fixed SNR weighting, re-enabled xformers
2023-03-17  Volpeon  1      -12/+26    Test: https://arxiv.org/pdf/2303.09556.pdf (see the sketch after this log)
2023-03-07  Volpeon  1      -4/+4      Update
2023-03-04  Volpeon  1      -1/+1      Pipeline: Perlin noise for init image
2023-03-03  Volpeon  1      -1/+0      Removed offset noise from training, added init offset to pipeline
2023-03-03  Volpeon  1      -2/+2      Implemented different noise offset
2023-03-01  Volpeon  1      -1/+1      Update
2023-02-21  Volpeon  1      -1/+1      Don't rely on Accelerate for gradient accumulation
2023-02-21  Volpeon  1      -11/+10    Embedding normalization: Ignore tensors with grad = 0
2023-02-18  Volpeon  1      -2/+4      Update
2023-02-17  Volpeon  1      -11/+27    Added Lion optimizer
2023-02-17  Volpeon  1      -3/+2      Back to xformers
2023-02-17  Volpeon  1      -2/+4      Remove xformers, switch to Pytorch Nightly
2023-02-16  Volpeon  1      -1/+2      Integrated WIP UniPC scheduler
2023-02-15  Volpeon  1      -0/+1      Update
2023-02-13  Volpeon  1      -8/+11     Update
2023-02-08  Volpeon  1      -1/+1      Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline
2023-02-08  Volpeon  1      -6/+6      Fixed Lora training
2023-02-07  Volpeon  1      -4/+6      Add Lora
2023-01-20  Volpeon  1      -2/+19     Restored LR finder
2023-01-19  Volpeon  1      -3/+3      Move Accelerator preparation into strategy
2023-01-17  Volpeon  1      -1/+1      Smaller emb decay
2023-01-17  Volpeon  1      -12/+4     Make embedding decay work like Adam decay
2023-01-17  Volpeon  1      -49/+64    Update
2023-01-16  Volpeon  1      -3/+5      Training update
2023-01-16  Volpeon  1      -5/+1      If valid set size is 0, re-use one image from train set
2023-01-16  Volpeon  1      -107/+114  Moved multi-TI code from Dreambooth to TI script
2023-01-16  Volpeon  1      -5/+12     More training adjustments
2023-01-16  Volpeon  1      -6/+3      Handle empty validation dataset
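
The 2023-03-17 commit links the Min-SNR weighting paper (arXiv 2303.09556), and the 2023-03-21 commit mentions fixing SNR weighting. For context, here is a minimal sketch of that weighting scheme as the paper describes it, not the repository's actual code: it assumes an epsilon-prediction model and a diffusers-style scheduler exposing `alphas_cumprod`; the function name `min_snr_weights` and the default `gamma=5.0` (the paper's suggested value) are illustrative choices.

```python
# Minimal sketch of Min-SNR loss weighting (arXiv 2303.09556), assumed
# details only -- not taken from train_ti.py itself.
import torch

def min_snr_weights(timesteps: torch.Tensor,
                    alphas_cumprod: torch.Tensor,
                    gamma: float = 5.0) -> torch.Tensor:
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t) for a DDPM-style schedule.
    alpha_bar = alphas_cumprod.to(timesteps.device)[timesteps]
    snr = alpha_bar / (1.0 - alpha_bar)
    # Weight = min(SNR, gamma) / SNR: caps the influence of easy,
    # high-SNR (low-noise) timesteps so training balances across t.
    return snr.clamp(max=gamma) / snr

# Hypothetical usage: scale the per-sample MSE before reduction.
#   loss = F.mse_loss(pred, noise, reduction="none").mean(dim=(1, 2, 3))
#   loss = (loss * min_snr_weights(t, scheduler.alphas_cumprod)).mean()
```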