path: root/train_ti.py
Commit message | Author | Date | Files | Lines (-/+)
TI: Bring back old embedding decay | Volpeon | 2023-04-04 | 1 | -5/+19
Improved sparse embeddings | Volpeon | 2023-04-03 | 1 | -1/+1
TI: Delta learning | Volpeon | 2023-04-03 | 1 | -26/+11
TI: No tag dropout by default | Volpeon | 2023-04-03 | 1 | -1/+1
Bring back Lion optimizer | Volpeon | 2023-04-03 | 1 | -3/+27
Update dataset format: Separate prompt and keywords | Volpeon | 2023-04-02 | 1 | -1/+1
Revert | Volpeon | 2023-04-01 | 1 | -6/+46
Combined TI with embedding and LoRA | Volpeon | 2023-04-01 | 1 | -25/+5
Experimental: TI via LoRA | Volpeon | 2023-04-01 | 1 | -22/+2
Update | Volpeon | 2023-04-01 | 1 | -1/+0
Add support for Adafactor, add TI initializer noise | Volpeon | 2023-04-01 | 1 | -2/+23
Update | Volpeon | 2023-03-31 | 1 | -1/+3
Update | Volpeon | 2023-03-31 | 1 | -0/+7
Fix | Volpeon | 2023-03-31 | 1 | -1/+2
Fix | Volpeon | 2023-03-31 | 1 | -1/+1
Support Dadaptation d0, adjust sample freq when steps instead of epochs are used | Volpeon | 2023-03-31 | 1 | -4/+11
Fix | Volpeon | 2023-03-31 | 1 | -1/+2
Fix | Volpeon | 2023-03-28 | 1 | -1/+1
Support num_train_steps arg again | Volpeon | 2023-03-28 | 1 | -9/+21
Fix TI | Volpeon | 2023-03-27 | 1 | -8/+8
Fix TI | Volpeon | 2023-03-27 | 1 | -1/+10
Fix TI | Volpeon | 2023-03-27 | 1 | -1/+1
Improved inverted tokens | Volpeon | 2023-03-26 | 1 | -1/+15
Update | Volpeon | 2023-03-25 | 1 | -4/+10
Update | Volpeon | 2023-03-24 | 1 | -3/+3
Bring back Perlin offset noise | Volpeon | 2023-03-23 | 1 | -0/+7
Update | Volpeon | 2023-03-23 | 1 | -1/+1
Fix | Volpeon | 2023-03-22 | 1 | -4/+0
Log DAdam/DAdan d | Volpeon | 2023-03-21 | 1 | -2/+2
Added dadaptation | Volpeon | 2023-03-21 | 1 | -0/+28
Fixed SNR weighting, re-enabled xformers | Volpeon | 2023-03-21 | 1 | -2/+2
Test: https://arxiv.org/pdf/2303.09556.pdf | Volpeon | 2023-03-17 | 1 | -12/+26
Update | Volpeon | 2023-03-07 | 1 | -4/+4
Pipeline: Perlin noise for init image | Volpeon | 2023-03-04 | 1 | -1/+1
Removed offset noise from training, added init offset to pipeline | Volpeon | 2023-03-03 | 1 | -1/+0
Implemented different noise offset | Volpeon | 2023-03-03 | 1 | -2/+2
Update | Volpeon | 2023-03-01 | 1 | -1/+1
Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -1/+1
Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -11/+10
Update | Volpeon | 2023-02-18 | 1 | -2/+4
Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -11/+27
Back to xformers | Volpeon | 2023-02-17 | 1 | -3/+2
Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 1 | -2/+4
Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -1/+2
Update | Volpeon | 2023-02-15 | 1 | -0/+1
Update | Volpeon | 2023-02-13 | 1 | -8/+11
Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 1 | -1/+1
Fixed Lora training | Volpeon | 2023-02-08 | 1 | -6/+6
Add Lora | Volpeon | 2023-02-07 | 1 | -4/+6
Restored LR finder | Volpeon | 2023-01-20 | 1 | -2/+19