path: root/train_lora.py
Commit message | Author | Date | Files | Lines
...
* Fix | Volpeon | 2023-03-31 | 1 | -2/+2
* Support Dadaptation d0, adjust sample freq when steps instead of epochs are used | Volpeon | 2023-03-31 | 1 | -4/+11
* Fix | Volpeon | 2023-03-31 | 1 | -1/+2
* Fix | Volpeon | 2023-03-28 | 1 | -1/+1
* Support num_train_steps arg again | Volpeon | 2023-03-28 | 1 | -6/+11
* Improved inverted tokens | Volpeon | 2023-03-26 | 1 | -0/+1
* Update | Volpeon | 2023-03-25 | 1 | -5/+11
* Update | Volpeon | 2023-03-24 | 1 | -0/+7
* Refactoring, fixed Lora training | Volpeon | 2023-03-24 | 1 | -1/+72
* Update | Volpeon | 2023-03-23 | 1 | -3/+0
* Log DAdam/DAdan d | Volpeon | 2023-03-21 | 1 | -2/+2
* Added dadaptation | Volpeon | 2023-03-21 | 1 | -0/+28
* Fixed SNR weighting, re-enabled xformers | Volpeon | 2023-03-21 | 1 | -27/+9
* Update | Volpeon | 2023-03-07 | 1 | -7/+16
* Pipeline: Perlin noise for init image | Volpeon | 2023-03-04 | 1 | -1/+1
* Implemented different noise offset | Volpeon | 2023-03-03 | 1 | -1/+1
* Update | Volpeon | 2023-03-01 | 1 | -1/+1
* Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -1/+1
* Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -9/+2
* Update | Volpeon | 2023-02-18 | 1 | -3/+5
* Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -11/+27
* Back to xformers | Volpeon | 2023-02-17 | 1 | -2/+2
* Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 1 | -2/+2
* Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -1/+2
* Update | Volpeon | 2023-02-13 | 1 | -5/+5
* Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 1 | -8/+0
* Fixed Lora training | Volpeon | 2023-02-08 | 1 | -7/+18
* Fix Lora memory usage | Volpeon | 2023-02-07 | 1 | -2/+2
* Add Lora | Volpeon | 2023-02-07 | 1 | -0/+566
* Various cleanups | Volpeon | 2023-01-05 | 1 | -946/+0
* Training script improvements | Volpeon | 2022-12-30 | 1 | -1/+1
* Integrated updates from diffusers | Volpeon | 2022-12-28 | 1 | -6/+4
* Set default dimensions to 768; add config inheritance | Volpeon | 2022-12-26 | 1 | -4/+3
* Some LoRA fixes (still broken) | Volpeon | 2022-12-21 | 1 | -125/+34
* Fix training | Volpeon | 2022-12-20 | 1 | -0/+1040