path: root/training/functional.py
Commit message | Author | Age | Files | Lines (-/+)

* Fixed TI normalization order | Volpeon | 2023-02-21 | 1 | -4/+4
* Fix | Volpeon | 2023-02-21 | 1 | -6/+3
* Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -24/+29
* Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -2/+5
* Update | Volpeon | 2023-02-18 | 1 | -7/+14
* Added Lion optimizer | Volpeon | 2023-02-17 | 1 | -4/+5
* Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 1 | -2/+1
* Fix | Volpeon | 2023-02-16 | 1 | -4/+2
* Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 1 | -8/+22
* Update | Volpeon | 2023-02-15 | 1 | -1/+1
* Made low-freq noise configurable | Volpeon | 2023-02-14 | 1 | -6/+11
* Better noise generation during training: https://www.crosslabs.org/blog/diffusion-with-offset-noise | Volpeon | 2023-02-13 | 1 | -0/+7
* Update | Volpeon | 2023-02-13 | 1 | -1/+1
* Fix Lora memory usage | Volpeon | 2023-02-07 | 1 | -2/+2
* Add Lora | Volpeon | 2023-02-07 | 1 | -20/+11
* Restored LR finder | Volpeon | 2023-01-20 | 1 | -10/+25
* Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 1 | -14/+14
* Update | Volpeon | 2023-01-17 | 1 | -4/+8
* Update | Volpeon | 2023-01-17 | 1 | -5/+14
* Training update | Volpeon | 2023-01-16 | 1 | -3/+1
* Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 1 | -3/+14
* More training adjustments | Volpeon | 2023-01-16 | 1 | -2/+3
* Handle empty validation dataset | Volpeon | 2023-01-16 | 1 | -45/+56
* Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 1 | -0/+1
* Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 1 | -3/+4
* Restored functional trainer | Volpeon | 2023-01-15 | 1 | -24/+78
* Update | Volpeon | 2023-01-15 | 1 | -77/+23
* Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 1 | -34/+29
* Added functional TI strategy | Volpeon | 2023-01-15 | 1 | -0/+118
* Added functional trainer | Volpeon | 2023-01-15 | 1 | -1/+74
* Update | Volpeon | 2023-01-14 | 1 | -10/+24
* Update | Volpeon | 2023-01-14 | 1 | -10/+9
* WIP: Modularization ("free(): invalid pointer" my ass) | Volpeon | 2023-01-14 | 1 | -0/+365