Date        Author   Files  Lines      Commit message
...
2023-03-03  Volpeon  6      -28/+16    Implemented different noise offset
2023-03-03  Volpeon  1      -1/+8      Low freq noise with randomized strength
2023-03-02  Volpeon  1      -1/+1      Better low freq noise
2023-03-01  Volpeon  1      -23/+10    Changed low freq noise
2023-03-01  Volpeon  10     -535/+39   Update
2023-02-21  Volpeon  3      -15/+19    Fixed TI normalization order
2023-02-21  Volpeon  1      -6/+3      Fix
2023-02-21  Volpeon  5      -33/+32    Don't rely on Accelerate for gradient accumulation
2023-02-21  Volpeon  7      -45/+31    Embedding normalization: Ignore tensors with grad = 0
2023-02-18  Volpeon  5      -15/+30    Update
2023-02-17  Volpeon  7      -39/+592   Added Lion optimizer
2023-02-17  Volpeon  1      -19/+37    Inference script: Better scheduler config
2023-02-17  Volpeon  5      -12/+14    Back to xformers
2023-02-17  Volpeon  10     -644/+27   Remove xformers, switch to Pytorch Nightly
2023-02-16  Volpeon  1      -4/+2      Fix
2023-02-16  Volpeon  6      -14/+655   Integrated WIP UniPC scheduler
2023-02-15  Volpeon  3      -1/+3      Update
2023-02-15  Volpeon  1      -29/+26    Improved batch padding
2023-02-15  Volpeon  1      -3/+6      Better batch filling
2023-02-15  Volpeon  1      -3/+7      Better batch filling behavior
2023-02-15  Volpeon  1      -0/+3      Dataset: Repeat data to fill batch to batch_size
2023-02-14  Volpeon  1      -6/+11     Made low-freq noise configurable
2023-02-13  Volpeon  1      -0/+7      Better noise generation during training:
                                       https://www.crosslabs.org/blog/diffusion-with-offset-noise
2023-02-13  Volpeon  10     -65/+73    Update
2023-02-08  Volpeon  4      -17/+164   Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline
2023-02-08  Volpeon  4      -37/+35    Fixed Lora training
2023-02-07  Volpeon  5      -11/+5     Fix Lora memory usage
2023-02-07  Volpeon  10     -93/+819   Add Lora
2023-01-20  Volpeon  9      -397/+111  Restored LR finder
2023-01-19  Volpeon  4      -19/+51    Move Accelerator preparation into strategy
2023-01-17  Volpeon  5      -22/+25    Update
2023-01-17  Volpeon  1      -5/+2      Optimized embedding normalization
2023-01-17  Volpeon  1      -1/+1      Smaller emb decay
2023-01-17  Volpeon  1      -4/+5      Fix
2023-01-17  Volpeon  1      -1/+0      Fix
2023-01-17  Volpeon  2      -21/+9     Make embedding decay work like Adam decay
2023-01-17  Volpeon  4      -9/+12     Update
2023-01-17  Volpeon  6      -73/+104   Update
2023-01-16  Volpeon  5      -16/+25    Training update
2023-01-16  Volpeon  2      -6/+2      If valid set size is 0, re-use one image from train set
2023-01-16  Volpeon  4      -244/+131  Moved multi-TI code from Dreambooth to TI script
2023-01-16  Volpeon  6      -43/+101   More training adjustments
2023-01-16  Volpeon  2      -20/+23    Pad dataset if len(items) < batch_size
2023-01-16  Volpeon  6      -76/+91    Handle empty validation dataset
2023-01-16  Volpeon  3      -71/+84    Extended Dreambooth: Train TI tokens separately
2023-01-16  Volpeon  4      -372/+200  Implemented extended Dreambooth training
2023-01-15  Volpeon  2      -23/+206   Added Dreambooth strategy
2023-01-15  Volpeon  5      -104/+112  Restored functional trainer
2023-01-15  Volpeon  1      -2/+4      Fixed Conda env
2023-01-15  Volpeon  5      -162/+106  Update
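
Several of the commits above name concrete techniques; the sketches below illustrate them under stated assumptions and are not the repository's actual code. First, the noise commits (2023-02-13 through 2023-03-03) iterate on the offset-noise idea from the linked post, https://www.crosslabs.org/blog/diffusion-with-offset-noise: add a per-(batch, channel) constant to the usual Gaussian noise so the model learns to shift an image's overall brightness. A minimal PyTorch sketch, with hypothetical names and a `strength` knob that the later commits make configurable:

```python
import torch

def offset_noise(latents: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Offset ("low frequency") noise, per the crosslabs.org post linked in
    the 2023-02-13 commit. Names and the default strength are assumptions."""
    noise = torch.randn_like(latents)
    # One random scalar per (batch, channel), broadcast over height/width:
    # the zero-frequency component the blog post argues SD undertrains.
    offset = torch.randn(latents.shape[0], latents.shape[1], 1, 1,
                         device=latents.device, dtype=latents.dtype)
    return noise + strength * offset
```

The 2023-03-03 "randomized strength" commit plausibly replaces the fixed `strength` with a per-sample draw, e.g. `strength * torch.rand(latents.shape[0], 1, 1, 1, device=latents.device)`.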
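
For "Don't rely on Accelerate for gradient accumulation" (2023-02-21), the plain-PyTorch equivalent is to scale each micro-batch loss and step the optimizer every N batches yourself, instead of using Accelerate's `accumulate()` context manager. A minimal sketch with illustrative names:

```python
def train_epoch(model, optimizer, loader, accumulation_steps=4):
    """Manual gradient accumulation: gradients from `accumulation_steps`
    micro-batches are summed before one optimizer step. `model(batch)` is
    assumed to return a scalar loss."""
    optimizer.zero_grad()
    for step, batch in enumerate(loader):
        loss = model(batch) / accumulation_steps  # keep gradient scale constant
        loss.backward()
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```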
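
The commits "Embedding normalization: Ignore tensors with grad = 0" (2023-02-21) and "Make embedding decay work like Adam decay" (2023-01-17) suggest decoupled weight decay applied only to the embedding rows that actually received a gradient, i.e. the textual-inversion tokens being trained. This is a guess at the intent; a sketch:

```python
import torch

@torch.no_grad()
def decay_trained_embeddings(embedding: torch.nn.Embedding,
                             lr: float = 1e-4, decay: float = 1e-2):
    """Decoupled (AdamW-style) weight decay on trained embedding rows only.
    Rows whose gradient is all zero were not trained this step (frozen
    vocabulary tokens) and are left alone. All names are hypothetical."""
    grad = embedding.weight.grad
    if grad is None:
        return
    trained = grad.abs().sum(dim=1) != 0          # mask of trained token rows
    embedding.weight[trained] -= lr * decay * embedding.weight[trained]
```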
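
The Lion optimizer ("Added Lion optimizer", 2023-02-17, +592 lines) comes from Chen et al. 2023, "Symbolic Discovery of Optimization Algorithms"; the commit presumably vendors an implementation, but the published update rule itself is small enough to show: the step uses only the sign of an interpolated momentum, plus decoupled weight decay.

```python
import torch

@torch.no_grad()
def lion_step(param: torch.Tensor, momentum: torch.Tensor, lr: float = 1e-4,
              beta1: float = 0.9, beta2: float = 0.99, weight_decay: float = 0.0):
    """One Lion update for a single parameter tensor (paper defaults for
    beta1/beta2; lr and weight_decay here are placeholders)."""
    g = param.grad
    update = (beta1 * momentum + (1 - beta1) * g).sign()   # sign of interpolation
    param -= lr * (update + weight_decay * param)          # decoupled weight decay
    momentum.mul_(beta2).add_(g, alpha=1 - beta2)          # momentum update
```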
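
Finally, the batch-filling commits (2023-02-15) and "Pad dataset if len(items) < batch_size" (2023-01-16) keep every batch at full size by repeating dataset items. A sketch with a hypothetical helper; cycling handles datasets smaller than one batch:

```python
import itertools

def fill_batch(items: list, batch_size: int) -> list:
    """Repeat items from the start of the list until its length is a
    multiple of batch_size, so the last (or only) batch is always full."""
    if not items:
        return items
    remainder = len(items) % batch_size
    if remainder:
        missing = batch_size - remainder
        # cycle() lets this work even when len(items) < missing
        items = items + list(itertools.islice(itertools.cycle(items), missing))
    return items
```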