textual-inversion-diff: commit log (branch master)

Commit message | Author | Date | Files | Lines
Pipeline: Perlin noise for init image | Volpeon | 2023-03-04 | 7 | -18/+70
Pipeline: Improved initial image generation | Volpeon | 2023-03-04 | 1 | -23/+26
Changed init noise algorithm | Volpeon | 2023-03-03 | 1 | -3/+11
Removed offset noise from training, added init offset to pipeline | Volpeon | 2023-03-03 | 4 | -68/+41
Implemented different noise offset | Volpeon | 2023-03-03 | 6 | -28/+16
Low freq noise with randomized strength | Volpeon | 2023-03-03 | 1 | -1/+8
Better low freq noise | Volpeon | 2023-03-02 | 1 | -1/+1
Changed low freq noise | Volpeon | 2023-03-01 | 1 | -23/+10
Update | Volpeon | 2023-03-01 | 10 | -535/+39
Fixed TI normalization order | Volpeon | 2023-02-21 | 3 | -15/+19
Fix | Volpeon | 2023-02-21 | 1 | -6/+3
Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 5 | -33/+32
Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 7 | -45/+31
Update | Volpeon | 2023-02-18 | 5 | -15/+30
Added Lion optimizer | Volpeon | 2023-02-17 | 7 | -39/+592
Inference script: Better scheduler config | Volpeon | 2023-02-17 | 1 | -19/+37
Back to xformers | Volpeon | 2023-02-17 | 5 | -12/+14
Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 10 | -644/+27
Fix | Volpeon | 2023-02-16 | 1 | -4/+2
Integrated WIP UniPC scheduler | Volpeon | 2023-02-16 | 6 | -14/+655
Update | Volpeon | 2023-02-15 | 3 | -1/+3
Improved batch padding | Volpeon | 2023-02-15 | 1 | -29/+26
Better batch filling | Volpeon | 2023-02-15 | 1 | -3/+6
Better batch filling behavior | Volpeon | 2023-02-15 | 1 | -3/+7
Dataset: Repeat data to fill batch to batch_size | Volpeon | 2023-02-15 | 1 | -0/+3
Made low-freq noise configurable | Volpeon | 2023-02-14 | 1 | -6/+11
Better noise generation during training: https://www.crosslabs.org/blog/diffu... | Volpeon | 2023-02-13 | 1 | -0/+7
Update | Volpeon | 2023-02-13 | 10 | -65/+73
Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline | Volpeon | 2023-02-08 | 4 | -17/+164
Fixed Lora training | Volpeon | 2023-02-08 | 4 | -37/+35
Fix Lora memory usage | Volpeon | 2023-02-07 | 5 | -11/+5
Add Lora | Volpeon | 2023-02-07 | 10 | -93/+819
Restored LR finder | Volpeon | 2023-01-20 | 9 | -397/+111
Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 4 | -19/+51
Update | Volpeon | 2023-01-17 | 5 | -22/+25
Optimized embedding normalization | Volpeon | 2023-01-17 | 1 | -5/+2
Smaller emb decay | Volpeon | 2023-01-17 | 1 | -1/+1
Fix | Volpeon | 2023-01-17 | 1 | -4/+5
Fix | Volpeon | 2023-01-17 | 1 | -1/+0
Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 2 | -21/+9
Update | Volpeon | 2023-01-17 | 4 | -9/+12
Update | Volpeon | 2023-01-17 | 6 | -73/+104
Training update | Volpeon | 2023-01-16 | 5 | -16/+25
If valid set size is 0, re-use one image from train set | Volpeon | 2023-01-16 | 2 | -6/+2
Moved multi-TI code from Dreambooth to TI script | Volpeon | 2023-01-16 | 4 | -244/+131
More training adjustments | Volpeon | 2023-01-16 | 6 | -43/+101
Pad dataset if len(items) < batch_size | Volpeon | 2023-01-16 | 2 | -20/+23
Handle empty validation dataset | Volpeon | 2023-01-16 | 6 | -76/+91
Extended Dreambooth: Train TI tokens separately | Volpeon | 2023-01-16 | 3 | -71/+84
Implemented extended Dreambooth training | Volpeon | 2023-01-16 | 4 | -372/+200