textual-inversion-diff, branch master: commit log for train_ti.py
Date        Author   Files  Lines      Commit message
2023-03-21  Volpeon  1      -2/+2      Log DAdam/DAdan d
2023-03-21  Volpeon  1      -0/+28     Added dadaptation
2023-03-21  Volpeon  1      -2/+2      Fixed SNR weighting, re-enabled xformers
2023-03-17  Volpeon  1      -12/+26    Test: https://arxiv.org/pdf/2303.09556.pdf
2023-03-07  Volpeon  1      -4/+4      Update
2023-03-04  Volpeon  1      -1/+1      Pipeline: Perlin noise for init image
2023-03-03  Volpeon  1      -1/+0      Removed offset noise from training, added init offset to pipeline
2023-03-03  Volpeon  1      -2/+2      Implemented different noise offset
2023-03-01  Volpeon  1      -1/+1      Update
2023-02-21  Volpeon  1      -1/+1      Don't rely on Accelerate for gradient accumulation
2023-02-21  Volpeon  1      -11/+10    Embedding normalization: Ignore tensors with grad = 0
2023-02-18  Volpeon  1      -2/+4      Update
2023-02-17  Volpeon  1      -11/+27    Added Lion optimizer
2023-02-17  Volpeon  1      -3/+2      Back to xformers
2023-02-17  Volpeon  1      -2/+4      Remove xformers, switch to Pytorch Nightly
2023-02-16  Volpeon  1      -1/+2      Integrated WIP UniPC scheduler
2023-02-15  Volpeon  1      -0/+1      Update
2023-02-13  Volpeon  1      -8/+11     Update
2023-02-08  Volpeon  1      -1/+1      Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline
2023-02-08  Volpeon  1      -6/+6      Fixed Lora training
2023-02-07  Volpeon  1      -4/+6      Add Lora
2023-01-20  Volpeon  1      -2/+19     Restored LR finder
2023-01-19  Volpeon  1      -3/+3      Move Accelerator preparation into strategy
2023-01-17  Volpeon  1      -1/+1      Smaller emb decay
2023-01-17  Volpeon  1      -12/+4     Make embedding decay work like Adam decay
2023-01-17  Volpeon  1      -49/+64    Update
2023-01-16  Volpeon  1      -3/+5      Training update
2023-01-16  Volpeon  1      -5/+1      If valid set size is 0, re-use one image from train set
2023-01-16  Volpeon  1      -107/+114  Moved multi-TI code from Dreambooth to TI script
2023-01-16  Volpeon  1      -5/+12     More training adjustments
2023-01-16  Volpeon  1      -6/+3      Handle empty validation dataset
2023-01-16  Volpeon  1      -40/+22    Implemented extended Dreambooth training
2023-01-15  Volpeon  1      -23/+23    Added Dreambooth strategy
2023-01-15  Volpeon  1      -61/+21    Restored functional trainer
2023-01-15  Volpeon  1      -36/+38    Update
2023-01-15  Volpeon  1      -48/+1     Removed unused code, put training callbacks in dataclass
2023-01-15  Volpeon  1      -78/+30    Added functional TI strategy
2023-01-15  Volpeon  1      -26/+23    Added functional trainer
2023-01-14  Volpeon  1      -5/+5      Update
2023-01-14  Volpeon  1      -3/+4      Update
2023-01-14  Volpeon  1      -59/+15    WIP: Modularization ("free(): invalid pointer" my ass)
2023-01-14  Volpeon  1      -12/+15    TI: Prepare UNet with Accelerate as well
2023-01-14  Volpeon  1      -2/+2      Fix
2023-01-14  Volpeon  1      -21/+12    Cleanup
2023-01-13  Volpeon  1      -3/+6      Unified training script structure
2023-01-13  Volpeon  1      -81/+386   Reverted modularization mostly
2023-01-13  Volpeon  1      -409/+70   More modularization
2023-01-13  Volpeon  1      -13/+11    Simplified step calculations
2023-01-13  Volpeon  1      -215/+53   Removed PromptProcessor, modularized training loop
2023-01-13  Volpeon  1      -2/+8      Added TI decay start offset