textual-inversion-diff (branch: master): log of train_dreambooth.py
Age         Author   Files  Lines      Commit message
2023-03-23  Volpeon  1      -4/+7      Update
2023-03-21  Volpeon  1      -2/+2      Log DAdam/DAdan d
2023-03-21  Volpeon  1      -0/+28     Added dadaptation
2023-03-04  Volpeon  1      -1/+1      Pipeline: Perlin noise for init image
2023-03-03  Volpeon  1      -1/+0      Removed offset noise from training, added init offset to pipeline
2023-03-03  Volpeon  1      -2/+2      Implemented different noise offset
2023-03-01  Volpeon  1      -3/+3      Update
2023-02-21  Volpeon  1      -1/+1      Don't rely on Accelerate for gradient accumulation
2023-02-21  Volpeon  1      -9/+2      Embedding normalization: Ignore tensors with grad = 0
2023-02-18  Volpeon  1      -3/+5      Update
2023-02-17  Volpeon  1      -11/+27    Added Lion optimizer
2023-02-17  Volpeon  1      -2/+2      Back to xformers
2023-02-17  Volpeon  1      -2/+2      Remove xformers, switch to Pytorch Nightly
2023-02-16  Volpeon  1      -1/+2      Integrated WIP UniPC scheduler
2023-02-13  Volpeon  1      -5/+5      Update
2023-02-08  Volpeon  1      -1/+1      Integrate Self-Attention-Guided (SAG) Stable Diffusion in my custom pipeline
2023-02-08  Volpeon  1      -6/+6      Fixed Lora training
2023-02-07  Volpeon  1      -45/+1     Add Lora
2023-01-20  Volpeon  1      -1/+9      Restored LR finder
2023-01-17  Volpeon  1      -8/+6      Update
2023-01-17  Volpeon  1      -3/+2      Update
2023-01-16  Volpeon  1      -1/+5      Training update
2023-01-16  Volpeon  1      -133/+2    Moved multi-TI code from Dreambooth to TI script
2023-01-16  Volpeon  1      -12/+59    More training adjustments
2023-01-16  Volpeon  1      -20/+20    Pad dataset if len(items) < batch_size
2023-01-16  Volpeon  1      -3/+3      Handle empty validation dataset
2023-01-16  Volpeon  1      -71/+76    Extended Dreambooth: Train TI tokens separately
2023-01-16  Volpeon  1      -329/+155  Implemented extended Dreambooth training
2023-01-14  Volpeon  1      -2/+1      WIP: Modularization ("free(): invalid pointer" my ass)
2023-01-14  Volpeon  1      -2/+2      Fix
2023-01-14  Volpeon  1      -21/+12    Cleanup
2023-01-13  Volpeon  1      -127/+78   Unified training script structure
2023-01-13  Volpeon  1      -1/+2      Reverted modularization mostly
2023-01-13  Volpeon  1      -207/+65   More modularization
2023-01-13  Volpeon  1      -5/+2      Removed PromptProcessor, modularized training loop
2023-01-13  Volpeon  1      -1/+1      Added TI decay start offset
2023-01-13  Volpeon  1      -56/+15    Code deduplication
2023-01-12  Volpeon  1      -14/+42    Update
2023-01-11  Volpeon  1      -2/+2      TI: Use grad clipping from LoRA #104
2023-01-10  Volpeon  1      -3/+2      Better defaults
2023-01-10  Volpeon  1      -1/+1      Fix
2023-01-10  Volpeon  1      -1/+8      Added arg to disable tag shuffling
2023-01-09  Volpeon  1      -4/+1      Enable buckets for validation, fixed validation repeat arg
2023-01-09  Volpeon  1      -0/+10     Add --valid_set_repeat
2023-01-08  Volpeon  1      -0/+27     Improved aspect ratio bucketing
2023-01-08  Volpeon  1      -6/+8      Fixed aspect ratio bucketing; allow passing token IDs to pipeline
2023-01-08  Volpeon  1      -55/+45    Improved aspect ratio bucketing
2023-01-07  Volpeon  1      -134/+131  Cleanup
2023-01-07  Volpeon  1      -9/+3      Added progressive aspect ratio bucketing
2023-01-07  Volpeon  1      -3/+3      Update