path: root/training/strategy
Commit message | Author | Age | Files | Lines
* Fix | Volpeon | 2023-04-01 | 1 | -1/+3
* Combined TI with embedding and LoRA | Volpeon | 2023-04-01 | 1 | -58/+18
* Experimental: TI via LoRA | Volpeon | 2023-04-01 | 1 | -26/+4
* Fix TI | Volpeon | 2023-03-27 | 1 | -8/+10
* Sparse TI embeddings without sparse tensors | Volpeon | 2023-03-27 | 1 | -10/+8
* Improved TI embeddings | Volpeon | 2023-03-26 | 1 | -2/+1
* Fixed Lora training perf issue | Volpeon | 2023-03-24 | 1 | -7/+8
* Lora fix: Save config JSON, too | Volpeon | 2023-03-24 | 1 | -0/+3
* Refactoring, fixed Lora training | Volpeon | 2023-03-24 | 3 | -58/+30
* Update | Volpeon | 2023-03-23 | 3 | -12/+12
* Fixed SNR weighting, re-enabled xformers | Volpeon | 2023-03-21 | 1 | -11/+59
* Update | Volpeon | 2023-03-07 | 1 | -14/+11
* Update | Volpeon | 2023-03-01 | 2 | -4/+4
* Fixed TI normalization order | Volpeon | 2023-02-21 | 2 | -11/+15
* Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -6/+0
* Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -4/+11
* Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 3 | -7/+7
* Update | Volpeon | 2023-02-13 | 2 | -2/+2
* Fixed Lora training | Volpeon | 2023-02-08 | 1 | -18/+5
* Fix Lora memory usage | Volpeon | 2023-02-07 | 3 | -7/+1
* Add Lora | Volpeon | 2023-02-07 | 3 | -17/+203
* Restored LR finder | Volpeon | 2023-01-20 | 2 | -6/+3
* Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 2 | -2/+34
* Update | Volpeon | 2023-01-17 | 2 | -9/+10
* Fix | Volpeon | 2023-01-17 | 1 | -4/+5
* Fix | Volpeon | 2023-01-17 | 1 | -1/+0
* Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 1 | -9/+5
* Update | Volpeon | 2023-01-17 | 2 | -8/+8
* Update | Volpeon | 2023-01-17 | 2 | -8/+21
* Training update | Volpeon | 2023-01-16 | 2 | -9/+14
* More training adjustments | Volpeon | 2023-01-16 | 1 | -1/+1
* Handle empty validation dataset | Volpeon | 2023-01-16 | 2 | -2/+2
* Added Dreambooth strategy | Volpeon | 2023-01-15 | 1 | -0/+183
* Update | Volpeon | 2023-01-15 | 1 | -28/+26
* Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 1 | -10/+10
* Added functional TI strategy | Volpeon | 2023-01-15 | 1 | -0/+164