path: root/training/strategy
| Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|
| Update | Volpeon | 2023-04-10 | 3 | -3/+3 |
| Update | Volpeon | 2023-04-09 | 1 | -1/+1 |
| Update | Volpeon | 2023-04-08 | 2 | -7/+7 |
| Fix TI | Volpeon | 2023-04-08 | 1 | -1/+2 |
| Fix | Volpeon | 2023-04-08 | 1 | -3/+2 |
| Update | Volpeon | 2023-04-08 | 3 | -5/+11 |
| Fixed Lora PTI | Volpeon | 2023-04-07 | 1 | -16/+19 |
| Fix | Volpeon | 2023-04-07 | 3 | -13/+8 |
| Fix | Volpeon | 2023-04-07 | 2 | -6/+8 |
| Update | Volpeon | 2023-04-07 | 1 | -1/+36 |
| TI: Bring back old embedding decay | Volpeon | 2023-04-04 | 1 | -1/+21 |
| Improved sparse embeddings | Volpeon | 2023-04-03 | 1 | -4/+4 |
| TI: Delta learning | Volpeon | 2023-04-03 | 1 | -23/+0 |
| Lora: Only register params with grad to optimizer | Volpeon | 2023-04-02 | 2 | -5/+0 |
| Revert | Volpeon | 2023-04-01 | 1 | -19/+81 |
| Fix | Volpeon | 2023-04-01 | 1 | -1/+3 |
| Combined TI with embedding and LoRA | Volpeon | 2023-04-01 | 1 | -58/+18 |
| Experimental: TI via LoRA | Volpeon | 2023-04-01 | 1 | -26/+4 |
| Fix TI | Volpeon | 2023-03-27 | 1 | -8/+10 |
| Sparse TI embeddings without sparse tensors | Volpeon | 2023-03-27 | 1 | -10/+8 |
| Improved TI embeddings | Volpeon | 2023-03-26 | 1 | -2/+1 |
| Fixed Lora training perf issue | Volpeon | 2023-03-24 | 1 | -7/+8 |
| Lora fix: Save config JSON, too | Volpeon | 2023-03-24 | 1 | -0/+3 |
| Refactoring, fixed Lora training | Volpeon | 2023-03-24 | 3 | -58/+30 |
| Update | Volpeon | 2023-03-23 | 3 | -12/+12 |
| Fixed SNR weighting, re-enabled xformers | Volpeon | 2023-03-21 | 1 | -11/+59 |
| Update | Volpeon | 2023-03-07 | 1 | -14/+11 |
| Update | Volpeon | 2023-03-01 | 2 | -4/+4 |
| Fixed TI normalization order | Volpeon | 2023-02-21 | 2 | -11/+15 |
| Don't rely on Accelerate for gradient accumulation | Volpeon | 2023-02-21 | 1 | -6/+0 |
| Embedding normalization: Ignore tensors with grad = 0 | Volpeon | 2023-02-21 | 1 | -4/+11 |
| Remove xformers, switch to Pytorch Nightly | Volpeon | 2023-02-17 | 3 | -7/+7 |
| Update | Volpeon | 2023-02-13 | 2 | -2/+2 |
| Fixed Lora training | Volpeon | 2023-02-08 | 1 | -18/+5 |
| Fix Lora memory usage | Volpeon | 2023-02-07 | 3 | -7/+1 |
| Add Lora | Volpeon | 2023-02-07 | 3 | -17/+203 |
| Restored LR finder | Volpeon | 2023-01-20 | 2 | -6/+3 |
| Move Accelerator preparation into strategy | Volpeon | 2023-01-19 | 2 | -2/+34 |
| Update | Volpeon | 2023-01-17 | 2 | -9/+10 |
| Fix | Volpeon | 2023-01-17 | 1 | -4/+5 |
| Fix | Volpeon | 2023-01-17 | 1 | -1/+0 |
| Make embedding decay work like Adam decay | Volpeon | 2023-01-17 | 1 | -9/+5 |
| Update | Volpeon | 2023-01-17 | 2 | -8/+8 |
| Update | Volpeon | 2023-01-17 | 2 | -8/+21 |
| Training update | Volpeon | 2023-01-16 | 2 | -9/+14 |
| More training adjustments | Volpeon | 2023-01-16 | 1 | -1/+1 |
| Handle empty validation dataset | Volpeon | 2023-01-16 | 2 | -2/+2 |
| Added Dreambooth strategy | Volpeon | 2023-01-15 | 1 | -0/+183 |
| Update | Volpeon | 2023-01-15 | 1 | -28/+26 |
| Removed unused code, put training callbacks in dataclass | Volpeon | 2023-01-15 | 1 | -10/+10 |