Commit Graph

277 Commits

Author SHA1 Message Date
lllyasviel
bde779a526 apply_token_merging 2024-02-23 15:43:27 -08:00
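For context on the commit above: apply_token_merging refers to token merging (ToMe), which patches the U-Net's attention blocks to merge similar tokens before attention. A minimal, hypothetical sketch of what such a toggle can look like when built on the tomesd package (the function name and default ratio are illustrative, not this repo's exact code):

```python
# Hypothetical sketch of a token-merging toggle built on the tomesd package;
# names and the default ratio are illustrative, not the repo's exact code.
import tomesd


def apply_token_merging_sketch(sd_model, ratio: float = 0.3):
    """Patch the model's attention blocks to merge similar tokens before attention.

    ratio=0.3 merges roughly 30% of tokens (faster sampling, slight quality cost);
    ratio <= 0 removes the patch and restores the original forward pass.
    """
    if ratio <= 0:
        tomesd.remove_patch(sd_model)
        return
    tomesd.apply_patch(sd_model, ratio=ratio)
```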
Chenlei Hu
388ca351f4
Revert "Fix ruff linter (#137)" (#143)
This reverts commit 6b3ad64388.
2024-02-08 21:24:04 -05:00
Chenlei Hu
6b3ad64388
Fix ruff linter (#137)
* Fix ruff linter

* Remove unused imports

* Remove unused imports
2024-02-08 20:35:20 -05:00
lllyasviel
d76b830add reduce prints 2024-02-06 07:56:15 -08:00
lllyasviel
1ecbff15fa add note about token merging 2024-02-05 21:55:59 -08:00
lllyasviel
049020f3c5 smaller default model 2024-02-04 18:54:50 -08:00
lllyasviel
a3b4f9df29 Update sd_models.py 2024-01-25 08:03:28 -08:00
lllyasviel
618a479db7 Update sd_models.py 2024-01-25 08:01:48 -08:00
lllyasviel
5ff6315e44 Update sd_models.py 2024-01-25 07:26:04 -08:00
lllyasviel
f75bf1ba50 Update sd_models.py 2024-01-25 07:24:15 -08:00
lllyasviel
232c90aba2 Update sd_models.py 2024-01-25 07:18:15 -08:00
lllyasviel
f993ab1c2c Update sd_models.py 2024-01-25 07:15:51 -08:00
lllyasviel
02f4ba8ee9 Update sd_models.py 2024-01-25 07:15:27 -08:00
lllyasviel
8e36af0f06 Update sd_models.py 2024-01-25 06:43:33 -08:00
lllyasviel
6b88c82733 Update sd_models.py 2024-01-25 06:34:28 -08:00
lllyasviel
30d1f64ce6 Update sd_models.py 2024-01-25 06:31:12 -08:00
lllyasviel
485e5ac1cc Update sd_models.py 2024-01-25 06:29:33 -08:00
lllyasviel
854997c163 Update sd_models.py 2024-01-25 04:42:39 -08:00
lllyasviel
b781e7f80f i 2024-01-25 03:45:58 -08:00
lllyasviel
4e5ba653c6 i 2024-01-24 11:03:36 -08:00
lllyasviel
d622806088 Update sd_models.py 2024-01-16 02:56:02 -08:00
lllyasviel
4cd111c9ad Update sd_models.py 2024-01-16 02:54:08 -08:00
lllyasviel
6ce6236958 Update sd_models.py 2024-01-16 02:46:35 -08:00
lllyasviel
b731bb860c Update webui.py
i
Update initialization.py
initialization
initialization
Update initialization.py
i
i
Update sd_samplers_common.py
Update sd_hijack.py
i
Update sd_models.py
Update sd_models.py
Update forge_loader.py
Update sd_models.py
i
Update sd_model.py
i
Update sd_models.py
Create sd_model.py
i
i
Update sd_models.py
i
Update sd_models.py
Update sd_models.py
i
i
Update sd_samplers_common.py
i
Update sd_models.py
Update sd_models.py
Update sd_samplers_common.py
Update sd_models.py
Update sd_models.py
Update sd_models.py
Update sd_models.py
Update sd_samplers_common.py
i
Update shared_options.py
Update prompt_parser.py
Update sd_hijack_unet.py
i
Update sd_models.py
Update sd_models.py
Update sd_models.py
Update devices.py
i
Update sd_vae.py
Update sd_models.py
Update processing.py
Update ui_settings.py
Update sd_models_xl.py
i
i
Update sd_samplers_kdiffusion.py
Update sd_samplers_timesteps.py
Update ui_settings.py
Update cmd_args.py
Update cmd_args.py
Update initialization.py
Update shared_options.py
Update initialization.py
Update shared_options.py
i
Update cmd_args.py
Update initialization.py
Update initialization.py
Update initialization.py
Update cmd_args.py
Update cmd_args.py
Update sd_hijack.py
2024-01-16 02:33:39 -08:00
Nuullll
a183de04e3 Execute model_loaded_callback after moving to target device 2024-01-06 20:03:33 +08:00
AUTOMATIC1111
267fd5d76b
Merge pull request #14145 from drhead/zero-terminal-snr
Implement zero terminal SNR noise schedule option
2024-01-01 14:45:12 +03:00
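The zero terminal SNR option merged above rescales the beta schedule so that the final timestep carries no signal (Lin et al., 2023). A sketch of the standard rescaling, assuming a discrete beta schedule (the function name is illustrative, not the PR's exact code):

```python
# Illustrative sketch of zero-terminal-SNR rescaling; not the PR's exact code.
import torch


def rescale_zero_terminal_snr_sketch(betas: torch.Tensor) -> torch.Tensor:
    """Rescale a discrete beta schedule so the final timestep has zero SNR."""
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)
    alphas_bar_sqrt = alphas_bar.sqrt()

    # Shift so the last value becomes 0, then scale so the first value is unchanged.
    first, last = alphas_bar_sqrt[0].clone(), alphas_bar_sqrt[-1].clone()
    alphas_bar_sqrt = (alphas_bar_sqrt - last) * first / (first - last)

    # Convert back to betas.
    alphas_bar = alphas_bar_sqrt ** 2
    alphas = torch.cat([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```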
Kohaku-Blueleaf
672dc4efa8 Fix forced reload 2023-12-06 15:16:10 +08:00
drhead
78acdcf677 fix variable 2023-12-02 14:09:18 -05:00
drhead
dc1adeecdd Create alphas_cumprod_original on full precision path 2023-12-02 14:06:56 -05:00
Kohaku-Blueleaf
50a21cb09f Ensure the cached weight will not be affected 2023-12-02 22:06:47 +08:00
Kohaku-Blueleaf
110485d5bb Merge branch 'dev' into test-fp8 2023-12-02 17:00:09 +08:00
MrCheeze
6080045b2a Add support for SD 2.1 Turbo, by converting the state dict from SGM to LDM on load 2023-12-01 22:58:05 -05:00
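The SD 2.1 Turbo support above hinges on renaming state-dict keys from the SGM layout to the LDM layout before the normal loader runs. An illustrative sketch of that kind of prefix remapping (the prefix table below is an assumption for illustration, not the commit's exact mapping):

```python
# Illustrative sketch of converting SGM-style state-dict keys to LDM names on load.
# The prefix table is an assumption for illustration, not the commit's exact mapping.
SGM_TO_LDM_PREFIXES = {
    "conditioner.embedders.0.model.": "cond_stage_model.model.",
    "conditioner.embedders.0.": "cond_stage_model.",
}


def convert_sgm_to_ldm_sketch(state_dict: dict) -> dict:
    converted = {}
    for key, tensor in state_dict.items():
        for sgm_prefix, ldm_prefix in SGM_TO_LDM_PREFIXES.items():
            if key.startswith(sgm_prefix):
                key = ldm_prefix + key[len(sgm_prefix):]
                break
        converted[key] = tensor
    return converted
```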
drhead
b25c126ccd
Protect alphas_cumprod from downcasting 2023-11-29 17:38:53 -05:00
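The commit above guards the noise-schedule buffer when the rest of the model is cast to half precision. A minimal sketch of the idea, assuming the ldm attribute name alphas_cumprod (not the commit's exact code):

```python
# Minimal sketch; assumes the ldm attribute name alphas_cumprod.
import torch


def half_model_keep_schedule_sketch(model):
    """Cast model weights to fp16 while keeping the noise-schedule buffer in fp32.

    alphas_cumprod is small but numerically sensitive; letting model.half()
    downcast it shifts the sigmas that samplers derive from it.
    """
    alphas_cumprod = model.alphas_cumprod
    model.half()
    model.alphas_cumprod = alphas_cumprod.to(torch.float32)
    return model
```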
Kohaku-Blueleaf
40ac134c55 Fix pre-fp8 2023-11-25 12:35:09 +08:00
Kohaku-Blueleaf
370a77f8e7 Option for using fp16 weights when applying LoRA 2023-11-21 19:59:34 +08:00
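The option above chooses the precision the LoRA delta is merged in. A hypothetical sketch of that trade-off (names and signature are illustrative, not the repo's API):

```python
# Hypothetical sketch of the fp16-vs-fp32 merge trade-off; names are illustrative.
import torch


def merge_lora_weight_sketch(base_weight, lora_up, lora_down, scale, use_fp16=True):
    """Merge a LoRA delta (up @ down * scale) into a base weight.

    With use_fp16=True the merge runs against an fp16 copy of the weight
    (less memory, tiny rounding differences); otherwise against an fp32 upcast.
    """
    compute_dtype = torch.float16 if use_fp16 else torch.float32
    w = base_weight.to(compute_dtype)
    delta = (lora_up.to(compute_dtype) @ lora_down.to(compute_dtype)) * scale
    return (w + delta).to(base_weight.dtype)
```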
Kohaku-Blueleaf
598da5cd49 Use options instead of cmd_args 2023-11-19 15:50:06 +08:00
Kohaku-Blueleaf
cd12256575 Merge branch 'dev' into test-fp8 2023-11-16 21:53:13 +08:00
AUTOMATIC1111
6ad666e479 more changes for #13865: fix formatting, rename the function, add comment and add a readme entry 2023-11-05 19:46:20 +03:00
AUTOMATIC1111
80d639a440 linter 2023-11-05 19:32:21 +03:00
AUTOMATIC1111
ff805d8d0e
Merge branch 'dev' into master 2023-11-05 19:30:57 +03:00
Ritesh Gangnani
44c5097375 Use devices.torch_gc() instead of empty_cache() 2023-11-05 20:31:57 +05:30
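The commit above replaces a bare torch.cuda.empty_cache() with the devices.torch_gc() helper. A rough sketch of what such a helper typically does (not the repo's exact implementation):

```python
# Rough sketch of a torch_gc-style helper; not the repo's exact implementation.
import gc

import torch


def torch_gc_sketch():
    """Release memory more thoroughly than a bare torch.cuda.empty_cache().

    Drops Python-side references first, then frees the CUDA caching allocator
    and collects CUDA IPC handles; safe to call on CPU-only machines.
    """
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
```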
Ritesh Gangnani
ff1609f91e Add SSD-1B as a supported model 2023-11-05 19:13:49 +05:30
Kohaku-Blueleaf
d4d3134f6d ManualCast for 10/16 series gpu 2023-10-28 15:24:26 +08:00
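ManualCast targets 10/16-series GPUs by casting weights and inputs to a chosen compute dtype inside forward() rather than relying on autocast. An illustrative sketch for a single layer type (the real change covers several layer types):

```python
# Illustrative sketch for one layer type; the real change covers several layer types.
import torch


class ManualCastLinearSketch(torch.nn.Linear):
    """Linear layer that casts weight, bias, and input to a compute dtype at call
    time, instead of relying on torch.autocast."""

    compute_dtype = torch.float32

    def forward(self, x):
        weight = self.weight.to(self.compute_dtype)
        bias = self.bias.to(self.compute_dtype) if self.bias is not None else None
        return torch.nn.functional.linear(x.to(self.compute_dtype), weight, bias)
```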
Kohaku-Blueleaf
dda067f64d ignore mps for fp8 2023-10-25 19:53:22 +08:00
Kohaku-Blueleaf
bf5067f50c Fix alphas cumprod 2023-10-25 12:54:28 +08:00
Kohaku-Blueleaf
4830b25136 Fix alphas_cumprod dtype 2023-10-25 11:53:37 +08:00
Kohaku-Blueleaf
1df6c8bfec fp8 for TE 2023-10-25 11:36:43 +08:00
Kohaku-Blueleaf
9c1eba2af3 Fix lint 2023-10-24 02:11:27 +08:00
Kohaku-Blueleaf
eaa9f5162f Add CPU fp8 support
Since norm layers need fp32, only the linear operation layers (conv2d/linear) are converted.
The TE also relies on some PyTorch functions that do not support bf16 autocast on CPU, so a condition was added to indicate whether the autocast is for the unet.
2023-10-24 01:49:05 +08:00
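The commit above keeps norm layers out of the fp8 conversion and only shrinks the large linear/conv weights. A minimal sketch of that selective conversion, assuming a PyTorch build that provides torch.float8_e4m3fn (a real implementation must also cast the weight back to the compute dtype inside forward()):

```python
# Minimal sketch of selective fp8 storage; assumes torch.float8_e4m3fn is available.
# A real implementation must also cast weights back to the compute dtype in forward().
import torch


def convert_linear_weights_to_fp8_sketch(module: torch.nn.Module) -> torch.nn.Module:
    """Store Linear/Conv2d weights in float8 while leaving norm layers untouched.

    Norm layers keep their higher-precision weights because they need fp32
    statistics; only the large matmul/conv weights are worth shrinking.
    """
    for child in module.modules():
        if isinstance(child, (torch.nn.Linear, torch.nn.Conv2d)):
            child.weight.data = child.weight.data.to(torch.float8_e4m3fn)
    return module
```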
Kohaku-Blueleaf
5f9ddfa46f Add sdxl only arg 2023-10-19 23:57:22 +08:00