My fork of Stable Diffusion WebUI Forge. Why not Optifine? I like Forge mods more. I made it because I'm too lazy to set everything up again every time I need to update something, break my system, change drives, etc.

This is a Private Project

Currently, we are only sending invitations to people who may be interested in the development of this project.

Please do not share code or information from this project publicly.

If you see this, please join our private Discord server for discussion: https://discord.gg/eTfuzT2z

Stable Diffusion Web UI Forge

Stable Diffusion Web UI Forge is a platform built on top of Stable Diffusion WebUI to make development easier and to optimize speed and resource consumption.

The name "Forge" is inspired by "Minecraft Forge". This project aims to become SD WebUI's Forge.

Forge will give you:

  1. Improved optimization. (Fastest speed and minimal memory use among all alternative software.)
  2. Patchable UNet and CLIP objects. (Developer-friendly platform.)

Improved Optimization

I tested with several devices; this is a typical result on 8GB VRAM (a laptop RTX 3070 Ti) with SDXL.

This is WebUI:

[screenshots of WebUI VRAM usage]

(average about 7.4GB/8GB, peak at about 7.9GB/8GB)

This is WebUI Forge:

[screenshots of WebUI Forge VRAM usage]

(average and peak are both about 6.3GB/8GB)

Also, you can see that Forge does not change WebUI results. Installing Forge is not a seed-breaking change.

We do not change the UI at all, but you will see the Forge version displayed here:

[screenshot of the version display]

"f0.0.1v1.7.0" means WebUI 1.7.0 with Forge 0.0.1

Changes

Forge removes all of WebUI's code related to speed and memory optimization and reworks everything.

All previous cmd flags like medvram, lowvram, medvram-sdxl, precision full, no half, no half vae, attention_xxx, upcast unet, ... are all REMOVED. Adding these flags will not cause an error, but they no longer do anything. We strongly encourage Forge users to remove all cmd flags and let Forge decide how to load models.

Currently, the behavior is:

"When loading a model to GPU, Forge will decide whether to load the entire model, or to load separated parts of the model. Then, when loading another model, Forge will try best to unload the previous model."

The only flag you may still need is --disable-offload-from-vram, which changes the above behavior to:

"When loading a model to GPU, Forge will decide whether to load the entire model, or to load separated parts of the model. Then, when loading another model, Forge will try best to keep the previous model in GPU without unloading it."

You should use --disable-offload-from-vram when, and only when, you have more than 20GB of GPU memory or you are on a Mac with MPS.
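The rule of thumb above can be written as a small predicate. This is a simplified sketch of the recommendation, not Forge's actual logic:

```python
def should_keep_previous_model_in_gpu(vram_gb: float, on_mac_mps: bool) -> bool:
    """Return True if --disable-offload-from-vram is worth enabling,
    per the rule of thumb above: more than 20GB of GPU memory, or
    running on macOS MPS."""
    return vram_gb > 20 or on_mac_mps


print(should_keep_previous_model_in_gpu(24, False))  # True
print(should_keep_previous_model_in_gpu(8, False))   # False
```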

If you really want to play with cmd flags, you can additionally control the GPU with:

(extreme VRAM cases)

--always-gpu
--always-cpu

(rare attention cases)

--attention-split
--attention-quad
--attention-pytorch
--disable-xformers
--disable-attention-upcast

(float point type)

--all-in-fp32
--all-in-fp16
--unet-in-bf16
--unet-in-fp16
--unet-in-fp8-e4m3fn
--unet-in-fp8-e5m2
--vae-in-fp16
--vae-in-fp32
--vae-in-bf16
--clip-in-fp8-e4m3fn
--clip-in-fp8-e5m2
--clip-in-fp16
--clip-in-fp32

(rare platforms)

--directml
--disable-ipex-hijack
--pytorch-deterministic
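As an illustration of what the float-point flags request, here is a hypothetical lookup table pairing each flag with the component and precision it targets. The table mirrors the flag names above; it is not Forge's actual implementation:

```python
# Hypothetical mapping from float-point cmd flags to (component, dtype name).
# Flag names come from the list above; the pairing is an illustration only.
DTYPE_FLAGS = {
    "--all-in-fp32": ("all", "float32"),
    "--all-in-fp16": ("all", "float16"),
    "--unet-in-bf16": ("unet", "bfloat16"),
    "--unet-in-fp16": ("unet", "float16"),
    "--unet-in-fp8-e4m3fn": ("unet", "float8_e4m3fn"),
    "--unet-in-fp8-e5m2": ("unet", "float8_e5m2"),
    "--vae-in-fp16": ("vae", "float16"),
    "--vae-in-fp32": ("vae", "float32"),
    "--vae-in-bf16": ("vae", "bfloat16"),
    "--clip-in-fp8-e4m3fn": ("clip", "float8_e4m3fn"),
    "--clip-in-fp8-e5m2": ("clip", "float8_e5m2"),
    "--clip-in-fp16": ("clip", "float16"),
    "--clip-in-fp32": ("clip", "float32"),
}

component, dtype = DTYPE_FLAGS["--unet-in-bf16"]
print(component, dtype)  # unet bfloat16
```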

Again, we do not recommend using any cmd flags unless you are very sure you really need them.

Patchable UNet

Now developing an extension is super simple. We finally have a patchable UNet.

Below is a single file with about 80 lines of code that supports FreeU:

extensions-builtin/sd_forge_freeu/scripts/forge_freeu.py

import torch
import gradio as gr
from modules import scripts


def Fourier_filter(x, threshold, scale):
    x_freq = torch.fft.fftn(x.float(), dim=(-2, -1))
    x_freq = torch.fft.fftshift(x_freq, dim=(-2, -1))
    B, C, H, W = x_freq.shape
    mask = torch.ones((B, C, H, W), device=x.device)
    crow, ccol = H // 2, W // 2
    mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
    x_freq = x_freq * mask
    x_freq = torch.fft.ifftshift(x_freq, dim=(-2, -1))
    x_filtered = torch.fft.ifftn(x_freq, dim=(-2, -1)).real
    return x_filtered.to(x.dtype)


def set_freeu_v2_patch(model, b1, b2, s1, s2):
    model_channels = model.model.model_config.unet_config["model_channels"]
    scale_dict = {model_channels * 4: (b1, s1), model_channels * 2: (b2, s2)}

    def output_block_patch(h, hsp, *args, **kwargs):
        scale = scale_dict.get(h.shape[1], None)
        if scale is not None:
            hidden_mean = h.mean(1).unsqueeze(1)
            B = hidden_mean.shape[0]
            hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / \
                          (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
            h[:, :h.shape[1] // 2] = h[:, :h.shape[1] // 2] * ((scale[0] - 1) * hidden_mean + 1)
            hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])
        return h, hsp

    m = model.clone()
    m.set_model_output_block_patch(output_block_patch)
    return m


class FreeUForForge(scripts.Script):
    def title(self):
        return "FreeU Integrated"

    def show(self, is_img2img):
        # make this extension visible in both the txt2img and img2img tabs.
        return scripts.AlwaysVisible

    def ui(self, *args, **kwargs):
        with gr.Accordion(open=False, label=self.title()):
            freeu_enabled = gr.Checkbox(label='Enabled', value=False)
            freeu_b1 = gr.Slider(label='B1', minimum=0, maximum=2, step=0.01, value=1.01)
            freeu_b2 = gr.Slider(label='B2', minimum=0, maximum=2, step=0.01, value=1.02)
            freeu_s1 = gr.Slider(label='S1', minimum=0, maximum=4, step=0.01, value=0.99)
            freeu_s2 = gr.Slider(label='S2', minimum=0, maximum=4, step=0.01, value=0.95)

        return freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2

    def process_batch(self, p, *script_args, **kwargs):
        freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2 = script_args

        if not freeu_enabled:
            return

        unet = p.sd_model.forge_objects.unet

        unet = set_freeu_v2_patch(unet, freeu_b1, freeu_b2, freeu_s1, freeu_s2)

        p.sd_model.forge_objects.unet = unet

        # The lines below add entries to the generation info text shown under
        # the image outputs in the UI. extra_generation_params does not
        # influence results.
        p.extra_generation_params.update(dict(
            freeu_enabled=freeu_enabled,
            freeu_b1=freeu_b1,
            freeu_b2=freeu_b2,
            freeu_s1=freeu_s1,
            freeu_s2=freeu_s2,
        ))

        return

It looks like this:

[screenshot of the FreeU UI]

Similar components like HyperTile, Kohya HighRes Fix, and SAG can all be implemented within 100 lines of code each (see the source for examples).
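The clone-and-patch pattern the FreeU script relies on can be sketched in plain Python. This toy PatchableModel is illustrative only: it mimics the clone() / set_model_output_block_patch shape used above, not Forge's real implementation:

```python
import copy


class PatchableModel:
    """Toy stand-in for a patchable UNet: patches are stored on the object,
    and clone() copies the patch list so an extension never mutates the
    shared model that other extensions (or the next generation) see."""

    def __init__(self):
        self.output_block_patches = []

    def clone(self):
        m = copy.copy(self)
        # Copy the patch list so patches added to the clone stay local to it.
        m.output_block_patches = list(self.output_block_patches)
        return m

    def set_model_output_block_patch(self, fn):
        self.output_block_patches.append(fn)

    def run_output_block(self, h):
        # Apply registered patches in order, threading the value through.
        for fn in self.output_block_patches:
            h = fn(h)
        return h


base = PatchableModel()
patched = base.clone()
patched.set_model_output_block_patch(lambda h: h * 2)

print(base.run_output_block(3))     # 3  (base left untouched)
print(patched.run_output_block(3))  # 6  (patch applied only to the clone)
```

This is why the FreeU script calls model.clone() before registering its patch: the original UNet stays untouched for other extensions and future generations.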

ControlNet can finally be called by different extensions. (About 80% of ControlNet's code can now be removed.)

Implementing Stable Video Diffusion and Zero123 is also super simple now (and we will add them soon).