# This is a Private Project

Currently, we are only sending invitations to people who may be interested in the development of this project.

Please do not share code or information from this project publicly.

If you see this, please join our private Discord server for discussion: https://discord.gg/rgZdjgrDTS
# Stable Diffusion Web UI Forge

Stable Diffusion Web UI Forge is a platform built on top of [Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to make development easier and to optimize speed and resource consumption.

The name "Forge" is inspired by "Minecraft Forge". This project aims to become SD WebUI's Forge.

Forge gives you:

1. Improved optimization (the fastest speed and minimal memory use among all alternative software).
2. Patchable UNet and CLIP objects (a developer-friendly platform).
# Improved Optimization

I tested with several devices, and this is a typical result from 8GB VRAM (3070 Ti laptop) with SDXL.

**This is WebUI:**

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/c32baedd-500b-408f-8cfb-ed4570c883bd)

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/cb6098de-f2d4-4b25-9566-df4302dda396)

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/5447e8b7-f3ca-4003-9961-02027c8181e8)

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/f3cb57d9-ac7a-4667-8b3f-303139e38afa)

(average about 7.4GB/8GB, peak at about 7.9GB/8GB)

**This is WebUI Forge:**

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/0c45cd98-0b14-42c3-9556-28e48d4d5fa0)

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/3a71f5d4-39e5-4ab1-81cf-8eaa790a2dc8)

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/65fbb4a5-ee73-4bb9-9c5f-8a958cd9674d)

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/76f181a1-c5fb-4323-a6cc-b6308a45587e)

(average and peak are both 6.3GB/8GB)

Also, you can see that Forge does not change WebUI results. Installing Forge is not a seed-breaking change.

We do not change any UI, but you will see the Forge version here:

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/93fdbccf-2f9b-4d45-ad81-c7c4106a357b)

"f0.0.1v1.7.0" means WebUI 1.7.0 with Forge 0.0.1.
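For illustration only, such a combined tag can be split with a small regex (`parse_forge_version` is a hypothetical helper, not part of Forge):

```python
import re

def parse_forge_version(tag):
    # Split a combined tag like "f0.0.1v1.7.0" into (forge_version, webui_version).
    m = re.fullmatch(r"f(?P<forge>[\d.]+)v(?P<webui>[\d.]+)", tag)
    if m is None:
        raise ValueError(f"unrecognized version tag: {tag}")
    return m.group("forge"), m.group("webui")

print(parse_forge_version("f0.0.1v1.7.0"))  # ('0.0.1', '1.7.0')
```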
### Changes

Forge removes all of WebUI's code related to speed and memory optimization and reworks everything. All previous cmd flags like medvram, lowvram, medvram-sdxl, precision full, no half, no half vae, attention_xxx, upcast unet, etc. are REMOVED. Adding these flags will not cause an error, but they no longer do anything. **We highly encourage Forge users to remove all cmd flags and let Forge decide how to load models.**

Without any cmd flags, Forge can run SDXL with 4GB VRAM and SD1.5 with 2GB VRAM.

**The only flag that you may still need** is `--always-offload-from-vram` (this flag makes things **slower**). This option makes Forge always unload models from VRAM. It can be useful if you run multiple programs together and want Forge to use less VRAM and leave some VRAM to other software, if you use some old extensions that compete with the main UI for VRAM, or (very rarely) when you get an OOM error.
If you really want to play with cmd flags, you can additionally control the GPU with:

(extreme VRAM cases)

    --always-gpu
    --always-cpu

(rare attention cases)

    --attention-split
    --attention-quad
    --attention-pytorch
    --disable-xformers
    --disable-attention-upcast

(floating point type)

    --all-in-fp32
    --all-in-fp16
    --unet-in-bf16
    --unet-in-fp16
    --unet-in-fp8-e4m3fn
    --unet-in-fp8-e5m2
    --vae-in-fp16
    --vae-in-fp32
    --vae-in-bf16
    --clip-in-fp8-e4m3fn
    --clip-in-fp8-e5m2
    --clip-in-fp16
    --clip-in-fp32

(rare platforms)

    --directml
    --disable-ipex-hijack
    --pytorch-deterministic

Again, Forge does not recommend users use any cmd flags unless you are very sure that you really need them.
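To see why the floating point flags matter, here is a rough back-of-envelope calculation of the VRAM needed just to hold UNet weights at each precision (the ~2.6B parameter count for SDXL's UNet is an assumption for illustration; activations, VAE, and text encoders add more on top):

```python
# Approximate VRAM to hold the UNet weights alone at each precision.
# Parameter count is an assumption for illustration only.
params = 2.6e9  # roughly SDXL's UNet

for name, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("fp8", 1)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{name}: {gib:.1f} GiB")
```

This is why fp8 UNet flags can make a visible difference on 4GB-8GB cards.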
# Patchable UNet

Now developing an extension is super simple. We finally have a patchable UNet.

Below, a single file with about 80 lines of code supports FreeU:

`extensions-builtin/sd_forge_freeu/scripts/forge_freeu.py`
```python
import torch
import gradio as gr
from modules import scripts


def Fourier_filter(x, threshold, scale):
    x_freq = torch.fft.fftn(x.float(), dim=(-2, -1))
    x_freq = torch.fft.fftshift(x_freq, dim=(-2, -1))

    B, C, H, W = x_freq.shape
    mask = torch.ones((B, C, H, W), device=x.device)

    crow, ccol = H // 2, W // 2
    mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
    x_freq = x_freq * mask

    x_freq = torch.fft.ifftshift(x_freq, dim=(-2, -1))
    x_filtered = torch.fft.ifftn(x_freq, dim=(-2, -1)).real

    return x_filtered.to(x.dtype)


def set_freeu_v2_patch(model, b1, b2, s1, s2):
    model_channels = model.model.model_config.unet_config["model_channels"]
    scale_dict = {model_channels * 4: (b1, s1), model_channels * 2: (b2, s2)}

    def output_block_patch(h, hsp, *args, **kwargs):
        scale = scale_dict.get(h.shape[1], None)
        if scale is not None:
            hidden_mean = h.mean(1).unsqueeze(1)
            B = hidden_mean.shape[0]
            hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / \
                          (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)
            h[:, :h.shape[1] // 2] = h[:, :h.shape[1] // 2] * ((scale[0] - 1) * hidden_mean + 1)
            hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])
        return h, hsp

    m = model.clone()
    m.set_model_output_block_patch(output_block_patch)
    return m


class FreeUForForge(scripts.Script):
    def title(self):
        return "FreeU Integrated"

    def show(self, is_img2img):
        # make this extension visible in both txt2img and img2img tab.
        return scripts.AlwaysVisible

    def ui(self, *args, **kwargs):
        with gr.Accordion(open=False, label=self.title()):
            freeu_enabled = gr.Checkbox(label='Enabled', value=False)
            freeu_b1 = gr.Slider(label='B1', minimum=0, maximum=2, step=0.01, value=1.01)
            freeu_b2 = gr.Slider(label='B2', minimum=0, maximum=2, step=0.01, value=1.02)
            freeu_s1 = gr.Slider(label='S1', minimum=0, maximum=4, step=0.01, value=0.99)
            freeu_s2 = gr.Slider(label='S2', minimum=0, maximum=4, step=0.01, value=0.95)

        return freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2

    def process_before_every_sampling(self, p, *script_args, **kwargs):
        # This will be called before every sampling.
        # If you use highres fix, this will be called twice.

        freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2 = script_args

        if not freeu_enabled:
            return

        unet = p.sd_model.forge_objects.unet
        unet = set_freeu_v2_patch(unet, freeu_b1, freeu_b2, freeu_s1, freeu_s2)
        p.sd_model.forge_objects.unet = unet

        # Below codes will add some logs to the texts below the image outputs on UI.
        # The extra_generation_params does not influence results.
        p.extra_generation_params.update(dict(
            freeu_enabled=freeu_enabled,
            freeu_b1=freeu_b1,
            freeu_b2=freeu_b2,
            freeu_s1=freeu_s1,
            freeu_s2=freeu_s2,
        ))

        return
```
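A quick sketch of how `set_freeu_v2_patch` decides which feature maps to rescale: only output blocks whose channel count matches `model_channels * 4` or `model_channels * 2` get a scale pair; everything else is passed through. Assuming SD1.5's `model_channels` of 320 (the real value is read from the loaded model's config):

```python
# Hypothetical standalone sketch of the scale_dict lookup in set_freeu_v2_patch.
model_channels = 320  # SD1.5; an assumption for illustration
b1, b2, s1, s2 = 1.01, 1.02, 0.99, 0.95

scale_dict = {model_channels * 4: (b1, s1), model_channels * 2: (b2, s2)}

for channels in (1280, 640, 320):
    # Blocks whose channel count is not a key are left untouched (scale is None).
    print(channels, "->", scale_dict.get(channels))
```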
It looks like this:

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/a7798cf2-057c-43e0-883a-5f8643af8529)

Similar components like HyperTile, KohyaHighResFix, and SAG can all be implemented within 100 lines of code (see also the codebase).

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/e2fc1b73-e6ee-405e-864c-c67afd92a1db)

ControlNets can finally be called by different extensions. (80% of ControlNet's code can be removed now; this will start soon.)

Implementing Stable Video Diffusion and Zero123 is also super simple now (see also the codebase).

*Stable Video Diffusion:*

`extensions-builtin/sd_forge_svd/scripts/forge_svd.py`
```python
import torch
import gradio as gr
import os
import pathlib

from modules import script_callbacks
from modules.paths import models_path
from modules.ui_common import ToolButton, refresh_symbol
from modules import shared

from modules_forge.forge_util import numpy_to_pytorch, pytorch_to_numpy
from ldm_patched.modules.sd import load_checkpoint_guess_config
from ldm_patched.contrib.external_video_model import VideoLinearCFGGuidance, SVD_img2vid_Conditioning
from ldm_patched.contrib.external import KSampler, VAEDecode


opVideoLinearCFGGuidance = VideoLinearCFGGuidance()
opSVD_img2vid_Conditioning = SVD_img2vid_Conditioning()
opKSampler = KSampler()
opVAEDecode = VAEDecode()

svd_root = os.path.join(models_path, 'svd')
os.makedirs(svd_root, exist_ok=True)
svd_filenames = []


def update_svd_filenames():
    global svd_filenames
    svd_filenames = [
        pathlib.Path(x).name for x in
        shared.walk_files(svd_root, allowed_extensions=[".pt", ".ckpt", ".safetensors"])
    ]
    return svd_filenames


@torch.inference_mode()
@torch.no_grad()
def predict(filename, width, height, video_frames, motion_bucket_id, fps, augmentation_level,
            sampling_seed, sampling_steps, sampling_cfg, sampling_sampler_name, sampling_scheduler,
            sampling_denoise, guidance_min_cfg, input_image):
    filename = os.path.join(svd_root, filename)
    model_raw, _, vae, clip_vision = \
        load_checkpoint_guess_config(filename, output_vae=True, output_clip=False, output_clipvision=True)
    model = opVideoLinearCFGGuidance.patch(model_raw, guidance_min_cfg)[0]
    init_image = numpy_to_pytorch(input_image)
    positive, negative, latent_image = opSVD_img2vid_Conditioning.encode(
        clip_vision, init_image, vae, width, height, video_frames, motion_bucket_id, fps, augmentation_level)
    output_latent = opKSampler.sample(model, sampling_seed, sampling_steps, sampling_cfg,
                                      sampling_sampler_name, sampling_scheduler, positive,
                                      negative, latent_image, sampling_denoise)[0]
    output_pixels = opVAEDecode.decode(vae, output_latent)[0]
    outputs = pytorch_to_numpy(output_pixels)
    return outputs


def on_ui_tabs():
    with gr.Blocks() as svd_block:
        with gr.Row():
            with gr.Column():
                input_image = gr.Image(label='Input Image', source='upload', type='numpy', height=400)

                with gr.Row():
                    filename = gr.Dropdown(label="SVD Checkpoint Filename",
                                           choices=svd_filenames,
                                           value=svd_filenames[0] if len(svd_filenames) > 0 else None)
                    refresh_button = ToolButton(value=refresh_symbol, tooltip="Refresh")
                    refresh_button.click(
                        fn=lambda: gr.update(choices=update_svd_filenames()),
                        inputs=[], outputs=filename)

                width = gr.Slider(label='Width', minimum=16, maximum=8192, step=8, value=1024)
                height = gr.Slider(label='Height', minimum=16, maximum=8192, step=8, value=576)
                video_frames = gr.Slider(label='Video Frames', minimum=1, maximum=4096, step=1, value=14)
                motion_bucket_id = gr.Slider(label='Motion Bucket Id', minimum=1, maximum=1023, step=1, value=127)
                fps = gr.Slider(label='Fps', minimum=1, maximum=1024, step=1, value=6)
                augmentation_level = gr.Slider(label='Augmentation Level', minimum=0.0, maximum=10.0, step=0.01,
                                               value=0.0)
                sampling_steps = gr.Slider(label='Sampling Steps', minimum=1, maximum=200, step=1, value=20)
                sampling_cfg = gr.Slider(label='CFG Scale', minimum=0.0, maximum=50.0, step=0.1, value=2.5)
                sampling_denoise = gr.Slider(label='Sampling Denoise', minimum=0.0, maximum=1.0, step=0.01, value=1.0)
                guidance_min_cfg = gr.Slider(label='Guidance Min Cfg', minimum=0.0, maximum=100.0, step=0.5, value=1.0)
                sampling_sampler_name = gr.Radio(label='Sampler Name',
                                                 choices=['euler', 'euler_ancestral', 'heun', 'heunpp2', 'dpm_2',
                                                          'dpm_2_ancestral', 'lms', 'dpm_fast', 'dpm_adaptive',
                                                          'dpmpp_2s_ancestral', 'dpmpp_sde', 'dpmpp_sde_gpu',
                                                          'dpmpp_2m', 'dpmpp_2m_sde', 'dpmpp_2m_sde_gpu',
                                                          'dpmpp_3m_sde', 'dpmpp_3m_sde_gpu', 'ddpm', 'lcm', 'ddim',
                                                          'uni_pc', 'uni_pc_bh2'], value='euler')
                sampling_scheduler = gr.Radio(label='Scheduler',
                                              choices=['normal', 'karras', 'exponential', 'sgm_uniform', 'simple',
                                                       'ddim_uniform'], value='karras')
                sampling_seed = gr.Number(label='Seed', value=12345, precision=0)

                generate_button = gr.Button(value="Generate")

                ctrls = [filename, width, height, video_frames, motion_bucket_id, fps, augmentation_level,
                         sampling_seed, sampling_steps, sampling_cfg, sampling_sampler_name, sampling_scheduler,
                         sampling_denoise, guidance_min_cfg, input_image]

            with gr.Column():
                output_gallery = gr.Gallery(label='Gallery', show_label=False, object_fit='contain',
                                            visible=True, height=1024, columns=4)

        generate_button.click(predict, inputs=ctrls, outputs=[output_gallery])

    return [(svd_block, "SVD", "svd")]


update_svd_filenames()
script_callbacks.on_ui_tabs(on_ui_tabs)
```
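As an aside, the `update_svd_filenames` helper above boils down to suffix filtering; a minimal standalone stand-in (plain `pathlib` on a list of paths, no WebUI imports):

```python
import pathlib

# Keep only checkpoint-like files and reduce each path to its bare file name,
# mirroring what update_svd_filenames does with shared.walk_files.
ALLOWED_EXTENSIONS = {".pt", ".ckpt", ".safetensors"}

def filter_checkpoints(paths):
    return [pathlib.Path(p).name
            for p in paths
            if pathlib.Path(p).suffix in ALLOWED_EXTENSIONS]

files = ["models/svd/svd_xt.safetensors", "models/svd/readme.txt", "models/svd/old.ckpt"]
print(filter_checkpoints(files))  # ['svd_xt.safetensors', 'old.ckpt']
```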
Note that although the above code looks independent, it actually offloads/unloads any other models automatically. For example, below I open the WebUI, load SDXL, generate an image, then go to SVD and generate image frames. You can see that GPU memory is perfectly managed: SDXL is moved to RAM and then SVD is moved to the GPU.

Note that this management is fully automatic. This makes writing extensions super simple.

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/ac7ed152-cd33-4645-94af-4c43bb8c3d88)

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/cdcb23ad-02dc-4e39-be74-98e927550ef6)

Similarly, Zero123:

![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/d1a4a17d-f382-442d-91f2-fc5b6c10737f)
### Write a simple ControlNet

Below is a simple extension that runs a completely independent pass of ControlNet and never conflicts with any other extensions:

`extensions-builtin/sd_forge_controlnet_example/scripts/sd_forge_controlnet_example.py`

Note that this extension is hidden because it is only for developers. To see it in the UI, use `--show-controlnet-example`.

The memory optimization in this example is fully automatic. You do not need to care about memory or inference speed, but you may want to cache objects if you wish.
```python
# Use --show-controlnet-example to see this extension.

import cv2
import gradio as gr
import torch

from modules import scripts
from modules.shared_cmd_options import cmd_opts
from modules_forge.shared import supported_preprocessors
from modules.modelloader import load_file_from_url
from ldm_patched.modules.controlnet import load_controlnet
from modules_forge.controlnet import apply_controlnet_advanced
from modules_forge.forge_util import numpy_to_pytorch
from modules_forge.shared import controlnet_dir


class ControlNetExampleForge(scripts.Script):
    model = None

    def title(self):
        return "ControlNet Example for Developers"

    def show(self, is_img2img):
        # make this extension visible in both txt2img and img2img tab.
        return scripts.AlwaysVisible

    def ui(self, *args, **kwargs):
        with gr.Accordion(open=False, label=self.title()):
            gr.HTML('This is an example controlnet extension for developers.')
            gr.HTML('You see this extension because you used --show-controlnet-example')
            input_image = gr.Image(source='upload', type='numpy')
            funny_slider = gr.Slider(label='This slider does nothing. It just shows you how to transfer parameters.',
                                     minimum=0.0, maximum=1.0, value=0.5)

        return input_image, funny_slider

    def process(self, p, *script_args, **kwargs):
        input_image, funny_slider = script_args

        # This slider does nothing. It just shows you how to transfer parameters.
        del funny_slider

        if input_image is None:
            return

        # controlnet_canny_path = load_file_from_url(
        #     url='https://huggingface.co/lllyasviel/sd_control_collection/resolve/main/sai_xl_canny_256lora.safetensors',
        #     model_dir=model_dir,
        #     file_name='sai_xl_canny_256lora.safetensors'
        # )
        controlnet_canny_path = load_file_from_url(
            url='https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/control_v11p_sd15_canny_fp16.safetensors',
            model_dir=controlnet_dir,
            file_name='control_v11p_sd15_canny_fp16.safetensors'
        )
        print('The model [control_v11p_sd15_canny_fp16.safetensors] download finished.')

        self.model = load_controlnet(controlnet_canny_path)
        print('Controlnet loaded.')

        return

    def process_before_every_sampling(self, p, *script_args, **kwargs):
        # This will be called before every sampling.
        # If you use highres fix, this will be called twice.

        input_image, funny_slider = script_args

        if input_image is None or self.model is None:
            return

        B, C, H, W = kwargs['noise'].shape  # latent_shape
        height = H * 8
        width = W * 8
        batch_size = p.batch_size

        preprocessor = supported_preprocessors['canny']

        # detect control at certain resolution
        control_image = preprocessor(
            input_image, resolution=512, slider_1=100, slider_2=200, slider_3=None)

        # here we just use nearest neighbour to align input shape.
        # You may want crop and resize, or crop and fill, or others.
        control_image = cv2.resize(
            control_image, (width, height), interpolation=cv2.INTER_NEAREST)

        # Output preprocessor result. Now called every sampling. Cache in your own way.
        p.extra_result_images.append(control_image)

        print('Preprocessor Canny finished.')

        control_image_bchw = numpy_to_pytorch(control_image).movedim(-1, 1)

        unet = p.sd_model.forge_objects.unet

        # Unet has input, middle, output blocks, and we can give different weights
        # to each layers in all blocks.
        # Below is an example for stronger control in middle block.
        # This is helpful for some high-res fix passes. (p.is_hr_pass)
        positive_advanced_weighting = {
            'input': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2],
            'middle': [1.0],
            'output': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2]
        }
        negative_advanced_weighting = {
            'input': [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.05, 1.15, 1.25],
            'middle': [1.05],
            'output': [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95, 1.05, 1.15, 1.25]
        }

        # The advanced_frame_weighting is a weight applied to each image in a batch.
        # The length of this list must be same with batch size
        # For example, if batch size is 5, the below list is [0.2, 0.4, 0.6, 0.8, 1.0]
        # If you view the 5 images as 5 frames in a video, this will lead to
        # progressively stronger control over time.
        advanced_frame_weighting = [float(i + 1) / float(batch_size) for i in range(batch_size)]

        # The advanced_sigma_weighting allows you to dynamically compute control
        # weights given diffusion timestep (sigma).
        # For example below code can softly make beginning steps stronger than ending steps.
        sigma_max = unet.model.model_sampling.sigma_max
        sigma_min = unet.model.model_sampling.sigma_min
        advanced_sigma_weighting = lambda s: (s - sigma_min) / (sigma_max - sigma_min)

        # You can even input a tensor to mask all control injections
        # The mask will be automatically resized during inference in UNet.
        # The size should be B 1 H W and the H and W are not important
        # because they will be resized automatically
        advanced_mask_weighting = torch.ones(size=(1, 1, 512, 512))

        # But in this simple example we do not use them
        positive_advanced_weighting = None
        negative_advanced_weighting = None
        advanced_frame_weighting = None
        advanced_sigma_weighting = None
        advanced_mask_weighting = None

        unet = apply_controlnet_advanced(unet=unet, controlnet=self.model, image_bchw=control_image_bchw,
                                         strength=0.6, start_percent=0.0, end_percent=0.8,
                                         positive_advanced_weighting=positive_advanced_weighting,
                                         negative_advanced_weighting=negative_advanced_weighting,
                                         advanced_frame_weighting=advanced_frame_weighting,
                                         advanced_sigma_weighting=advanced_sigma_weighting,
                                         advanced_mask_weighting=advanced_mask_weighting)

        p.sd_model.forge_objects.unet = unet

        # Below codes will add some logs to the texts below the image outputs on UI.
        # The extra_generation_params does not influence results.
        p.extra_generation_params.update(dict(
            controlnet_info='You should see these texts below output images!',
        ))

        return


# Use --show-controlnet-example to see this extension.
if not cmd_opts.show_controlnet_example:
    del ControlNetExampleForge
```
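The two "advanced" weightings sketched (and then disabled) in the example above are plain linear ramps; worked out in isolation:

```python
# advanced_frame_weighting: linear ramp over the batch, so later "frames"
# receive stronger control.
batch_size = 5
advanced_frame_weighting = [float(i + 1) / float(batch_size) for i in range(batch_size)]
print(advanced_frame_weighting)  # [0.2, 0.4, 0.6, 0.8, 1.0]

# advanced_sigma_weighting: maps sigma_max -> 1.0 and sigma_min -> 0.0, so the
# earliest (noisiest) steps get the strongest control. This sigma range is an
# assumption for illustration; real values come from unet.model.model_sampling.
sigma_min, sigma_max = 0.03, 14.6
advanced_sigma_weighting = lambda s: (s - sigma_min) / (sigma_max - sigma_min)
print(advanced_sigma_weighting(sigma_max), advanced_sigma_weighting(sigma_min))  # 1.0 0.0
```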
![image](https://github.com/lllyasviel/stable-diffusion-webui-forge/assets/19834515/0a703d8b-27df-4608-8b12-aff750f20ffa)

### Add a preprocessor

Below is the full code to add a normalbae preprocessor with perfect memory management.

You can use arbitrary independent extensions to add a preprocessor.

Your preprocessor will be read by all other extensions using `modules_forge.shared.preprocessors`.

The code below is in `extensions-builtin/forge_preprocessor_normalbae/scripts/preprocessor_normalbae.py`:
```python
from modules_forge.supported_preprocessor import Preprocessor, PreprocessorParameter
from modules_forge.shared import preprocessor_dir, add_supported_preprocessor
from modules_forge.forge_util import resize_image_with_pad
from modules.modelloader import load_file_from_url

import types
import torch
import numpy as np

from einops import rearrange
from annotator.normalbae.models.NNET import NNET
from annotator.normalbae import load_checkpoint
from torchvision import transforms


class PreprocessorNormalBae(Preprocessor):
    def __init__(self):
        super().__init__()
        self.name = 'normalbae'
        self.tags = ['NormalMap']
        self.model_filename_filters = ['normal']
        self.slider_resolution = PreprocessorParameter(
            label='Resolution', minimum=128, maximum=2048, value=512, step=8, visible=True)
        self.slider_1 = PreprocessorParameter(visible=False)
        self.slider_2 = PreprocessorParameter(visible=False)
        self.slider_3 = PreprocessorParameter(visible=False)
        self.show_control_mode = True
        self.do_not_need_model = False
        self.sorting_priority = 100  # higher goes to top in the list

    def load_model(self):
        if self.model_patcher is not None:
            return

        model_path = load_file_from_url(
            "https://huggingface.co/lllyasviel/Annotators/resolve/main/scannet.pt",
            model_dir=preprocessor_dir)

        args = types.SimpleNamespace()
        args.mode = 'client'
        args.architecture = 'BN'
        args.pretrained = 'scannet'
        args.sampling_ratio = 0.4
        args.importance_ratio = 0.7
        model = NNET(args)
        model = load_checkpoint(model_path, model)
        self.norm = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        self.model_patcher = self.setup_model_patcher(model)

    def __call__(self, input_image, resolution, slider_1=None, slider_2=None, slider_3=None, **kwargs):
        input_image, remove_pad = resize_image_with_pad(input_image, resolution)

        self.load_model()
        self.move_all_model_patchers_to_gpu()

        assert input_image.ndim == 3
        image_normal = input_image

        with torch.no_grad():
            image_normal = self.send_tensor_to_model_device(torch.from_numpy(image_normal))
            image_normal = image_normal / 255.0
            image_normal = rearrange(image_normal, 'h w c -> 1 c h w')
            image_normal = self.norm(image_normal)

            normal = self.model_patcher.model(image_normal)
            normal = normal[0][-1][:, :3]
            normal = ((normal + 1) * 0.5).clip(0, 1)

            normal = rearrange(normal[0], 'c h w -> h w c').cpu().numpy()
            normal_image = (normal * 255.0).clip(0, 255).astype(np.uint8)

        return remove_pad(normal_image)


add_supported_preprocessor(PreprocessorNormalBae())
```
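The post-processing at the end of `__call__` maps the network's [-1, 1] normal components to displayable [0, 255] pixels; the same mapping for a single value in plain Python (a sketch of the tensor operations, not Forge code):

```python
# Scalar version of ((normal + 1) * 0.5).clip(0, 1) * 255 from the code above.
def normal_to_pixel(v):
    v = (v + 1.0) * 0.5          # [-1, 1] -> [0, 1]
    v = min(max(v, 0.0), 1.0)    # clip, matching .clip(0, 1)
    return int(v * 255.0)        # truncate, matching astype(np.uint8)

print([normal_to_pixel(v) for v in (-1.0, 0.0, 1.0)])  # [0, 127, 255]
```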
# About Extensions

ControlNet and TiledVAE are integrated, and you should uninstall these two extensions:

    sd-webui-controlnet
    multidiffusion-upscaler-for-automatic1111

All UI-related extensions should work without problems, like:

    canvas-zoom
    different translations
    etc

Below extensions are tested and work well:

    Dynamic Prompts
    Adetailer
    Ultimate SD Upscale
    Reactor

Below extensions will be tested soon:

    Regional Prompter (I have not figured out how to use that UI yet... will test later)

(Tiled diffusion is integrated now, so there is no need to install extra extensions. Also, the current smart UNet offload is much better than MultiDiffusion: by automatically offloading the UNet to RAM, people can directly generate 4k images without MultiDiffusion. For images bigger than 4k, use Ultimate SD Upscale.)

(But if you want to use some special features of MultiDiffusion like inversion or region prompts, you can probably still use it, though the need should be very rare.)