# About Fooocus

Fooocus is an unofficial open-source implementation built around Stable Diffusion. It makes offline image generation easy and is very simple to deploy.

https://github.com/lllyasviel/Fooocus/tree/main

# Download and installation

Enter Eshell and load the PyTorch dtk-23.04 environment:

```bash
module rm compiler/rocm/2.9
module load compiler/rocm/dtk-23.04
module load apps/DeepLearning/PyTorch/1.13.1/pytorch-1.13.1-py3.9-dtk23.04
```

Clone the official repository (again, this is a mirror repository I created myself):

```bash
git clone https://gitee.com/Cerber2ol8/Fooocus.git
cd Fooocus
```

Clone the ComfyUI repository:

```bash
mkdir repositories && cd repositories
git clone https://gitee.com/Cerber2ol8/ComfyUI.git
mv ComfyUI ComfyUI-from-StabilityAI-Official
cd ..
```
```bash
# Install pygit2 (not of much use here, since the compute nodes cannot git clone anyway)
pip3 install pygit2==1.12.2
```

Replace the contents of requirements_versions.txt with:

```text
torchsde>=0.2.5
einops>=0.4.1
transformers>=4.30.2
safetensors>=0.3.1
accelerate>=0.21.0
pyyaml>=6.0
Pillow>=9.2.0
scipy>=1.9.3
tqdm>=4.64.1
psutil>=5.9.5
numpy>=1.23.5
pytorch_lightning>=1.9.4
omegaconf>=2.2.3
gradio>=3.39.0
pygit2>=1.12.2
```

Install the dependencies manually. launch.py also installs them at startup, but installing ahead of time avoids failures caused by the compute nodes having no network access.

```bash
pip3 install -r requirements_versions.txt
```

A version conflict with scikit-image will appear; install it separately:

```bash
pip3 install scikit-image
```

On a machine that can access Hugging Face, manually download the model weight files (the supercomputer hosts cannot reach Hugging Face):

```bash
# diffusion
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0_0.9vae.safetensors

wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0_0.9vae.safetensors

# lora
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors

# vae
wget https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth

# upscaler
wget https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_upscaler_s409985e5.bin

# expansion
wget https://huggingface.co/lllyasviel/misc/resolve/main/fooocus_expansion.bin
```
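If wget is inconvenient, the same files can also be fetched with a short huggingface_hub script on that machine. A minimal sketch, assuming huggingface_hub is installed there; the repo and file names simply mirror the wget commands above:

```python
# download_weights.py -- sketch of fetching the same files via huggingface_hub
# (assumes `pip install huggingface_hub` on the machine with network access)
from huggingface_hub import hf_hub_download

files = [
    ("stabilityai/stable-diffusion-xl-base-1.0", "sd_xl_base_1.0_0.9vae.safetensors"),
    ("stabilityai/stable-diffusion-xl-refiner-1.0", "sd_xl_refiner_1.0_0.9vae.safetensors"),
    ("stabilityai/stable-diffusion-xl-base-1.0", "sd_xl_offset_example-lora_1.0.safetensors"),
    ("lllyasviel/misc", "xlvaeapp.pth"),
    ("lllyasviel/misc", "fooocus_upscaler_s409985e5.bin"),
    ("lllyasviel/misc", "fooocus_expansion.bin"),
]

for repo_id, filename in files:
    # Each file is copied into ./downloads so it can be uploaded to the cluster afterwards.
    path = hf_hub_download(repo_id=repo_id, filename=filename, local_dir="./downloads")
    print("saved", path)
```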

Following modules/path.py, upload the model weight files to the corresponding directories on the supercomputer.

```python
# modules/path.py

modelfile_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../models/checkpoints/'))
lorafile_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../models/loras/'))
vae_approx_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../models/vae_approx/'))
upscale_models_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../models/upscale_models/'))
fooocus_expansion_path = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                                      '../models/prompt_expansion/fooocus_expansion'))
```

```text
# diffusion
models/checkpoints/sd_xl_base_1.0_0.9vae.safetensors
models/checkpoints/sd_xl_refiner_1.0_0.9vae.safetensors

# lora
models/loras/sd_xl_offset_example-lora_1.0.safetensors

# vae
models/vae_approx/xlvaeapp.pth

# upscaler
models/upscale_models/fooocus_upscaler_s409985e5.bin

# expansion
# Note: the file must be saved as pytorch_model.bin
models/prompt_expansion/fooocus_expansion/pytorch_model.bin
```
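Before launching, it can help to verify that the files ended up where modules/path.py expects them. A minimal check script (my own helper, not part of Fooocus), run from the Fooocus root:

```python
# check_models.py -- verify that every weight file listed above is in place
import os

expected = [
    "models/checkpoints/sd_xl_base_1.0_0.9vae.safetensors",
    "models/checkpoints/sd_xl_refiner_1.0_0.9vae.safetensors",
    "models/loras/sd_xl_offset_example-lora_1.0.safetensors",
    "models/vae_approx/xlvaeapp.pth",
    "models/upscale_models/fooocus_upscaler_s409985e5.bin",
    "models/prompt_expansion/fooocus_expansion/pytorch_model.bin",
]

for rel in expected:
    status = "ok" if os.path.isfile(rel) else "MISSING"
    print(f"{status:8} {rel}")
```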

# Using a container environment

The AC platform currently has no container image that meets the requirements, so you may need to set up the environment yourself (a container can open port 8080 to expose the web UI).

Download the image locally:

```bash
docker pull image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.13.1-centos7.6-dtk-23.04-py39-latest
```

Pack the image:

```bash
docker save -o 1.13.1-centos7.6-dtk-23.04-py39-latest.tar.gz image.sourcefind.cn:5000/dcu/admin/base/pytorch:1.13.1-centos7.6-dtk-23.04-py39-latest
```

Then upload 1.13.1-centos7.6-dtk-23.04-py39-latest.tar.gz to the zz supercomputer platform (I have already shared this image: search for dtk-23.04 in the image repository; the image name is 1.13.1-centos7.6-dtk-23.04-py39-latest).

Create a container instance with at least 4 cores, 32 GB of RAM and 1 DCU, open port 8080, then SSH into the deployed container.

Since the pip packages were already installed in the previous section (including the packages for the official SD setup), everything can now be run directly.

In the container environment:

Because the numpy version required by scipy (>=1.18.5 and <1.25.0) is incompatible with the numpy shipped in the image (1.25.0), we need to point Python at the numpy in our own local directory instead.

Prepend the local directory to PYTHONPATH:

```bash
export PYTHONPATH=$usr_home/.local/lib/python3.9/site-packages:$PYTHONPATH  # replace $usr_home with your own home directory
# Checking the numpy version now shows 1.24.4
pip3 show numpy
# Name: numpy
# Version: 1.24.4
```
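To double-check that the interpreter really resolves the local copy rather than the one baked into the image, a quick check (illustrative; the expected paths depend on your home directory):

```python
# Confirm which numpy the interpreter picks up after adjusting PYTHONPATH.
import numpy

print(numpy.__version__)  # expected: 1.24.4 (the copy under ~/.local), not 1.25.0
print(numpy.__file__)     # should point into .local/lib/python3.9/site-packages
```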
```bash
cd Fooocus
python launch.py --port 8080
```

Problems that may come up, and how to fix them:

1. `ImportError: cannot import name 'load_additional_models' from 'comfy.sample'`
```text
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/public/home/msliuzy/Fooocus/modules/async_worker.py", line 17, in worker
    import modules.default_pipeline as pipeline
  File "/public/home/msliuzy/Fooocus/modules/default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "/public/home/msliuzy/Fooocus/modules/core.py", line 12, in <module>
    from comfy.sample import prepare_mask, broadcast_cond, load_additional_models, cleanup_additional_models
ImportError: cannot import name 'load_additional_models' from 'comfy.sample' (/public/home/msliuzy/Fooocus/repositories/ComfyUI-from-StabilityAI-Official/comfy/sample.py)
```

This looks like a bug in the Fooocus project, and it is easy to fix. Either change load_additional_models to get_additional_models in modules/core.py (a sketch of this option follows the code block below), or add the following code at line 67 of repositories/ComfyUI-from-StabilityAI-Official/comfy/sample.py:

```python
def load_additional_models(positive, negative, dtype):
    """loads additional models in positive and negative conditioning"""
    control_nets = set(get_models_from_cond(positive, "control") + get_models_from_cond(negative, "control"))

    inference_memory = 0
    control_models = []
    for m in control_nets:
        control_models += m.get_models()
        inference_memory += m.inference_memory_requirements(dtype)

    gligen = get_models_from_cond(positive, "gligen") + get_models_from_cond(negative, "gligen")
    gligen = [x[1] for x in gligen]
    models = control_models + gligen
    return models, inference_memory
```
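For the first option, a minimal sketch of what the changed import in modules/core.py might look like, keeping the old name as an alias so the rest of the file stays untouched (line numbers may differ in your checkout):

```python
# modules/core.py, around line 12 (sketch of the rename option).
# ComfyUI renamed load_additional_models to get_additional_models, so import the
# new name and alias it back to the name Fooocus expects.
from comfy.sample import prepare_mask, broadcast_cond, cleanup_additional_models
from comfy.sample import get_additional_models as load_additional_models
```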
2. `TypeError: unsupported operand type(s) for |: 'types.GenericAlias' and 'type'`
```text
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.9/threading.py", line 910, in run
    self._target(*self._args, **self._kwargs)
  File "/public/home/msliuzy/Fooocus/modules/async_worker.py", line 17, in worker
    import modules.default_pipeline as pipeline
  File "/public/home/msliuzy/Fooocus/modules/default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "/public/home/msliuzy/Fooocus/modules/core.py", line 15, in <module>
    from modules.patch import patch_all
  File "/public/home/msliuzy/Fooocus/modules/patch.py", line 7, in <module>
    import modules.anisotropic as anisotropic
  File "/public/home/msliuzy/Fooocus/modules/anisotropic.py", line 10, in <module>
    def _compute_zero_padding(kernel_size: tuple[int, int] | int) -> tuple[int, int]:
TypeError: unsupported operand type(s) for |: 'types.GenericAlias' and 'type'
```

This happens because the offending code uses a Python 3.10 feature, the new union type operator, while our image only goes up to Python 3.9.

Python 3.10 introduced the union type operator, which enables the X | Y syntax. It is a cleaner way to express "type X or type Y" than typing.Union, especially in type hints. In earlier Python versions, a function accepting arguments of several types had to be annotated with typing.Union:

```python
def square(number: Union[int, float]) -> Union[int, float]:
    return number ** 2
```

The type hint can now be written more concisely:

```python
def square(number: int | float) -> int | float:
    return number ** 2
```

The new syntax is also accepted as the second argument to isinstance() and issubclass():

```python
>>> isinstance(1, int | str)
True
```

To fix the error, rewrite the definitions that use the new syntax into the Union form supported by 3.9. Following the traceback, edit modules/anisotropic.py.

First, add this at the top of the file:

```python
from typing import Union
```

Then replace line 10,

```python
def _compute_zero_padding(kernel_size: tuple[int, int] | int) -> tuple[int, int]:
```

with

```python
def _compute_zero_padding(kernel_size: Union[tuple[int, int], int]) -> tuple[int, int]:
```

and replace line 15,

```python
def _unpack_2d_ks(kernel_size: tuple[int, int] | int) -> tuple[int, int]:
```

with

```python
def _unpack_2d_ks(kernel_size: Union[tuple[int, int], int]) -> tuple[int, int]:
```

There are many more similar changes further down the file; the manually fixed anisotropic.py can be downloaded here.
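A possible shortcut worth noting (my own remark, not part of the original fix): when the X | Y unions appear only in annotations, a single line at the top of modules/anisotropic.py defers annotation evaluation (PEP 563) and also avoids the TypeError on Python 3.9, so the per-line edits become unnecessary. It does not help for runtime uses such as `isinstance(x, int | str)`.

```python
# First line of modules/anisotropic.py (PEP 563): annotations become strings and are
# no longer evaluated at import time, so `tuple[int, int] | int` stops raising a
# TypeError on Python 3.9. Only valid if the unions are used purely as annotations.
from __future__ import annotations
```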

3. The HTTP port is redirected

    The workaround is to run your own forwarding server, so that a request like GET /aiforward690938736925474816/ is correctly forwarded to the local port (a minimal sketch follows after these steps).

    Flask needs to be installed manually. In a Python 3 environment with network access:

    ```bash
    pip download flask -d ./flask_packages
    ```

    Upload the packages to the container, then run inside the container:

    ```bash
    pip install --no-index --find-links=./flask_packages flask
    ```
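Below is a minimal sketch of such a forwarding server (illustrative only; the full version lives in the repository linked below). It assumes Fooocus is started on an internal port instead, e.g. `python launch.py --port 7865`, while the forwarder listens on the exposed port 8080 and strips the platform's /aiforward.../ prefix; the prefix value here is just the example from the request above.

```python
# forward.py -- minimal reverse-proxy sketch (assumptions: Gradio on 127.0.0.1:7865,
# forwarder on the exposed port 8080, platform prefix /aiforward<ID>).
import requests
from flask import Flask, Response, request

app = Flask(__name__)
TARGET = "http://127.0.0.1:7865"         # where launch.py is actually listening
PREFIX = "/aiforward690938736925474816"  # example prefix; use your own instance ID

@app.route(PREFIX + "/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route(PREFIX + "/<path:path>", methods=["GET", "POST"])
def proxy(path):
    # Strip the prefix and replay the request against the local Gradio server.
    upstream = requests.request(
        method=request.method,
        url=f"{TARGET}/{path}",
        params=request.args,
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        allow_redirects=False,
    )
    # Drop hop-by-hop headers before handing the response back.
    excluded = {"content-encoding", "content-length", "transfer-encoding", "connection"}
    headers = [(k, v) for k, v in upstream.headers.items() if k.lower() not in excluded]
    return Response(upstream.content, upstream.status_code, headers)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```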

*(screenshot: image-20230920144305765)*

The full code is available at https://github.com/Cerber2ol8/Fooocus_on_sc

However, this only gets some of the resources to load correctly; requests issued from within the JS and CSS are beyond its reach.

*(screenshot: image-20230920144644768)*

Even more discouraging, the WebSocket the UI uses to submit jobs cannot be used at all, since the container only exposes an HTTP port. Hopefully someone manages to get this fully working in the future~