import argparse
import datetime
import logging
import sys
import shlex
import cmd
from pathlib import Path
import torch
import json
import traceback

from PIL import Image
from slugify import slugify

from diffusers import (
    AutoencoderKL,
    UNet2DConditionModel,
    PNDMScheduler,
    DPMSolverMultistepScheduler,
    DPMSolverSinglestepScheduler,
    DDIMScheduler,
    LMSDiscreteScheduler,
    EulerAncestralDiscreteScheduler,
    KDPM2DiscreteScheduler,
    KDPM2AncestralDiscreteScheduler,
)
from transformers import CLIPTextModel

from data.keywords import prompt_to_keywords, keywords_to_prompt
from models.clip.embeddings import patch_managed_embeddings
from models.clip.tokenizer import MultiCLIPTokenizer
from pipelines.stable_diffusion.vlpn_stable_diffusion import VlpnStableDiffusion
from util import load_config, load_embeddings_from_dir

torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.benchmark = True

# Defaults for the one-time startup arguments.
default_args = {
    "model": "stabilityai/stable-diffusion-2-1",
    "precision": "fp32",
    "ti_embeddings_dir": "embeddings",
    "output_dir": "output/inference",
    "config": None,
}

# Defaults for the per-generation arguments entered at the interactive prompt.
default_cmds = {
    "project": "",
    "scheduler": "dpmsm",
    "template": "{}",
    "prompt": None,
    "negative_prompt": None,
    "shuffle": False,
    "image": None,
    "image_noise": 0.7,
    "width": 768,
    "height": 768,
    "batch_size": 1,
    "batch_num": 1,
    "steps": 30,
    "guidance_scale": 7.0,
    "seed": None,
    "config": None,
}


def merge_dicts(d1, *args):
    d1 = d1.copy()

    for d in args:
        d1.update({k: v for (k, v) in d.items() if v is not None})

    return d1


def create_args_parser():
    parser = argparse.ArgumentParser(
        description="Simple example of an inference script."
    )
    parser.add_argument(
        "--model",
        type=str,
    )
    parser.add_argument(
        "--precision",
        type=str,
        choices=["fp32", "fp16", "bf16"],
    )
    parser.add_argument(
        "--ti_embeddings_dir",
        type=str,
    )
    parser.add_argument(
        "--output_dir",
        type=str,
    )
    parser.add_argument(
        "--config",
        type=str,
    )

    return parser


def create_cmd_parser():
    parser = argparse.ArgumentParser(
        description="Simple example of an inference script."
    )
    parser.add_argument(
        "--project",
        type=str,
        default=None,
        help="The name of the current project.",
    )
    parser.add_argument(
        "--scheduler",
        type=str,
        choices=["plms", "ddim", "klms", "dpmsm", "dpmss", "euler_a", "kdpm2", "kdpm2_a"],
    )
    parser.add_argument(
        "--template",
        type=str,
    )
    parser.add_argument(
        "--prompt",
        type=str,
        nargs="+",
    )
    parser.add_argument(
        "--negative_prompt",
        type=str,
        nargs="*",
    )
    parser.add_argument(
        "--shuffle",
        # type=bool would treat any non-empty string (even "False") as True;
        # use a flag instead. default=None lets config/defaults apply when absent.
        action="store_true",
        default=None,
    )
    parser.add_argument(
        "--image",
        type=str,
    )
    parser.add_argument(
        "--image_noise",
        type=float,
    )
    parser.add_argument(
        "--width",
        type=int,
    )
    parser.add_argument(
        "--height",
        type=int,
    )
    parser.add_argument(
        "--batch_size",
        type=int,
    )
    parser.add_argument(
        "--batch_num",
        type=int,
    )
    parser.add_argument(
        "--steps",
        type=int,
    )
    parser.add_argument(
        "--guidance_scale",
        type=float,
    )
    parser.add_argument(
        "--seed",
        type=int,
    )
    parser.add_argument(
        "--config",
        type=str,
    )

    return parser


def run_parser(parser, defaults, input=None):
    # Precedence (lowest to highest): built-in defaults, config file, command line.
    # None values never override an existing setting.
    args = parser.parse_known_args(input)[0]
    conf_args = argparse.Namespace()

    if args.config is not None:
        conf_args = load_config(args.config)
        conf_args = parser.parse_known_args(namespace=argparse.Namespace(**conf_args))[0]

    res = defaults.copy()
    for d in [vars(conf_args), vars(args)]:
        res.update({k: v for (k, v) in d.items() if v is not None})

    return argparse.Namespace(**res)


def save_args(basepath, args, extra={}):
    info = {"args": vars(args)}
    info["args"].update(extra)
    with open(f"{basepath}/args.json", "w") as f:
        json.dump(info, f, indent=4)


def load_embeddings(pipeline, embeddings_dir):
    added_tokens, added_ids = load_embeddings_from_dir(
        pipeline.tokenizer,
        pipeline.text_encoder.text_model.embeddings,
        Path(embeddings_dir)
    )
    print(f"Added {len(added_tokens)} tokens from embeddings dir: {list(zip(added_tokens, added_ids))}")


def create_pipeline(model, dtype):
    print("Loading Stable Diffusion pipeline...")

    tokenizer = MultiCLIPTokenizer.from_pretrained(model, subfolder='tokenizer', torch_dtype=dtype)
    text_encoder = CLIPTextModel.from_pretrained(model, subfolder='text_encoder', torch_dtype=dtype)
    vae = AutoencoderKL.from_pretrained(model, subfolder='vae', torch_dtype=dtype)
    unet = UNet2DConditionModel.from_pretrained(model, subfolder='unet', torch_dtype=dtype)
    scheduler = DDIMScheduler.from_pretrained(model, subfolder='scheduler', torch_dtype=dtype)

    patch_managed_embeddings(text_encoder)

    pipeline = VlpnStableDiffusion(
        text_encoder=text_encoder,
        vae=vae,
        unet=unet,
        tokenizer=tokenizer,
        scheduler=scheduler,
    )
    pipeline.enable_xformers_memory_efficient_attention()
    pipeline.enable_vae_slicing()
    pipeline.to("cuda")

    print("Pipeline loaded.")

    return pipeline


@torch.inference_mode()
def generate(output_dir: Path, pipeline, args):
    if isinstance(args.prompt, str):
        args.prompt = [args.prompt]

    if args.shuffle:
        # Generate batch_size shuffled keyword variants of each prompt
        # instead of batch_size identical copies.
        args.prompt *= args.batch_size
        args.batch_size = 1
        args.prompt = [keywords_to_prompt(prompt_to_keywords(prompt), shuffle=True) for prompt in args.prompt]

    args.prompt = [args.template.format(prompt) for prompt in args.prompt]

    now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
    image_dir = []

    if len(args.prompt) != 1:
        # Multiple prompts: one subdirectory per prompt under a timestamped run directory.
        if len(args.project) != 0:
            output_dir = output_dir.joinpath(f"{now}_{slugify(args.project)}")
        else:
            output_dir = output_dir.joinpath(now)

        for prompt in args.prompt:
            prompt_dir = output_dir.joinpath(slugify(prompt)[:100])
            prompt_dir.mkdir(parents=True, exist_ok=True)
            image_dir.append(prompt_dir)

            with open(prompt_dir.joinpath('prompt.txt'), 'w') as f:
                f.write(prompt)
    else:
        output_dir = output_dir.joinpath(f"{now}_{slugify(args.prompt[0])[:100]}")
        output_dir.mkdir(parents=True, exist_ok=True)
        image_dir.append(output_dir)

    args.seed = args.seed or torch.random.seed()

    save_args(output_dir, args)

    if args.image:
        init_image = Image.open(args.image)
        if not init_image.mode == "RGB":
            init_image = init_image.convert("RGB")
    else:
        init_image = None

    # Swap in the requested scheduler, reusing the config of the one the model shipped with.
    if args.scheduler == "plms":
        pipeline.scheduler = PNDMScheduler.from_config(pipeline.scheduler.config)
    elif args.scheduler == "klms":
        pipeline.scheduler = LMSDiscreteScheduler.from_config(pipeline.scheduler.config)
    elif args.scheduler == "ddim":
        pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
    elif args.scheduler == "dpmsm":
        pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
    elif args.scheduler == "dpmss":
        pipeline.scheduler = DPMSolverSinglestepScheduler.from_config(pipeline.scheduler.config)
    elif args.scheduler == "euler_a":
        pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)
    elif args.scheduler == "kdpm2":
        pipeline.scheduler = KDPM2DiscreteScheduler.from_config(pipeline.scheduler.config)
    elif args.scheduler == "kdpm2_a":
        pipeline.scheduler = KDPM2AncestralDiscreteScheduler.from_config(pipeline.scheduler.config)

    for i in range(args.batch_num):
        pipeline.set_progress_bar_config(
            desc=f"Batch {i + 1} of {args.batch_num}",
            dynamic_ncols=True
        )

        seed = args.seed + i
        generator = torch.Generator(device="cuda").manual_seed(seed)
        images = pipeline(
            prompt=args.prompt,
            negative_prompt=args.negative_prompt,
            height=args.height,
            width=args.width,
            num_images_per_prompt=args.batch_size,
            num_inference_steps=args.steps,
            guidance_scale=args.guidance_scale,
            generator=generator,
            image=init_image,
            strength=args.image_noise,
        ).images

        # Route each returned image to the directory of the prompt that produced it.
        for j, image in enumerate(images):
            target_dir = image_dir[j % len(args.prompt)]
            image.save(target_dir.joinpath(f"{seed}_{j // len(args.prompt)}.png"))
            image.save(target_dir.joinpath(f"{seed}_{j // len(args.prompt)}.jpg"), quality=85)

    if torch.cuda.is_available():
        torch.cuda.empty_cache()


class CmdParse(cmd.Cmd):
    prompt = 'dream> '
    commands = []

    def __init__(self, output_dir, ti_embeddings_dir, pipeline, parser):
        super().__init__()

        self.output_dir = output_dir
        self.ti_embeddings_dir = ti_embeddings_dir
        self.pipeline = pipeline
        self.parser = parser

    def default(self, line):
        line = line.replace("'", "\\'")

        try:
            elements = shlex.split(line)
        except ValueError as e:
            print(str(e))
            return

        if elements[0] == 'q':
            return True

        if elements[0] == 'reload_embeddings':
            load_embeddings(self.pipeline, self.ti_embeddings_dir)
            return

        try:
            args = run_parser(self.parser, default_cmds, elements)

            if not args.prompt:
                print('Try again with a prompt!')
                return
        except SystemExit:
            traceback.print_exc()
            self.parser.print_help()
            return
        except Exception:
            traceback.print_exc()
            return

        try:
            generate(self.output_dir, self.pipeline, args)
        except KeyboardInterrupt:
            print('Generation cancelled.')
        except Exception:
            traceback.print_exc()
            return

    def do_exit(self, line):
        return True


def main():
    logging.basicConfig(stream=sys.stdout, level=logging.WARN)

    args_parser = create_args_parser()
    args = run_parser(args_parser, default_args)
    output_dir = Path(args.output_dir)

    dtype = {"fp32": torch.float32, "fp16": torch.float16, "bf16": torch.bfloat16}[args.precision]

    pipeline = create_pipeline(args.model, dtype)
    load_embeddings(pipeline, args.ti_embeddings_dir)

    cmd_parser = create_cmd_parser()
    cmd_prompt = CmdParse(output_dir, args.ti_embeddings_dir, pipeline, cmd_parser)
    cmd_prompt.cmdloop()


if __name__ == "__main__":
    main()
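
# Example session (illustrative sketch only; the file name "infer.py" and the
# prompt text below are assumptions, not fixed by this script). Startup flags go
# on the command line; per-generation flags are entered at the "dream> " prompt,
# and "reload_embeddings" / "q" are the two built-in commands handled above.
#
#   $ python infer.py --model stabilityai/stable-diffusion-2-1 --precision fp16
#   dream> --prompt "a watercolor painting of a fox" --steps 30 --batch_size 2 --seed 42
#   dream> reload_embeddings
#   dream> q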