# ComfyUI User Manual: Examples

## What Is ComfyUI?

ComfyUI is a powerful and modular Stable Diffusion GUI and backend, created by comfyanonymous in 2023. It lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface: you connect up models, prompts, and other nodes to create your own unique workflow. Unlike other Stable Diffusion tools that have basic text fields where you enter values and information for generating an image, a node-based interface asks you to build the workflow that produces the image. After studying some essential example workflows, you will start to understand how to make your own.

This page is part of the community-maintained documentation for ComfyUI. The aim is to get you up and running, walk you through your first generation, and suggest next steps to explore. Additional discussion and help can be found in the community.

## The Interface

The ComfyUI interface includes the main operation interface and the workflow node area. This part of the manual covers basic operations, menu settings, node operations, and other common user interface options, including node connections and handy shortcuts.

### Extending the interface

The frontend can be extended with JavaScript. The official annotated examples are a growing collection of fragments of example code, including ComfyUI preference settings; one of them shows how to add and read a setting. The snippet below is a minimal reconstruction of that fragment: the id, name, and type are placeholders, and the relative import path depends on where your extension file lives. The stored value can later be read back through the same settings API, e.g. `app.ui.settings.getSettingValue("example.id")`.

```js
import { app } from "../../scripts/app.js";

/* In setup(), add the setting */
app.ui.settings.addSetting({ id: "example.id", name: "Example setting", type: "boolean", defaultValue: false });
```

## Installation

To get started with ComfyUI, visit the GitHub page and download the latest release. There is a portable standalone build for Windows on the releases page that should work for running on Nvidia GPUs, or for running on your CPU only; simply download it, extract it with 7-Zip, and run it. Otherwise, follow the ComfyUI manual installation instructions for Windows and Linux, then run ComfyUI normally. The installation process is straightforward and does not require extensive technical knowledge.

If you already have models from another UI, you can point ComfyUI at them instead of copying files. In the standalone Windows build, the configuration file is in the ComfyUI directory: rename extra_model_paths.yaml.example to extra_model_paths.yaml and edit it with your favorite text editor.

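Here is a minimal sketch of what an edited extra_model_paths.yaml can look like, assuming you want to reuse models from an existing AUTOMATIC1111-style install. The keys follow the shipped example file, and the base path is a placeholder you must change to your own install location:

```yaml
a111:
    base_path: path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    controlnet: models/ControlNet
```
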
## Your First Workflow: Text to Image

We will go through some basic workflow examples, starting with the simplest text-to-image workflow. The KSampler is the core of any workflow and can be used to perform both text-to-image and image-to-image generation tasks; in fact, txt2img and img2img are the same node in ComfyUI. Txt2Img is achieved by passing an empty latent image to the sampler node with maximum denoise. Build the default graph (checkpoint loader, prompts, empty latent, KSampler, VAE decode) and press "Queue Prompt" to run it.

An easy starting workflow uses the Efficient Loader and KSampler (Efficient) custom nodes: search for them in the node list, add them to an empty workflow, and connect them to each other.

## Finetuning Prompts

ComfyUI provides a variety of ways to finetune your prompts to better reflect your intention. The importance of parts of the prompt can be up- or down-weighted by enclosing the specified part of the prompt in brackets using the syntax (prompt:weight); for example, (blue eyes:1.3) emphasizes "blue eyes", while (background:0.8) de-emphasizes it.

## Loading Workflows from Images

All the images on the original example pages contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create them. You can likewise save your own generated images and drag or load them back in. Newer versions of the UI also support adding missing models and pip installs for missing nodes when a loaded workflow needs them.

## Img2Img Examples

These are examples demonstrating how to do img2img. Img2Img works by loading an image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The example from the original page shows how to use the KSampler in an image-to-image task by connecting a model, a positive and a negative conditioning (prompt), and a latent image.

At times you might wish to use a different VAE than the one that came loaded with the Load Checkpoint node. In that variation of the workflow, a separately loaded VAE is used to encode the image to latent space and to decode the result of the KSampler.

The denoise controls the amount of noise added to the image: a lower value keeps more of the original, while a value of 1.0 replaces it entirely. You can also use more steps to increase the quality.

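For intuition about the denoise setting, here is a loose, self-contained illustration. It is deliberately simplified and is not ComfyUI's actual scheduler math (which scales noise per step); it only shows how a lower denoise preserves more of the encoded input:

```python
import torch

def partially_noise(latent: torch.Tensor, denoise: float, seed: int = 0) -> torch.Tensor:
    """Blend a latent toward pure noise: denoise=1.0 discards the input entirely
    (txt2img from an empty latent), lower values keep more of the original (img2img)."""
    gen = torch.Generator().manual_seed(seed)
    noise = torch.randn(latent.shape, generator=gen)
    return (1.0 - denoise) * latent + denoise * noise
```
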
## Inpaint Examples

In the inpaint examples, part of the input image has been erased to alpha with GIMP, and that alpha channel is what we use as the mask for the inpainting. The original page shows inpainting a cat and inpainting a woman with the v2 inpainting model; it also works with non-inpainting models. ComfyUI also has a mask editor that can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor". There is an Inpaint ControlNet as well (see the ControlNet section below).

## Outpainting

Expanding the borders of an image within ComfyUI is straightforward, and you have a couple of options: basic outpainting through native nodes, or the experimental ComfyUI-LaMA-Preprocessor custom node.

## 2 Pass Txt2Img (Hires Fix) and Upscale Model Examples

A simple two-pass workflow does hires fix with basic latent upscaling; note that the second pass uses a denoise value of less than 1.0. For non-latent upscaling, use upscale models like ESRGAN: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to apply them. One example guide used the albedobase-xl checkpoint for the base generation before the ESRGAN upscaling step.

## Lora Examples

These are examples demonstrating how to use LoRAs. All LoRA flavours (Lycoris, LoHa, LoKr, LoCon, etc.) are used the same way.

## Hypernetwork Examples

Hypernetworks are patches applied on the main MODEL, so to use them put them in the models/hypernetworks directory and load them with the Hypernetwork Loader node.

## Area Composition Examples

These are examples demonstrating the ConditioningSetArea node. One example image contains four different areas (night, evening, day, morning), each generated from its own area prompt; picture, for instance, four vertical strips of the canvas, each conditioned on a different time of day. Another example combines area composition with Anything-V3 and a second pass with AbyssOrangeMix2_hard.

## GLIGEN Examples

Put the GLIGEN model files in the ComfyUI/models/gligen directory. Pruned versions of the supported GLIGEN model files are available for download from the original examples page.

## Video Examples: Scheduling CFG Across Frames

Some video workflows schedule the cfg across frames so that frames further away from the init frame get a gradually higher cfg. In the example from the original docs, the first frame gets cfg 1.0 (the min_cfg in the node), the middle frame 1.75, and the last frame 2.5 (the cfg set in the sampler).

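That schedule is just a linear interpolation between min_cfg and the sampler's cfg. A small illustrative sketch of the arithmetic (the function name is ours, not a ComfyUI API; it assumes num_frames >= 2):

```python
def frame_cfg(frame: int, num_frames: int, min_cfg: float, sampler_cfg: float) -> float:
    """CFG for a frame, rising linearly from min_cfg (first) to the sampler's cfg (last)."""
    t = frame / (num_frames - 1)
    return min_cfg + (sampler_cfg - min_cfg) * t

# with min_cfg=1.0 and a sampler cfg of 2.5: first 1.0, middle 1.75, last 2.5
print([round(frame_cfg(i, 3, 1.0, 2.5), 2) for i in range(3)])  # [1.0, 1.75, 2.5]
```
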
## ControlNet and T2I-Adapter Examples

Note that in these examples the raw image is passed directly to the ControlNet/T2I-Adapter. Each ControlNet/T2I-Adapter needs the image that is passed to it to be in a specific format, like depthmaps, canny maps and so on depending on the specific model, if you want good results. T2I-Adapters are used the same way as ControlNets in ComfyUI: using the ControlNetLoader node. The original pages include examples for the depth T2I-Adapter, the depth ControlNet, the Canny ControlNet, and the Inpaint ControlNet; the example input images can be downloaded there and placed in your input folder. One depth example uses the DiffControlNetLoader node because the controlnet used is a diff controlnet.

For the Stable Cascade controlnets, the example files were renamed by adding stable_cascade_ in front of the filename, for example stable_cascade_canny.safetensors and stable_cascade_inpainting.safetensors.

## SDXL Examples

The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. The only important thing is that for optimal performance the resolution should be set to 1024x1024, or to other resolutions with the same amount of pixels but a different aspect ratio; for example, 896x1152 or 1536x640 are good resolutions, both close to the roughly one megapixel of 1024x1024.

SDXL Turbo is an SDXL model that can generate consistent images in a single step. The proper way to use it is with the new SDTurboScheduler node. Press "Queue Prompt" once and start writing your prompt; with Extra Options -> Auto Queue enabled in the interface, new images are generated as you type.

## SD3 Examples

The InstantX team released a few ControlNets for SD3, and they are supported in ComfyUI; you can try them out with the example workflow. SD3 also performs very well with the negative conditioning zeroed out, as in the original example.

## Mixing Two Noise Sources

The annotated examples include creating a noise object which mixes the noise from two sources. This could be used to create slight noise variations by varying weight2. Below is the class, completed with a mixing method that blends the two noise generators by weight2:

```python
import torch

class Noise_MixedNoise:
    def __init__(self, noise1, noise2, weight2):
        self.noise1 = noise1    # two noise objects, e.g. from the custom sampling nodes
        self.noise2 = noise2
        self.weight2 = weight2  # how much of noise2 to mix in

    @property
    def seed(self):
        return self.noise1.seed

    def generate_noise(self, input_latent: torch.Tensor) -> torch.Tensor:
        noise1 = self.noise1.generate_noise(input_latent)
        noise2 = self.noise2.generate_noise(input_latent)
        return noise1 * (1.0 - self.weight2) + noise2 * self.weight2
```

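A hypothetical usage sketch, continuing from the class above: SeededNoise here is a minimal stand-in for a real noise object (for example, one built from a RandomNoise node with a fixed seed), and the mixture exposes the same .seed and .generate_noise interface, so it can be handed to a custom sampler wherever a single noise source is expected.

```python
import torch

class SeededNoise:  # minimal stand-in for a noise object with .seed and .generate_noise
    def __init__(self, seed):
        self.seed = seed
    def generate_noise(self, input_latent):
        gen = torch.Generator().manual_seed(self.seed)
        return torch.randn(input_latent.shape, generator=gen)

# 5% of a second seed's noise mixed in, for a slight variation on the same image
mixed = Noise_MixedNoise(SeededNoise(7), SeededNoise(8), weight2=0.05)
latent = torch.zeros(1, 4, 64, 64)
print(mixed.seed, mixed.generate_noise(latent).shape)
```
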
## Flux Examples

Flux is a family of diffusion models by Black Forest Labs. For easy-to-use single-file versions that work in ComfyUI, see the FP8 checkpoint version. One community guide covers setting up ComfyUI on your Windows computer to run Flux, including: an introduction to Flux.1; an overview of the different versions of Flux.1; Flux hardware requirements; and how to install and use Flux.1. On a machine equipped with a 3070 Ti, generation should be completed in about 3 minutes.

There is also an all-in-one FluxDev workflow that combines various techniques for generating images with the FluxDev model, including img2img and txt2img; it can use LoRAs, ControlNets, negative prompting with the KSampler, dynamic thresholding, inpainting, and more. However, it is not for the faint-hearted and can be somewhat intimidating if you are new to ComfyUI.

## Hunyuan DiT Examples

Hunyuan DiT is a diffusion model that understands both English and Chinese, and it is now supported in ComfyUI. Download the hunyuan_dit checkpoint (for example, hunyuan_dit_1.2.safetensors) and put it in your ComfyUI/checkpoints directory.

## AuraFlow

fal.ai, in collaboration with Simo, released an open-source MMDiT text-to-image model called AuraFlow. You can try it out with the example workflow from the original page.

## Community Workflows, Templates, and Custom Nodes

One of the best parts about ComfyUI is how easy it is to download and swap between workflows, and there are community sites where you can share, discover, and run thousands of them. The official examples repo covers more topics than this page, including 3D, LCM, GLIGEN, and a FAQ, and several guides collect curated sets: a list of 10 cool ComfyUI workflows you can simply download and try out for yourself, and workflows available from the Prompting Pixels site.

Workflow template packs have been designed to cater to a diverse range of projects and are compatible with any SD1.5 checkpoint model. One initial set includes three templates, among them a Simple and an Intermediate template. Community template repositories include ComfyUI-Template-Pack (10 ComfyUI templates for beginners), ComfyUI-101Days (a daily ComfyUI workflow creation), and an advanced template pack for commercial use.

Notable custom nodes and tutorials include:

- ComfyUI-MimicMotion (AIFSH): a ComfyUI custom node for MimicMotion. You need to put the example input files and folders under the ComfyUI/input folder before you can run the example workflow; it was tested on a 2080 Ti 11GB with PyTorch 2, and for use cases check out its example workflows.
- ComfyUI-LivePortraitKJ (kijai): includes an example face detection using the blazeface_back_camera model.
- A StableZero123 custom node, using the playground-v2 model with ComfyUI, Generative AI for Krita using LCM on ComfyUI, basic auto face detection and refine, and face fusion with style migration.
- The Consistent Character workflow: the first step is to upload the perfect input image, one that embodies the essence of your character, since it serves as the foundation for the entire workflow. Tutorials for it often describe the image with a vision model and a fill-in-the-blanks template rather than separate questions; for example, asking "{eye color} eyes, {hair style} {hair color} hair, {ethnicity} {gender}, {age number} years old" might yield "Brown eyes, curly black hair, Asian female, 25 years old".

## Model Merging Examples

You can subtract model weights and add them, as in the example that creates an inpaint model from a non-inpaint model with the formula (inpaint_model - base_model) * 1.0 + other_model. If you are familiar with the "Add Difference" option in other UIs, this is how to do it in ComfyUI. If you want to do merges in 32-bit float, launch ComfyUI with --force-fp32.

For advanced merging, you can create a CosXL model from a regular SDXL model. The requirements are the CosXL base model, the SDXL base model, and the SDXL model you want to convert; the resulting checkpoint behaves like a CosXL model.

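In the graph itself this corresponds to chaining the ModelMergeSubtract and ModelMergeAdd nodes. Conceptually, the arithmetic over the raw weights looks like the sketch below; the function is illustrative, not ComfyUI's implementation:

```python
def add_difference(other_sd: dict, inpaint_sd: dict, base_sd: dict, multiplier: float = 1.0) -> dict:
    """other_model + (inpaint_model - base_model) * multiplier, applied key by key."""
    merged = {}
    for key, weight in other_sd.items():
        if key in inpaint_sd and key in base_sd:
            merged[key] = weight + (inpaint_sd[key] - base_sd[key]) * multiplier
        else:
            merged[key] = weight  # keys missing from either input are left untouched
    return merged
```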