Flux - IP Adapter, ControlNet, LoRA (Installation)

 

Install IP Adapter, ControlNet, LoRA for Flux

We have listed all the Flux-based workflows (IP Adapter, ControlNet, LoRA) in one place so that you don't need to jump between multiple articles.

As instructed by XLabs, you need to use the official Flux Dev model released by Black Forest Labs, loaded through the UNet loader. This helps handle the large Flux models (FP16 variant).

They also provide an option for lower-end GPUs: the Flux GGUF variant, which keeps VRAM usage to around 12GB. To use the Flux GGUF model variant, replace the "Load Diffusion Model" node in the workflow with the "Unet Loader (GGUF)" node, which you can find under the "bootleg" category.
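
For reference, here is a minimal sketch of how those GGUF loader nodes are typically added, assuming the commonly used city96/ComfyUI-GGUF node pack (the GGUF model file itself then goes into the "ComfyUI/models/unet" folder); check that repository's README for the exact steps:

cd /path/to/ComfyUI/custom_nodes
git clone https://github.com/city96/ComfyUI-GGUF.git
# install its Python dependencies (see the repo's README for the exact command)
pip install -r ComfyUI-GGUF/requirements.txt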

We already covered all the Flux variants in our previous tutorials. To use the workflows provided below, you first need to install ComfyUI on your machine and complete the Flux model installation.

Table of Contents:

- Initial Setup
- Flux IP Adapter
- Flux ControlNet
- Flux LoRA


Initial Setup

To set up IP Adapter, ControlNet, or LoRA for Flux, you need to clone the XLabs repository. Move into the "ComfyUI/custom_nodes" folder, click into the folder's address bar, and type "cmd" to open a command prompt at that location. Then clone the repository with:

git clone https://github.com/XLabs-AI/x-flux-comfyui.git
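
If you prefer doing the whole step from a terminal (on any OS), the equivalent is shown below; replace the path with your own ComfyUI install location:

cd /path/to/ComfyUI/custom_nodes
git clone https://github.com/XLabs-AI/x-flux-comfyui.git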


Flux IP Adapter

The IP-Adapter for Flux Dev was released by X-Labs. The model has been trained at 512x512 resolution for 50k steps, which means you have the flexibility to generate images at both 512x512 and 1024x1024.

Installation:

1. First, update the custom nodes. Move into the "ComfyUI/custom_nodes/x-flux-comfyui" folder and double-click the "setup.py" file to start the update process. Restart ComfyUI. After this, you have to download the models.
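
If double-clicking the file does nothing on your system, the same update can be run from a terminal (path placeholder assumed):

cd /path/to/ComfyUI/custom_nodes/x-flux-comfyui
python setup.py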


Download IP Adapter model

2. Download the IP Adapter model from Hugging Face and save it inside the "ComfyUI/models/xlabs/ipadapters" folder.
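
As an example, the download can also be done with the Hugging Face CLI; the repository and filename below are assumptions, so verify them on the XLabs Hugging Face page:

# requires the Hugging Face CLI: pip install -U "huggingface_hub[cli]"
huggingface-cli download XLabs-AI/flux-ip-adapter ip_adapter.safetensors --local-dir ComfyUI/models/xlabs/ipadapters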


Download Clip L model

3. Next, download the CLIP-L model (model.safetensors) from OpenAI's Hugging Face page and save it to the "ComfyUI/models/clip_vision" folder. Rename it to something descriptive so it is easy to identify.
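
For example, with the Hugging Face CLI (the target name "clip_vision_l.safetensors" is just an illustrative choice; on Windows, rename the file in Explorer instead of using mv):

huggingface-cli download openai/clip-vit-large-patch14 model.safetensors --local-dir ComfyUI/models/clip_vision
mv ComfyUI/models/clip_vision/model.safetensors ComfyUI/models/clip_vision/clip_vision_l.safetensors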

4. Workflows can be found inside your "ComfyUI/custom_nodes/x-flux-comfyui/workflows" folder, or they can be downloaded from XLabs' GitHub repository.

5. To work with the workflow, use the "Flux Load IPAdapter" and "Apply Flux IPAdapter" nodes (keep the strength at 1 or below), then select the relevant CLIP model in the Dual CLIP loader.

Set the resolution to the recommended 512x512 or 1024x1024. Finally, click "Queue" to start rendering.

We tested it and the results were not that satisfying. As officially reported, the adapter is still in beta, so you may need multiple attempts with tweaked settings to get good results.


Flux ControlNet

The ControlNet models for Flux Dev were released by X-Labs. Currently three are supported: Canny, HED, and Depth (Midas). In addition, a ControlNet Union model is provided by InstantX.

Installation:

Update ControlNet

1. First, update the custom nodes. From ComfyUI Manager, open "Custom Nodes Manager", search for "ControlNet Auxiliary Preprocessors", and hit the update button.

Alternative:

Move into the "ComfyUI/custom_nodes/x-flux-comfyui" folder and double-click the "setup.py" file to start the update process. Restart ComfyUI. After this, you have to download the models.


Download ControlNet models

2. Next, download the ControlNet models you want to use for the different art styles:

(a) Released by XLabs: Canny, HED, and Depth (Midas) ControlNets.

(b) Released by InstantX: ControlNet Union.

Save all the models inside the "ComfyUI/models/xlabs/controlnets" folder.
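
A sketch of the downloads with the Hugging Face CLI; the repository names and filenames are assumptions based on the XLabs and InstantX pages, so double-check them there:

huggingface-cli download XLabs-AI/flux-controlnet-collections flux-canny-controlnet-v3.safetensors --local-dir ComfyUI/models/xlabs/controlnets
huggingface-cli download InstantX/FLUX.1-dev-Controlnet-Union diffusion_pytorch_model.safetensors --local-dir ComfyUI/models/xlabs/controlnets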


Workflow for Controlnet

3. Get the respective ControlNet workflow from your "ComfyUI/custom_nodes/x-flux-comfyui/workflows" folder, or download it from Hugging Face.

Here, ControlNet V1 is used for 512x512 resolution; for 768 and 1024 dimensions, use the other ControlNets (V2/V3). CFG and steps should be the usual values for the Flux Dev model. ControlNet Union bundles multiple ControlNet types and is still in beta.


Flux LoRA

XLabs trained multiple LoRAs, such as realism, anime, Disney, art style, and MidjourneyV6, on the Flux Dev model. You can get an overview of the XLabs-trained Flux LoRA models from their GitHub repository.

By following that, you can train your own LoRA for Flux models as well. Alternatively, you can check out our tutorial on Flux LoRA training for an overall understanding.

Installation:

Download LoRA models

1. Download the Flux LoRA models from XLabs' Hugging Face LoRA collection. There are multiple LoRA types trained and listed by XLabs; choose any of them, or download them all.

2. After downloading, save the models inside the "ComfyUI/models/xlabs/loras" folder.
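
For instance, with the Hugging Face CLI (the repository and the "realism_lora.safetensors" filename are assumptions; pick whichever LoRA files are listed on the page):

huggingface-cli download XLabs-AI/flux-lora-collection realism_lora.safetensors --local-dir ComfyUI/models/xlabs/loras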

3. Now, restart and refresh ComfyUI for the changes to take effect.

Workflow for LoRA

4. Workflows for Flux LoRA can be found inside your "ComfyUI/custom_nodes/x-flux-comfyui/workflows" folder, or they can be downloaded from XLabs' GitHub repository.