Train your own FLUX LoRA model (Windows/Linux)


Flux is one of the most powerful models we have experimented with. The catch is that its outputs are so refined that it can be hard to get truly realistic generations. This can be solved by fine-tuning the Flux model with a LoRA. To learn the Flux basics, follow our tutorial on installing Flux Dev or Flux Schnell.

LoRA can be used to train a model in any style you want, such as realism, anime, or 3D art, as we discussed in our in-depth tutorial on LoRA model training.


You can run your LoRA training on NVIDIA GPUs installed locally or on cloud-based servers. Basically, there are two methods:

(a) Method 1: Using the AI-Toolkit WebUI with Kohya-ss as the backend (for 12GB/16GB/20GB VRAM)

(b) Method 2: Using AI-Toolkit on the command line (supported for 24GB VRAM and higher)



Method 1: Using AI-Toolkit and Kohya-ss

This method supports training the Flux Dev model, which is released under a non-commercial license. This means the new LoRA model will also fall under that license.

Installation:

1. First you need to have Python and Git installed on your machine.

open command prompt using cmd


clone flux gym repository

2. Open a command prompt by typing "cmd" in the address bar of any folder location. Clone the Fluxgym repository, which works together with kohya-ss/sd-scripts, using the following git command:

git clone https://github.com/cocktailpeanut/fluxgym.git

Switch to the fluxgym directory using the command:

cd fluxgym


install kohya-ss

3. Then clone the Kohya-ss sd-scripts backend using the command provided below:

git clone -b sd3 https://github.com/kohya-ss/sd-scripts


4. Now create and activate a virtual environment from the root fluxgym folder:


create and activate virtual environment

(a) Windows users should use these:

Create the virtual environment, where "env" is the environment folder name:

python -m venv env

Activate environment:

env\Scripts\activate

(b) Linux users should use these:

python -m venv env

source env/bin/activate

5. Next, move into the "sd-scripts" folder and install the required libraries:

cd sd-scripts

pip install -r requirements.txt


6. Now come back to the root folder and install the dependencies:

cd ..
pip install -r requirements.txt


7. Lastly, install the PyTorch nightly build (the cu121 index targets CUDA 12.1):

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
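
To quickly confirm that PyTorch was installed with CUDA support, you can optionally run the check below (use python3 on Linux); it prints the installed torch version and whether your GPU is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"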


folder structure of flux gym


Finally, you will end up with a folder structure like the one shown in the image above. After installation, just close the command prompt.
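
For reference, here is a rough sketch of the folder layout you should end up with (exact contents vary with versions; the models subfolders are used in the next section):

fluxgym/
├── app.py
├── env/             (the virtual environment)
├── models/
│   ├── clip/        (text encoders)
│   ├── unet/        (Flux model)
│   └── vae/         (VAE)
├── requirements.txt
└── sd-scripts/      (the Kohya-ss training scripts)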


Downloading the models

save the models in their respective folders

1. Now it's time to download the relevant text encoders that are needed to train your Flux LoRA (typically a CLIP-L encoder and a T5-XXL encoder; check the Fluxgym README for the exact download links).

After downloading, just put them inside the "models/clip" folder.


2. Download the VAE (Variational Autoencoder) and save it inside the "models/vae" folder.

3. Download the Flux Dev model from the cocktailpeanut Hugging Face repository and put it inside the "models/unet" folder.


Training Workflow:

1. Move to the "fluxgym" root folder and open a command prompt by typing "cmd" in the folder's address bar.

2. Activate the virtual environment.

Windows users should use:

env\Scripts\activate

Linux users should use:

source env/bin/activate


3. Start the LoRA training WebUI by executing the app file:

For Windows users:

python app.py

For Linux users:

python3 app.py

A new Gradio-based WebUI will launch (Gradio typically serves at http://localhost:7860). Here, just upload your images along with their captions and initiate the training process by hitting the "Start" button.

flux Lora training workflow with KohyaSS




Method 2: Using AI-Toolkit in the command line

This method supports both the Flux Dev model (released under a non-commercial license) and the Flux Schnell model (released under Apache 2.0). Keep in mind that whichever model you train on, the same license carries over to the new LoRA.

Installation:

0. Install Git from the official page, and make sure your Python version is 3.10 or newer.

1. Create a new directory anywhere, with a meaningful name. We named ours "Flux_training", but you can choose whatever you want.

2. Inside your new folder, type "cmd" in the address bar to open a command prompt.

3. In the command prompt, copy and paste the following commands one by one to install the required Python libraries:

(a) For Windows users:

Clone the repository:

git clone https://github.com/ostris/ai-toolkit.git

Move into the ai-toolkit directory:

cd ai-toolkit

Update the submodules:

git submodule update --init --recursive

Create a virtual environment:

python -m venv venv

Activate the virtual environment:

.\venv\Scripts\activate

Install PyTorch (the cu121 index targets CUDA 12.1):

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121

Install the required dependencies:

pip install -r requirements.txt

Now, close the command prompt.


(b) For Linux users:

git clone https://github.com/ostris/ai-toolkit.git

cd ai-toolkit

git submodule update --init --recursive

python3 -m venv venv

source venv/bin/activate

pip3 install torch

pip3 install -r requirements.txt


Set up API access for Flux Dev:

1. First, you need to accept the terms and agreements on Hugging Face; otherwise, the model cannot be downloaded.

To do this, log in to Hugging Face and accept the terms and conditions on the Flux Dev model page.

API setup is not required if you are using the Flux Schnell model.


granted permission on hugging face

After accepting, you will see a message like the one shown above.

2. Then create your API key from your Hugging Face account settings.


create and paste API key

3. Move into the "ai-toolkit" folder. Create a file named ".env" (the name is just the extension) using any editor. Copy your API key from the Hugging Face dashboard and paste it into the .env file in the form "HF_TOKEN=your_key_here", as illustrated in the image above.

Any API key shown in the illustration is for demonstration purposes only; you need to add your own.
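
For example, the entire .env file consists of just this one line (the token value here is a placeholder, not a real key):

HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx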


Setup for Flux Schnell:

1. To train a LoRA for Schnell, you need a training adapter that is available on Hugging Face and is downloaded automatically.

2. Add the following settings inside the "modal_train_lora_flux_schnell_24gb.yaml" file, which can be found in the "config/examples/modal" folder.

Skip any of these settings that are already present in the file.

      model:
        name_or_path: "black-forest-labs/FLUX.1-schnell"
        assistant_lora_path: "ostris/FLUX.1-schnell-training-adapter"
        is_flux: true
        quantize: true


You also need to add these basic sampling parameters (Schnell is a distilled few-step model, so samples are generated with 4 steps at a guidance scale of 1):

      sample:
        guidance_scale: 1
        sample_steps: 4



Training Workflow:

1. Prepare and store your dataset in a new folder: images in the reference style, each paired with a text caption. Use only JPEG/PNG image formats. A minimum of 10-15 images works best.

We want to generate art with realism, so we saved realistic images inside "D:/downloads/images" (you can choose any location). You do not need to resize the images; the loader handles all of them automatically.

2. Now create a text file for each image describing its details. The captions influence the model you train, so be creative and descriptive. Save them in the same folder.

captioning images and text file

For instance: if the image is "image1.png", then the text file should be "image1.txt".

You can also add a [trigger] word to a caption, for instance: "[trigger] holding a sign that says 'I LOVE PROMPTS'", as sketched below.
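
As a minimal illustration, a dataset folder could look like this; the file names and caption text are only examples, and the [trigger] placeholder is substituted with the trigger word configured in the training YAML (the trigger_word field in the example configs), if you set one:

D:/downloads/images/
├── image1.png
├── image1.txt   (contains: [trigger] holding a sign that says 'I LOVE PROMPTS')
├── image2.png
└── image2.txt   (contains: [trigger] sitting inside a restaurant, photorealistic)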

3. Now navigate to the "config/examples" folder. For Flux Dev use the "train_lora_flux_24gb.yaml" file, and for Flux Schnell use the "train_lora_flux_schnell_24gb.yaml" file. Copy the file (right-click, Copy), switch back to the "config" folder, and paste it there.

Then rename it to any meaningful name. We renamed ours to "train_Flux_dev-Lora.yaml".

4. Now you need to add the path of your image dataset folder to the config file.

(a) Windows users should edit the newly renamed yaml file. You can use any editor, such as Sublime Text or Notepad++; we are using VS Code. All the settings are documented with comments in the same file.


windows users edit yml file like this

After opening the file, copy the path of your image dataset folder (right-click the folder and use the copy-path or Properties option) and paste it in exactly as we have shown above.

This is our folder path; yours will be different. Edit the file as shown in the illustrated image above and save it.

(b) Linux users should add the folder path to the yaml file in the usual way, as sketched below.
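
For orientation, the dataset section of the example config looks roughly like this; the key names may differ between ai-toolkit versions and the folder path is just our example, so adapt both to your setup:

      datasets:
        - folder_path: "D:/downloads/images"
          caption_ext: "txt"
          resolution: [512, 768, 1024]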

5. Move back to the "ai-toolkit" root folder and open a command prompt by typing "cmd" in the folder's address bar.

Activate the virtual environment (Windows shown; Linux users run "source venv/bin/activate"):

.\venv\Scripts\activate

6. Finally, to start training, go back to the command prompt and execute run.py with your config file. Here "train_Flux_dev-Lora.yaml" is our file name; in your case it will be whatever you chose.

For Windows users (replace <<your-file-name>> with your file name):

python run.py config/<<your-file-name>>

In our case this is the command:

python run.py config/train_Flux_dev-Lora.yaml

For Linux users (replace <<your-file-name>> with your file name):

python3 run.py config/<<your-file-name>>

After training, the model is saved into the "output" folder with the default name "my_first_flux_lora_v1".
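
As a rough guide, the trained LoRA weights are written as .safetensors files, typically alongside sample images generated during training; the exact file names depend on your config, so treat this layout as an assumption based on the default ai-toolkit settings rather than a guarantee:

      output/my_first_flux_lora_v1/
        my_first_flux_lora_v1.safetensors    (final LoRA weights)
        samples/                             (sample images generated during training)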

We are using an NVIDIA RTX 4090 with 24GB VRAM; training our fine-tuned Flux LoRA model took about 4 hours.


Some image generations with our Flux LoRA:


flux lora output

Prompt used: a women sitting inside restaurant, hyper realistic , professional photoshoot, 16k


flux lora output

Prompt used: a tiny astronaut hatching from the egg on moon, realistic, 16k


flux lora output

Prompt used: a lava girl hatching from the egg in a dark forest, realistic, 8k


Important points:

1. Be aware that training takes a massive amount of computational power and time, depending on your system configuration.

2. If you want to stop the training partway through, press "Ctrl+C". To get the real-time status, you can periodically check the command prompt.

3. You can resume anytime in the future by moving into the "ai-toolkit" folder, opening a command prompt by typing "cmd" in the folder's address bar, and then running these commands:

For Windows users:

.\venv\Scripts\activate

python run.py config/<<your-file-name>>

For Linux users:

source venv/bin/activate

python3 run.py config/<<your-file-name>>

Training will then resume from where you left off. However, avoid stopping the run while a model is being saved, otherwise the output file will be corrupted.