Create AI Videos with ControlNeXt and SVD V2


The methodology is to use the ControlNeXt model (released by DV Lab research) with SVD V2 (by Stability AI) to create consistent AI videos. The architecture closely follows the approach of AnimateAnyone. The model has been trained on better, higher-quality videos with human pose alignment to produce more realistic results, especially for dancing videos.

The number of training and batch frames has been increased to 24 to handle video generation in a generic fashion. Additionally, the resolution has been increased to 576 × 1024 to meet the Stable Video Diffusion benchmarks. For in-depth details, consult the relevant research papers.

Now, you can run this model on your machine with ComfyUI using custom nodes. 


Installation:

1. First, install ComfyUI and update it by clicking "Update all" in the ComfyUI Manager.


2. Next, install the custom nodes by Kijai. Navigate to the ComfyUI Manager, click "Custom Nodes Manager", search for "ControlNeXt-SVD" by Kijai, and click the "Install" button.

3. Then restart ComfyUI for the changes to take effect.



4. Now download the respective model (controlnext-svd_v2-unet-fp16_converted.safetensors) from Kijai's Hugging Face repository and save it inside the "ComfyUI/models/unet" folder.



5. Next, download the SVD XT 1.1 model from Stability AI's Hugging Face repository and put it inside the "ComfyUI/models/checkpoints/svd" folder.
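Before moving on, you can sanity-check that both files landed where ComfyUI expects them. A minimal sketch, assuming a standard ComfyUI folder layout and the file names from the steps above (the SVD file name `svd_xt_1_1.safetensors` is an assumption — match it to whatever you actually downloaded):

```python
from pathlib import Path

def check_models(comfyui_root="ComfyUI"):
    """Return (path, found) pairs for the two model files this guide needs."""
    root = Path(comfyui_root)
    expected = [
        # ControlNeXt UNet from Kijai's repo (step 4).
        root / "models" / "unet" / "controlnext-svd_v2-unet-fp16_converted.safetensors",
        # SVD XT 1.1 checkpoint (step 5) -- file name is an assumption.
        root / "models" / "checkpoints" / "svd" / "svd_xt_1_1.safetensors",
    ]
    return [(str(p), p.is_file()) for p in expected]

if __name__ == "__main__":
    for path, found in check_models():
        print(("OK    " if found else "MISSING ") + path)
```

If either line prints MISSING, re-check the folder from the corresponding step before loading the workflow.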

6. Finally, the workflow can be found inside your "ComfyUI/custom_nodes/ComfyUI_ControlNeXt-SVD/example" folder. 

Here you will find two workflows: one for ComfyUI and one for diffusers. Just drag and drop the ComfyUI one into the ComfyUI window to load it.
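ComfyUI workflows are plain JSON, so you can inspect the example from step 6 before loading it. A small sketch that counts the node types in a UI-format workflow export (the file path is hypothetical — point it at the JSON inside your own example folder):

```python
import json
from collections import Counter
from pathlib import Path

def summarize_workflow(path):
    """Count node types in a ComfyUI workflow JSON (UI export format).

    UI-format workflows store their nodes in a top-level "nodes" list,
    where each node carries a "type" field naming the node class.
    """
    data = json.loads(Path(path).read_text())
    return Counter(node["type"] for node in data.get("nodes", []))

if __name__ == "__main__":
    # Hypothetical path -- replace with the JSON from your example folder.
    for node_type, count in summarize_workflow("example_workflow.json").items():
        print(f"{count:3d}  {node_type}")
```

This is a quick way to confirm which custom nodes a downloaded workflow depends on before ComfyUI reports them as missing.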