Have you ever struggled with slow inference while running image or video generation models? One solution is Timestep Embedding Aware Cache (TeaCache), a training-free caching technique that exploits the similarity of model outputs across timesteps in diffusion models. Whether you are working with images, videos, or audio, TeaCache significantly accelerates inference while maintaining output quality.
TeaCache is now fully integrated into ComfyUI and is compatible with its native nodes. Users can add TeaCache to a workflow simply by connecting the TeaCache node to ComfyUI's native nodes, keeping the experience intuitive and streamlined.
Most of the project is released under the Apache 2.0 license. For a more in-depth understanding, readers can refer to the TeaCache research paper.
It offers substantial speedups across multiple diffusion models while maintaining high visual fidelity. Depending on the model in use, it delivers up to:
- 1.2x to 1.6x lossless speedup, ensuring zero compromise in output quality.
- 1.7x to 2x accelerated inference, with minimal visual degradation.
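The core idea can be sketched in a few lines. This is a simplified illustration, not the official implementation: the class and helper names below are made up, and only `rel_l1_thresh` corresponds to the real node parameter. TeaCache accumulates the relative change between consecutive timestep embeddings and only re-runs the model once that accumulated change crosses the threshold; otherwise it reuses the cached output.

```python
import numpy as np

class TeaCacheSketch:
    """Illustrative sketch of timestep-embedding-aware caching."""

    def __init__(self, rel_l1_thresh=0.25):
        self.rel_l1_thresh = rel_l1_thresh  # higher -> more reuse, more speed
        self.prev_emb = None       # timestep embedding seen at the last step
        self.accum = 0.0           # accumulated relative change since last compute
        self.cached_out = None     # last fully computed model output

    def should_compute(self, emb):
        """True if the model should run this step; False to reuse the cache."""
        if self.prev_emb is None or self.cached_out is None:
            self.prev_emb = emb
            return True
        # Relative L1 distance between consecutive timestep embeddings.
        rel_l1 = np.abs(emb - self.prev_emb).sum() / (np.abs(self.prev_emb).sum() + 1e-8)
        self.prev_emb = emb
        self.accum += rel_l1
        if self.accum >= self.rel_l1_thresh:
            self.accum = 0.0
            return True
        return False

    def step(self, emb, model_fn):
        """Run model_fn only when needed; otherwise return the cached output."""
        if self.should_compute(emb):
            self.cached_out = model_fn()
        return self.cached_out
```

A larger threshold lets more change accumulate before recomputing, so more steps are skipped; that is why raising `rel_l1_thresh` trades quality for speed.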
Installation
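No commands are listed in this chunk, but TeaCache installs like any other ComfyUI custom node pack. The repository URL below is illustrative; check the project's page for the exact address before cloning.

```shell
# Typical ComfyUI custom-node installation (URL is an assumption --
# verify it against the official project page).
cd ComfyUI/custom_nodes
git clone https://github.com/welltop-cn/ComfyUI-TeaCache.git
cd ComfyUI-TeaCache
pip install -r requirements.txt   # only if the node pack ships one
# Restart ComfyUI so the new nodes are picked up.
```

Alternatively, the node pack can usually be installed through ComfyUI Manager by searching for "TeaCache".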
Workflow
- rel_l1_thresh = 0.4 yields roughly 7.6 it/s
- rel_l1_thresh = 0.23 yields roughly 5.5 it/s
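To put those two figures in perspective, the baseline without TeaCache is not stated here, so only the relative difference between the two thresholds can be computed:

```python
# Compare the reported sampling throughput at the two rel_l1_thresh
# settings listed above (it/s figures taken from the text; no baseline
# without TeaCache is given, so only the ratio is meaningful).
rate_high_thresh = 7.6   # it/s at rel_l1_thresh = 0.4
rate_low_thresh = 5.5    # it/s at rel_l1_thresh = 0.23

ratio = rate_high_thresh / rate_low_thresh
print(f"0.4 vs 0.23 threshold: {ratio:.2f}x faster")
```

So moving from 0.23 to 0.4 buys roughly a further 1.4x, at the cost of more visible quality loss.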
[Image comparison: normal image generation without TeaCache vs. image generation with TeaCache]
Expect some quality loss, which makes TeaCache ideal for quick previews: generate drafts quickly, then disable TeaCache or lower its threshold for the full-quality final render.
It works well with LoRA, Redux (Flux tools), and the official Inpainting (Flux tools). TeaCache also supports AMD GPUs via ZLUDA, where a 0.25 threshold cuts generation time roughly in half. In testing, it generally delivers about a 2x speed boost without significant quality loss.