We propose a novel training-free image generation algorithm that precisely controls the occlusion relationships between objects in an image. Existing image generation methods typically rely on prompts to influence occlusion, an approach that often lacks precision, while layout-to-image methods control object locations but do not address occlusion relationships explicitly. Given a pre-trained image diffusion model, our method leverages volume rendering principles to "render" the scene in latent space, guided by occlusion relationships and the estimated transmittance of objects. This approach requires no retraining or fine-tuning of the diffusion model, yet enables accurate occlusion control thanks to its physics-grounded foundation. In extensive experiments, our method significantly outperforms existing approaches in occlusion accuracy. Furthermore, we demonstrate that by adjusting the opacities of objects or concepts during rendering, our method can achieve a variety of effects, such as altering the transparency of objects, the density of mass (e.g., forests), the concentration of particles (e.g., rain, fog), the intensity of light, and the strength of lens effects.
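For context, the discrete volume rendering equation from computer graphics that "latent rendering" builds on is recalled below. This is the standard formulation with generic symbols, not the paper's exact latent-space variant:

```latex
% Standard discrete volume rendering (front-to-back alpha compositing).
% c_i: appearance of the i-th sample along a ray; alpha_i: its opacity;
% T_i: transmittance, the fraction of light that reaches sample i.
C = \sum_{i=1}^{N} T_i \, \alpha_i \, c_i,
\qquad
T_i = \prod_{j=1}^{i-1} \left( 1 - \alpha_j \right).
```

Intuitively, a low-transmittance object in front suppresses the contribution of whatever lies behind it, which is what lets opacity act as a knob for occlusion and the transparency-style effects listed above.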
Figure 2. LaRender replaces cross-attention with Latent Rendering layers. Using an occlusion graph, objects are sorted back-to-front. Each cross-attention layer extracts object-wise latent features, from which transmittance maps are estimated. An orthographic camera captures these features, and Latent Rendering generates a scene that preserves physical occlusion.
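To make the compositing step concrete, here is a minimal PyTorch sketch of back-to-front compositing of object-wise latent features. This is not the paper's implementation: the function name `latent_render`, the tensor shapes, and the use of precomputed per-object opacity maps are assumptions for illustration only.

```python
import torch

def latent_render(background, object_feats, opacities):
    """Hypothetical back-to-front compositing of object-wise latents.

    background:   (C, H, W) latent features of the scene background.
    object_feats: list of (C, H, W) object-wise latent feature maps,
                  sorted back-to-front via the occlusion graph.
    opacities:    list of (H, W) opacity maps in [0, 1], one per object
                  (assumed here to come from the estimated transmittance).
    """
    rendered = background.clone()
    for feats, alpha in zip(object_feats, opacities):
        a = alpha.unsqueeze(0)  # broadcast over the channel dimension
        # "Over" compositing: a nearer object contributes a * feats and
        # attenuates everything already behind it by (1 - a).
        rendered = a * feats + (1.0 - a) * rendered
    return rendered

# Usage: composite two objects over a background, nearest object last.
C, H, W = 4, 64, 64
background = torch.randn(C, H, W)
tree, person = torch.randn(C, H, W), torch.randn(C, H, W)
tree_alpha = torch.rand(H, W)    # farther object
person_alpha = torch.rand(H, W)  # nearer object, occludes the tree
latent = latent_render(background, [tree, person], [tree_alpha, person_alpha])
```

Lowering an entry of `opacities` toward zero makes that object increasingly transparent in the composite, matching the opacity-based effects described in the abstract.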
@inproceedings{zhan2025larender,
  title={LaRender: Training-Free Occlusion Control in Image Generation via Latent Rendering},
  author={Zhan, Xiaohang and Liu, Dingming},
  booktitle={International Conference on Computer Vision},
  year={2025}
}