Diffusion Lens: Interpreting Text Encoders in Text-to-Image Pipelines

Technion - Israel Institute of Technology
ACL 2024

Diffusion Lens with Stable Diffusion 3: Images generated from different intermediate representations of the text encoder using our method. Prompt: "An image of a dog floating in space holding a green bone."

Abstract

Text-to-image diffusion models (T2I) use a latent representation of a text prompt to guide the image generation process. However, the process by which the encoder produces the text representation is unknown. We propose the Diffusion Lens, a method for analyzing the text encoder of T2I models by generating images from its intermediate representations. Using the Diffusion Lens, we perform an extensive analysis of two recent T2I models. Exploring compound prompts, we find that complex scenes describing multiple objects are composed progressively and more slowly compared to simple scenes; Exploring knowledge retrieval, we find that representation of uncommon concepts requires further computation compared to common concepts, and that knowledge retrieval is gradual across layers. Overall, our findings provide valuable insights into the text encoder component in T2I pipelines.

Diffusion Lens


Visualization of the text encoder's intermediate representations using the Diffusion Lens. At each layer of the text encoder, the Diffusion Lens takes the full hidden state at that layer, passes it through the encoder's final layer norm, and feeds the result into the diffusion model in place of the final text representation.
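Below is a minimal sketch of this idea using the Hugging Face diffusers library. It is an illustration under assumptions, not the authors' released code: the model checkpoint ("stabilityai/stable-diffusion-2-1"), variable names, and the use of the CLIP text encoder's final_layer_norm module are chosen to mirror the description above.

```python
# Minimal Diffusion Lens sketch (assumptions noted above; not the authors' code).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "An image of a dog floating in space holding a green bone"
tokens = pipe.tokenizer(
    prompt,
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
).input_ids.to("cuda")

# Run the text encoder once and keep the hidden state of every layer.
with torch.no_grad():
    enc_out = pipe.text_encoder(tokens, output_hidden_states=True)

images = []
for layer_idx, hidden in enumerate(enc_out.hidden_states):
    with torch.no_grad():
        # Diffusion Lens step: apply the encoder's final layer norm to the
        # intermediate hidden state and use it as the prompt embedding
        # for the (unchanged) diffusion model.
        embeds = pipe.text_encoder.text_model.final_layer_norm(hidden)
        image = pipe(prompt_embeds=embeds, num_inference_steps=30).images[0]
    images.append((layer_idx, image))
    image.save(f"diffusion_lens_layer_{layer_idx}.png")
```

Iterating over layers this way produces one image per intermediate representation, which is how the layer-by-layer visualizations on this page are obtained.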

Video Presentation

Insights

Complex prompts emerge later than simpler prompts


Common concepts emerge earlier


Text encoders differ in their computation processes


All Layers Visualization


Poster

Related Work

Tang et al. (2023) What the DAAM: Interpreting Stable Diffusion Using Cross Attention. (ACL 2023) [Paper]

Chefer et al. (2023) The Hidden Language of Diffusion Models. (ICLR 2024) [Paper]

Nostalgebraist (2020) interpreting GPT: the logit lens. (LessWrong blog) [Blog]

BibTeX


        @inproceedings{toker-etal-2024-diffusion,
          title = "Diffusion Lens: Interpreting Text Encoders in Text-to-Image Pipelines",
          author = "Toker, Michael  and
            Orgad, Hadas  and
            Ventura, Mor  and
            Arad, Dana  and
            Belinkov, Yonatan",
          editor = "Ku, Lun-Wei  and
            Martins, Andre  and
            Srikumar, Vivek",
          booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
          month = aug,
          year = "2024",
          address = "Bangkok, Thailand",
          publisher = "Association for Computational Linguistics",
          url = "https://aclanthology.org/2024.acl-long.524",
          doi = "10.18653/v1/2024.acl-long.524",
          pages = "9713--9728",
          abstract = "Text-to-image diffusion models (T2I) use a latent representation of a text prompt to guide the image generation process. However, the process by which the encoder produces the text representation is unknown. We propose the Diffusion Lens, a method for analyzing the text encoder of T2I models by generating images from its intermediate representations. Using the Diffusion Lens, we perform an extensive analysis of two recent T2I models. Exploring compound prompts, we find that complex scenes describing multiple objects are composed progressively and more slowly compared to simple scenes; Exploring knowledge retrieval, we find that representation of uncommon concepts require further computation compared to common concepts, and that knowledge retrieval is gradual across layers. Overall, our findings provide valuable insights into the text encoder component in T2I pipelines.",
      }