Adrien Doerig, Universität Osnabrück

Jul 15, 2024 | 04:00 PM

Title: Visual representations in the human brain are aligned with large language models

Abstract: The human brain extracts complex information from visual inputs, including information about objects, their spatial and semantic interrelations, and their interactions with the environment. However, a quantitative approach for studying this information remains elusive. An intriguing recent finding in artificial intelligence is that linguistic representations improve visual models, suggesting a connection between linguistic and visual representations. Here, we demonstrate a similar link in the brain. Textual captions of natural scenes, when processed by large language models (LLMs), yield representations that successfully characterise brain activity evoked by viewing the natural scenes. This mapping captures selectivities of different brain areas, and is sufficiently robust that accurate scene captions can be reconstructed from brain activity. Using carefully controlled model comparisons, we then show that the accuracy with which LLM representations match brain representations derives from the ability of LLMs to integrate complex information contained in scene captions beyond that conveyed by individual words. Finally, we train deep neural network models to transform raw image inputs into LLM representations. Remarkably, these networks learn representations that are better aligned with brain representations than a large number of state-of-the-art alternative models, despite being trained on orders of magnitude less data. Overall, our results suggest that LLM embeddings of scene captions provide a representational format that accounts for the complex information extracted by the brain from visual inputs.
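
The mapping described in the abstract is, at its core, a voxel-wise encoding model: a linear readout from LLM caption embeddings to brain responses. The sketch below illustrates this general idea with synthetic stand-in data; it is not the authors' actual pipeline, and the embedding dimensionality, voxel count, ridge penalty, and variable names (caption_embeddings, voxel_responses) are all assumptions for illustration only.

```python
# Illustrative sketch: linear (ridge) encoding model from LLM caption embeddings
# to fMRI voxel responses, scored by held-out per-voxel correlation.
# All data below are random stand-ins, not real embeddings or brain recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_scenes, embed_dim, n_voxels = 1000, 768, 500          # assumed sizes
caption_embeddings = rng.standard_normal((n_scenes, embed_dim))   # hypothetical LLM embeddings
true_weights = rng.standard_normal((embed_dim, n_voxels)) * 0.05
voxel_responses = caption_embeddings @ true_weights + rng.standard_normal((n_scenes, n_voxels))

X_train, X_test, Y_train, Y_test = train_test_split(
    caption_embeddings, voxel_responses, test_size=0.2, random_state=0
)

# Fit one ridge model predicting all voxels jointly from the caption embeddings.
model = Ridge(alpha=100.0)
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

def per_voxel_correlation(y_true, y_pred):
    """Pearson correlation between predicted and observed responses, per voxel."""
    y_true = y_true - y_true.mean(axis=0)
    y_pred = y_pred - y_pred.mean(axis=0)
    num = (y_true * y_pred).sum(axis=0)
    den = np.sqrt((y_true ** 2).sum(axis=0) * (y_pred ** 2).sum(axis=0))
    return num / den

corrs = per_voxel_correlation(Y_test, Y_pred)
print(f"mean held-out voxel correlation: {corrs.mean():.3f}")
```

In such a setup, the quality of the encoding fit on held-out scenes serves as the alignment measure between the candidate representation (here, caption embeddings) and brain activity; comparing this score across representations is what allows the controlled model comparisons mentioned in the abstract.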