1. Natural Language Descriptions of Deep Visual Features

Abstract:
Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope, labeling only a small subset of neurons and behaviors in any network. Is a richer characterization of neuron-level computation possible? We introduce a procedure (called MILAN, for mutual-information-guided linguistic annotation of neurons) that automatically labels neurons with open-ended, compositional, natural language descriptions. Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active. MILAN produces fine-grained descriptions that capture categorical, relational, and logical structure in learned features. These descriptions obtain high agreement with ...

Note: To be published as a conference paper at ICLR 2022. (A toy sketch of the PMI objective follows this entry.)

Keywords:
Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV); FOS: Computer and information sciences; Machine Learning (cs.LG)

URL: https://arxiv.org/abs/2201.11114 https://dx.doi.org/10.48550/arxiv.2201.11114
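
The description-search step summarized in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: describe_neuron, captioner_logprob, prior_logprob, and the toy log-probabilities below are hypothetical stand-ins for a captioner conditioned on the neuron's exemplar regions (log p(d | E)) and an unconditional language model (log p(d)).

def pmi(log_p_given_regions, log_p_prior):
    # Pointwise mutual information between a description d and the image
    # regions E in which the neuron is active:
    #   PMI(d; E) = log p(d | E) - log p(d)
    return log_p_given_regions - log_p_prior

def describe_neuron(candidates, captioner_logprob, prior_logprob):
    # Search candidate natural language strings and return the one that
    # maximizes PMI with the neuron's exemplar regions.
    return max(candidates,
               key=lambda d: pmi(captioner_logprob(d), prior_logprob(d)))

# Toy usage with made-up log-probabilities (illustrative numbers only):
caps = {"an image": -1.5, "a dog": -2.0, "dog ears and fur": -3.0}
prior = {"an image": -2.0, "a dog": -4.0, "dog ears and fur": -9.0}
print(describe_neuron(caps, caps.__getitem__, prior.__getitem__))
# -> "dog ears and fur"

Maximizing log p(d | E) alone would favor generic high-probability captions like "an image"; subtracting log p(d) rewards strings that are unusually likely given the regions where the neuron fires, which is why the more specific description wins above.
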
3. Language as a bootstrap for compositional visual reasoning
In: Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 43, iss. 43 (2021)

5. Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales

6. Implicit Representations of Meaning in Neural Language Models

8. How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction

11. What Context Features Can Transformer Language Models Use?

12. Quantifying Adaptability in Pre-trained Language Models with 500 Tasks

13. One-Shot Lexicon Learning for Low-Resource Machine Translation

16. The Low-Dimensional Linear Geometry of Contextualized Word Representations

20. A Benchmark for Systematic Generalization in Grounded Language Understanding