1. Rapid Ocular Responses Are Modulated by Bottom-up-Driven Auditory Salience (BASE)

2. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments (BASE)

3. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments (BASE)

4. Perception of Filtered Speech by Children with Developmental Dyslexia and Children with Specific Language Impairments (BASE)

5. Multi-time resolution analysis of speech: evidence from psychophysics (BASE)

Abstract: How speech signals are analyzed and represented remains a foundational challenge both for cognitive science and neuroscience. A growing body of research, employing various behavioral and neurobiological experimental techniques, now points to the perceptual relevance of both phoneme-sized (10–40 Hz modulation frequency) and syllable-sized (2–10 Hz modulation frequency) units in speech processing. However, it is not clear how information associated with such different time scales interacts in a manner relevant for speech perception. We report behavioral experiments on speech intelligibility employing a stimulus that allows us to investigate how distinct temporal modulations in speech are treated separately and whether they are combined. We created sentences in which the slow (~4 Hz; S_low) and rapid (~33 Hz; S_high) modulations—corresponding to ~250 and ~30 ms, the average duration of syllables and certain phonetic properties, respectively—were selectively extracted. Although S_low and S_high have low intelligibility when presented separately, dichotic presentation of S_high with S_low results in supra-additive performance, suggesting a synergistic relationship between low- and high-modulation frequencies. A second experiment desynchronized presentation of the S_low and S_high signals. Desynchronizing signals relative to one another had no impact on intelligibility when delays were less than ~45 ms. Longer delays resulted in a steep intelligibility decline, providing further evidence of integration or binding of information within restricted temporal windows. Our data suggest that human speech perception uses multi-time resolution processing. Signals are concurrently analyzed on at least two separate time scales, the intermediate representations of these analyses are integrated, and the resulting bound percept has significant consequences for speech intelligibility—a view compatible with recent insights from neuroscience implicating multi-timescale auditory processing.

Keyword: Psychology

URL: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4468943/
DOI: https://doi.org/10.3389/fnins.2015.00214

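The abstract above describes selectively extracting slow (~4 Hz) and rapid (~33 Hz) temporal modulations from speech. A minimal sketch of that band-splitting idea, assuming simple Butterworth band-pass filters applied to a modulation envelope with the 2–10 Hz and 10–40 Hz bands cited in the abstract (the paper's actual stimulus construction is more involved; the filter design here is illustrative only):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def modulation_band(envelope, fs, lo_hz, hi_hz, order=4):
    """Zero-phase band-pass filter isolating one modulation band."""
    nyq = fs / 2.0
    b, a = butter(order, [lo_hz / nyq, hi_hz / nyq], btype="band")
    return filtfilt(b, a, envelope)  # filtfilt avoids phase/delay distortion

# Toy envelope with a 4 Hz (syllable-rate) and a 33 Hz (phoneme-rate) component
fs = 1000  # samples per second
t = np.arange(fs) / fs
env = np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 33 * t)

slow = modulation_band(env, fs, 2, 10)    # syllable-sized band (S_low)
fast = modulation_band(env, fs, 10, 40)   # phoneme-sized band (S_high)
```

In the dichotic condition described in the abstract, the S_low and S_high signals would then be routed to opposite ears; the desynchronization experiment corresponds to delaying one band relative to the other before presentation.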
6. "Change deafness" arising from inter-feature masking within a single auditory object (BASE)

7. Cortical responses to changes in acoustic regularity are differentially modulated by attentional load (BASE)

8. Brain-speech alignment enhances auditory cortical responses and speech perception (BASE)
In: Journal of Neuroscience, Vol. 32, No. 1 (2012), pp. 275-281. ISSN 0270-6474.

9. Brain–Speech Alignment Enhances Auditory Cortical Responses and Speech Perception (BASE)