1 | Integrating when and what information in the left parietal lobe allows language rule generalization
In: PLoS Biol (2020)

3 | Searching High and Low: Prosodic Breaks Disambiguate Relative Clauses

4 | The interplay between semantic and phonological constraints during spoken-word comprehension
In: Psychophysiology (ISSN 0048-5772, EISSN 1469-8986), Wiley, 2015, 52(1), pp. 46–58. doi:10.1111/psyp.12285. HAL: https://hal.univ-lille.fr/hal-01911767

5 | Synchronization by the hand: the sight of gestures modulates low-frequency activity in brain responses to continuous speech

6 | Beat gestures and speech processing: when prosody extends to the speaker's hands
In: TDX (Tesis Doctorals en Xarxa) (2015)

7 | Effect of attentional load on audiovisual speech perception: evidence from ERPs

9 | Neural correlates of audiovisual speech processing in a second language

13 | Visual information constrains early and late stages of spoken-word recognition in sentence context
In: International Journal of Psychophysiology (ISSN 0167-8760), Elsevier, 2013, 89(1), pp. 136–147. doi:10.1016/j.ijpsycho.2013.06.016. HAL: https://hal.univ-lille.fr/hal-01911769

14 | Cross-modal predictive mechanisms during speech perception
In: TDX (Tesis Doctorals en Xarxa) (2013)

16 | Cross-Modal Prediction in Speech Perception

Abstract: Speech perception often benefits from vision of the speaker's lip movements when they are available. One potential mechanism underlying this reported gain in perception arising from audio-visual integration is on-line prediction. In this study we address whether the preceding speech context in a single modality can improve audiovisual processing and whether this improvement is based on on-line information-transfer across sensory modalities. In the experiments presented here, during each trial, a speech fragment (context) presented in a single sensory modality (voice or lips) was immediately continued by an audiovisual target fragment. Participants made speeded judgments about whether voice and lips were in agreement in the target fragment. The leading single sensory context and the subsequent audiovisual target fragment could be continuous in one modality only, in both (context in one modality continues into both modalities in the target fragment), or in neither modality (i.e., discontinuous). The results showed quicker audiovisual matching responses when context was continuous with the target within either the visual or auditory channel (Experiment 1). Critically, prior visual context also provided an advantage when it was cross-modally continuous (with the auditory channel in the target), but auditory-to-visual cross-modal continuity resulted in no advantage (Experiment 2). This suggests that visual speech information can provide an on-line benefit for processing the upcoming auditory input through the use of predictive mechanisms. We hypothesize that this benefit is expressed at an early level of speech analysis.

Keyword: Research Article

URL: https://doi.org/10.1371/journal.pone.0025198 ; http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3187777 ; http://www.ncbi.nlm.nih.gov/pubmed/21998642

17 | Perceptual load influences auditory space perception in the ventriloquist aftereffect