
Search in the Catalogues and Directories

Page: 1 2
Hits 1 – 20 of 26

1
Rapid Assessment of Non-Verbal Auditory Perception in Normal-Hearing Participants and Cochlear Implant Users
In: ISSN: 2077-0383 ; Journal of Clinical Medicine ; https://hal.archives-ouvertes.fr/hal-03413817 ; Journal of Clinical Medicine, MDPI, 2021, 10 (10), pp.2093. ⟨10.3390/jcm10102093⟩ (2021)
BASE
2
Rapid Assessment of Non-Verbal Auditory Perception in Normal-Hearing Participants and Cochlear Implant Users
In: ISSN: 2077-0383 ; Journal of Clinical Medicine ; https://hal.archives-ouvertes.fr/hal-03375356 ; Journal of Clinical Medicine, MDPI, 2021, 10 (10), pp.2093. ⟨10.3390/jcm10102093⟩ (2021)
3
The Iambic Trochaic Law ...
Wagner, Michael. - : Open Science Framework, 2021
6
Sound source segregation of multiple concurrent talkers via Short-Time Target Cancellation
7
Complex acoustic environments: concepts, methods and auditory perception
Weisser, Adam. - : Sydney, Australia : Macquarie University, 2018
8
Temporal processing in audition: insights from music
9
Early cortical metabolic rearrangement related to clinical data in idiopathic sudden sensorineural hearing loss
Schillaci, O.; Alessandrini, M.; Chiaravalloti, A. - : Elsevier Science BV, 2017
10
Using Energy Difference for Speech Separation of Dual-microphone Close-talk System
In: http://www.sensorsportal.com/HTML/DIGEST/may_2013/Special_issue/P_SI_353.pdf (2013)
11
Cortical responses to changes in acoustic regularity are differentially modulated by attentional load
12
Temporal Coding of Speech in Human Auditory Cortex
Ding, Nai. - 2012
13
Monaural speech separation and recognition challenge
: Elsevier, 2011
14
Monaural speech separation and recognition challenge
In: ISSN: 0885-2308 ; EISSN: 1095-8363 ; Computer Speech and Language ; https://hal.archives-ouvertes.fr/hal-00598185 ; Computer Speech and Language, Elsevier, 2009, 24 (1), pp.1. ⟨10.1016/j.csl.2009.02.006⟩ (2009)
15
Monaural Speech Segregation by Integrating Primitive and Schema-Based Analysis
In: DTIC (2008)
16
Hearing vs. Listening: Attention Changes the Neural Representations of Auditory Percepts
Xiang, Juanjuan. - 2008
17
A Computational Auditory Scene Analysis System for Speech Segregation and Robust Speech Recognition
18
Isolating the Energetic Component of Speech-on-Speech Masking With Ideal Time-Frequency Segregation
In: DTIC (2006)
19
Speech recognition with amplitude and frequency modulations
In: Zeng, F. G.; Nie, K.; Stickney, G. S.; Kong, Y. Y.; Vongphoe, M.; Bhargave, A.; et al. (2005). Speech recognition with amplitude and frequency modulations. Proceedings of the National Academy of Sciences of the United States of America, 102(7), 2293–2298. UC Irvine: Retrieved from: http://www.escholarship.org/uc/item/1tn280m7 (2005)
20
ARTSTREAM: A Neural Network Model of Auditory Scene Analysis and Source Segregation
Cohen, Michael; Grossberg, Stephen; Wyse, Lonce; Govindarajan, Krishna. - : Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems, 2003
Abstract: Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the ARTSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations.
The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch, whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
Funding: Air Force Office of Scientific Research (F49620-01-1-0397, F49620-92-J-0225); Office of Naval Research (N00014-01-1-0624); Advanced Research Projects Agency (N00014-92-J-4015); British Petroleum (89A-1204); National Science Foundation (IRI-90-00530); American Society of Engineering Education
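The pitch-based grouping principle described in this abstract can be illustrated with a toy sketch (an assumption-laden simplification, not the model's actual neural implementation): components consistent with harmonics of a selected pitch are captured into one stream, and the unmatched remainder stays available for capture by another stream, in the spirit of the old-plus-new heuristic. The function name, tolerance value, and example frequencies below are all hypothetical choices for illustration.

```python
# Toy sketch of harmonic grouping: NOT the ARTSTREAM network itself.
# Components near integer multiples of a candidate pitch join its stream;
# the residual is left free to be captured by another stream.

def group_by_pitch(components_hz, pitch_hz, tol=0.03):
    """Split components into (matched, residual) by harmonic proximity.

    A component f matches if it lies within tol * f of some integer
    multiple of pitch_hz (a crude stand-in for the model's harmonic
    bottom-up filter and top-down expectation).
    """
    matched, residual = [], []
    for f in components_hz:
        harmonic = round(f / pitch_hz)  # nearest harmonic number
        if harmonic >= 1 and abs(f - harmonic * pitch_hz) <= tol * f:
            matched.append(f)           # consistent with this pitch
        else:
            residual.append(f)          # left for another stream
    return matched, residual

# Two overlapping harmonic sources with 200 Hz and 310 Hz fundamentals
mixture = [200, 310, 400, 600, 620, 930]
stream_a, rest = group_by_pitch(mixture, 200)   # [200, 400, 600]
stream_b, noise = group_by_pitch(rest, 310)     # [310, 620, 930]
```

Running the first pass captures the 200 Hz source's harmonics and leaves the 310 Hz components in the residual, which the second pass then captures, mirroring how a suppressed component is freed for another stream in the model.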
Keyword: Adaptive Resonance Theory (ART); ART; Auditory scene analysis; Cocktail party problem; Neural networks; Pitch perception; Resonance; Spatial localization; Spectral-pitch resonance; Streaming
URL: https://hdl.handle.net/2144/1914


© 2013 - 2024 Lin|gu|is|tik