
Workshop 2019 – Talks

Speaker: Jana Bašnáková   
Co-authors:
Title: "'It’s hard to give a good talk' – the neural correlates of interpreting implicit meaning"
Affiliations: Slovak Academy of Sciences
Abstract: Even though language allows us to say exactly what we mean, we often use it to express things in a way that depends on the specific communicative context. One of the big puzzles in language science is how listeners work out what speakers really mean. In this talk, I will first outline the psycholinguistic approach to what meaning is, contrasting sentence-level meaning with “what is said” and “what is implicated”. Then, I will focus on the comprehension of indirect replies as an example of implicated meaning and present two functional magnetic resonance imaging (fMRI) studies. Both studies compared utterances identical at the sentence level, rendered direct or indirect only by the preceding conversation between two actors. In the first one (Bašnáková, Weber, Petersson, van Berkum, & Hagoort, 2014), we compared indirect replies with face-saving motives (“Did you like my presentation?” “It’s hard to give a good presentation.”) to more neutral, informative indirect replies (the same target answer preceded by “Will you choose a poster or a presentation for the student conference?”) and baseline direct replies (“How difficult is it to give a good presentation?”). In the second, to make the indirect face-saving replies more personally relevant to the participants, we constructed a mock job-interview paradigm in which the participants acted either as direct addressees or as overhearers of direct and face-saving indirect replies (Bašnáková, van Berkum, Weber, & Hagoort, 2015). I will discuss the main findings of these studies in terms of which brain networks, besides the left-lateralized core language regions, underlie language interpretation in a social context.
Speaker: Robert Baumgartner   
Co-authors: Yuqi Deng (Boston University); Inyong Choi (University of Iowa); Barbara Shinn-Cunningham (Carnegie Mellon); Brigitta Tóth (Hungarian Academy of Sciences)
Title: "Effects of spatial auditory cue realism on selective attention control and future perspectives on studying perceptual learning of these cues across the human lifespan"
Affiliations: Austrian Academy of Sciences
Abstract: Spatial selective attention enables listeners to process a signal of interest in natural settings. However, most past studies on auditory spatial attention used impoverished or isolated spatial cues and may have underestimated the neural effects. To demonstrate that impoverished spatial cues impair spatial auditory attention by only weakly engaging relevant cortical networks, we tested listeners in a streaming task while assessing electroencephalographic markers of selective attention. Spatial attention significantly modulated initial cortical response magnitudes only for natural cue combinations, but not in conditions using isolated cues. Consistent with this, parietal oscillatory power in the alpha band showed less attentional modulation with isolated spatial cues than with natural cues. Another very important task of the auditory system is to constantly monitor the environment to protect us from harmful events such as collisions with approaching objects. The auditory looming bias is an astonishingly fast perceptual bias favoring approaching over receding auditory motion, and has been demonstrated behaviorally in infants as young as four months of age. The role of learning in developing this perceptual bias and its underlying mechanisms have yet to be investigated at the different stages of life. While newborns already possess basic skills of spatial hearing, adults are still able to adapt to changing circumstances such as modifications of the spectral-shape cues that are naturally induced by the human pinna. As we recently showed that changes in the salience of spectral-shape cues can be used to elicit the auditory looming bias, we will use these cues to jointly investigate auditory looming bias and auditory plasticity.
Speaker: Virginia Best 
Co-authors:
Title: "Investigating a visually guided hearing aid"
Affiliations: Boston University
Abstract: Understanding speech in noise continues to be the primary complaint of listeners with hearing loss. One of the few available ways to improve speech intelligibility in noise is to preferentially amplify sounds from one direction, and many sophisticated beamforming algorithms now exist that combine the signals from multiple microphones to create extremely narrow spatial tuning. In this talk I will discuss two conceptual issues with beamformers. First, the primary output is generally a single channel in which binaural information is lost. This has implications for locating sounds in the environment and for segregating competing sounds (e.g., in “cocktail party” scenarios). We have been exploring several strategies for preserving binaural information, and I will describe two experiments that evaluated these strategies. The second issue is that beamformers typically emphasize one fixed direction, whereas the target of interest in many real-world situations can change location rapidly and unpredictably (e.g., in a group conversation). As a solution to this, we have developed a visually guided hearing aid, in which the user can flexibly control the acoustic look direction of the beamformer using eye gaze. I will describe an experiment that evaluates this concept using a novel question-and-answer task.
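To make the fixed-direction limitation concrete, below is a minimal delay-and-sum beamformer sketch in Python (not the algorithms evaluated in the talk; the array geometry, steering angle, and signals are hypothetical). Note that the output is a single summed channel, which is exactly where binaural information is lost:

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_dir_deg, fs, c=343.0):
    """Steer a linear microphone array toward look_dir_deg by applying
    per-channel fractional delays (via the FFT) and summing."""
    theta = np.deg2rad(look_dir_deg)
    delays = mic_positions * np.sin(theta) / c        # far-field delays, in seconds
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    out = np.zeros(n)
    for sig, tau in zip(mic_signals, delays):
        spectrum = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n)
    return out / len(mic_signals)                     # single channel: binaural cues gone

# Hypothetical 4-microphone array with 2 cm spacing, steered 30° to the right.
fs = 16000
mic_positions = np.arange(4) * 0.02
x = np.random.randn(4, fs)                            # stand-in for recorded channels
y = delay_and_sum(x, mic_positions, look_dir_deg=30, fs=fs)
```

In a visually guided version, look_dir_deg would simply be updated from the eye tracker's current gaze azimuth on each processing block.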
Speaker: Lauren Calandruccio  
Co-authors:
Title: "Masked-sentence recognition: the effect of target and masker speech similarity"
Affiliations: Case Western Reserve University, Ohio
Abstract: Speech recognition can be challenging when multiple people are talking at the same time. The difficulty that people experience in these types of listening scenarios is often associated with informational masking. Two main stimulus features have been suggested to increase informational masking: stimulus uncertainty and target/masker similarity. A recent focus of our lab has been to try to improve the definition of target/masker “similarity” with respect to speech-on-speech recognition. To do this, we tested masked-sentence recognition by manipulating specific features of the target and masker speech. In this presentation, data from listeners with normal hearing will be presented to explore the similarity between sentence-level semantic meaning of the target and masker speech, the fundamental frequency contour of the target and masker speech, and differences in sentence-level coarticulation of the target and masker speech.
Speaker: Inyong Choi   
Co-authors: Kyung-Joong Kim, Nicholas Giuliani, Subong Kim, Camille Dunn, Ruth Litovsky, and Bruce Gantz
Title: “Adapting to simultaneous electric and acoustic stimulation for word-in-noise recognition in listeners with single-sided deafness"
Affiliations: The University of Iowa
Abstract: Sudden single-sided deafness (SSD) is not rare; at the University of Iowa Hospitals and Clinics, of the 426 patients implanted since 2010, 53 were SSD (12%). SSD listeners experience difficulty understanding speech in noise due to the lack of binaural benefits in speech unmasking, for which cochlear implantation (CI) in the deafened ear can be a solution. However, it is unclear whether CI recovers the central auditory functions crucial for speech-in-noise understanding despite the sensory disparity between acoustic and electric stimulation. Our contradictory hypotheses are: (1) CI in the deafened ear will be beneficial by facilitating binaural release from masking, although (2) CI-induced ambiguity in speech cues may degrade phonological processing. We tested these hypotheses by characterizing neurophysiological substrates of CI-induced changes in two consecutive but distinct time windows, corresponding to (1) speech unmasking and (2) phonological processing, respectively, using simultaneous pupillometry and electroencephalographic (EEG) recordings during a word-in-noise recognition task. We found that CI reduces pupil dilation during the speech-unmasking stage, although it increases pupil dilation during the later word presentation/retrieval stage, where listeners must exert more effort to fuse their electric and acoustic inputs. A consistent result was found in the EEG: CI induced weaker frontal and occipital alpha oscillations during speech unmasking, but greater alpha power after the target word was received. Behaviorally, CI improved speech-in-noise recognition accuracy while delaying reaction times. Based on these results, we claim that CI in SSD listeners has mixed effects on speech-in-noise understanding: as CI makes it easier to recognize the direction of the background noise, it decreases cognitive load during the speech-unmasking period; however, CI increases the processing load after the speech cues of the target word arrive. Follow-up studies will investigate whether long-term CI use reduces this processing load during speech perception.
Speaker: John Culling 
Co-authors: John F. Culling 1, Sam Jelfs 2, Jacques Grange 1, Barry Bardsley 1, Elli Ainge 1, Mathieu Lavandier 3
Title: "How to optimise speech intelligibility in rooms"
Affiliations: 1 School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, CF10 3YG. U.K. 2 Philips Research Europe, Eindhoven, The Netherlands. 3 Université de Lyon, Ecole Nationale des Travaux Publics de l'Etat, Département Génie Civil et Bâtiment, Unité CNRS 1652, Rue M. Audin, 69518 Vaulx-en-Velin Cedex, France
Abstract: A model of spatial release from masking in reverberant environments has proved successful in predicting a wide range of empirical data. Here, we show how the model can be used to predict optimal behavior in such environments. A head orientation of around 30° away from the target speaker was predicted, and then shown empirically, to improve speech intelligibility for normally hearing listeners, hearing-impaired listeners, and cochlear implant users. This degree of head orientation was also shown to be compatible with lip-reading and robust to reverberation. Conventional directional microphones were predicted to work well in reverberation, and also to produce better intelligibility with a head turn. However, location within a room, even one with uniformly distributed interfering sound sources, was predicted to have a strong influence on the effective signal-to-noise ratio and on the benefits of head orientation and directional microphones. Certain locations were predicted to bring all these benefits together, and others to largely confound them. These differences were confirmed empirically for the signal-to-noise-ratio and head-orientation benefits using virtual acoustics.
Speaker: Erick Gallun 
Co-authors: Frederick Gallun (VA Portland Health Care System and Oregon Health and Science University); Aaron Seitz (UC Riverside); Lauren Calandruccio (Case Western Reserve University); Pamela Souza (Northwestern University); Esteban Lelo de Larrea-Mancera (UC Riverside); Kasey Jakien (Oregon Health and Science University); Tess Koerner (VA Portland Health Care System)
Title: "Flipping the laboratory: Clinical research tools for bringing psychoacoustical testing to the patient"
Affiliations: National Center for Rehabilitative Auditory Research, Portland and OHSU
Abstract: Modern consumer electronics have progressed to the point where auditory and visual stimuli can be controlled and presented as carefully in the field as has traditionally been done in the laboratory. Furthermore, the cost of this equipment is a fraction of what specialized laboratory equipment would cost. This talk will describe the Portable Automated Rapid Test (PART) system, free software that runs on easily obtainable hardware and can present experimental tests that measure psychoacoustical performance as accurately as any laboratory system currently in use. Currently implemented tests of binaural sensitivity; spectral, temporal, and spectrotemporal modulation detection; and spatial release from masking will be described. Performance estimates for these tests, collected from listeners who vary substantially in age and hearing ability, will be shown to be as accurate as those from published laboratory measures of the same tests in similar listener groups. Implications of the availability of such measures for increasing the rigor, reproducibility, and efficiency of clinical testing will be discussed.
Speaker: Andrej Kráľ 
Co-authors: Prasandhya A. Yusuf 1, Peter Hubka 1, Jochen Tillein 1,2, Andrej Kráľ 1,3
Title: "Effective Connectivity Between Primary and Secondary Cortical Areas is Shaped by Early Hearing"
Affiliations: 1 Hannover Medical School, Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover, Germany. 2 J.W. Goethe University, Department of Otorhinolaryngology, Frankfurt am Main, Germany. 3 School of Medicine and Health Sciences, Macquarie University, Sydney, Australia.
Abstract: Cochlear stimulation activates the auditory cortex via thalamocortical inputs. Cortical responses are subsequently embedded into ongoing cortical processing via corticocortical interactions, providing information on the context of the stimulus. Stimulus-related activity is reflected in local field potentials (LFPs) in the form of evoked responses (phase-locked to the stimulus, reflecting the thalamic input) and induced responses (non-phase-locked activity, representing corticocortical processing). The effect of auditory experience on evoked and induced responses in the primary auditory cortex (A1) and a higher-order auditory field (posterior auditory field, PAF) was evaluated using time-frequency representations (TFRs) of auditory responses in adult hearing controls (HCs) and congenitally deaf cats (CDCs; Kral and Sharma, 2012, Trends Neurosci; Kral et al., 2019, Annu Rev Neurosci). Evoked and induced TFR power was calculated using wavelet analysis (Yusuf et al., 2017, Brain). Coupling strength between A1 and PAF was estimated using the weighted phase-lag index, pairwise phase consistency, and Granger causality. Evoked responses appeared mainly at early latencies (<100 ms), while induced responses were more abundant at long latencies (>100 ms), corresponding to their assumed roles in thalamocortical vs. corticocortical processing, respectively. In HCs, electric stimulation resulted in reduced induced activity compared to acoustic stimulation, indicating an effect of the stimulation mode on induced responses. The comparison of electrically elicited responses between HCs and CDCs showed no significant effect of deafness on A1 evoked responses, but a near-complete loss of A1 and PAF induced responses in CDCs, particularly at longer latencies. Furthermore, the coupling between A1 and PAF was significantly weaker in CDCs. The results demonstrate that developmental hearing experience shapes the auditory connectome and allows the integration of sensory input into ongoing corticocortical processing, and thus the integration of sensory stimuli into the context and the internal model of the environment.
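As a sketch of the evoked/induced distinction described above (a minimal illustration, not the pipeline of Yusuf et al., 2017, and using one common convention in which induced power is total single-trial power minus the evoked part):

```python
import numpy as np

def morlet_power(x, fs, freq, n_cycles=7):
    """Time course of power at one frequency via convolution with a complex Morlet wavelet."""
    sigma = n_cycles / (2 * np.pi * freq)             # wavelet width in seconds
    t = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalization
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

def evoked_and_induced(trials, fs, freq):
    """trials: (n_trials, n_samples) stimulus-locked LFP epochs.
    Evoked power  = power of the trial average (phase-locked part).
    Induced power = mean single-trial power minus the evoked part."""
    evoked = morlet_power(trials.mean(axis=0), fs, freq)
    total = np.mean([morlet_power(tr, fs, freq) for tr in trials], axis=0)
    return evoked, total - evoked

# Hypothetical data: 50 epochs of 500 ms at 1 kHz, probed at 40 Hz.
fs, trials = 1000, np.random.randn(50, 500)
evoked, induced = evoked_and_induced(trials, fs, freq=40.0)
```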
Speaker: Norbert Kopčo & Eleni Vlahou  
Co-authors: Keerthi Doreswamy 1,2, Jyrki Ahveninen 2
Title: "Adaptation to Reverberation in Speech and Distance Perception"
Affiliations: 1 Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia. 2 Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown, MA 02129
Abstract: Reverberation affects many aspects of auditory perception. In speech perception, its effect is mostly detrimental, as phonetic features are distorted by reverberation, while in distance perception the effect is beneficial, as the direct-to-reverberant energy ratio (DRR) can be used as a level-independent distance cue. Here we present two studies, one focusing on speech and the other on distance processing. In the speech perception experiment, we examined how previous exposure to consistent vs. inconsistent reverberation enhances vs. disrupts consonant identification in a reverberant room. Our results suggest that short-term exposure to a consistent room facilitates the perception of a wide range of speech sounds. However, the effect varies substantially across rooms and phonemes, and may be diminished for certain sounds presented in very challenging listening environments. In the distance study, we performed a combined behavioral and fMRI experiment to examine how distance perception and its neural representation depend on the direction from which the stimulus is presented. Distance performance is expected to vary with direction, as DRR is the only level-independent distance cue for frontal targets, whereas both DRR and ILD (interaural level difference) are available for lateral targets. Behavioral distance performance was indeed better for sources coming from the side, illustrating how cue weighting adapts to reflect the availability of cues in different directions. fMRI activations to sounds varying in distance were similar for the two directions, localized in the planum temporale and superior temporal gyrus, with the strongest activations centered in posterior auditory cortex areas. This suggests that the identified distance areas contain a representation independent of direction or of the specific binaural cues. Taken together, these studies illustrate the importance of adaptive reverberation processing for everyday listening.
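The DRR cue central to the distance study has a simple operational definition; below is a toy computation from a room impulse response (our illustrative sketch with a hypothetical RIR and window length, not the stimuli used in the study):

```python
import numpy as np

def drr_db(rir, fs, direct_window_ms=2.5):
    """Direct-to-reverberant energy ratio: energy in a short window around
    the direct-path peak of the impulse response vs. everything after it."""
    peak = np.argmax(np.abs(rir))
    w = int(direct_window_ms * 1e-3 * fs)
    direct = np.sum(rir[max(0, peak - w):peak + w] ** 2)
    reverberant = np.sum(rir[peak + w:] ** 2)
    return 10 * np.log10(direct / reverberant)

# Hypothetical RIR: a direct-path impulse plus an exponentially decaying tail.
fs = 44100
rir = np.zeros(fs // 2)
rir[100] = 1.0                                        # direct sound
n_tail = len(rir) - 300
rir[300:] = 0.1 * np.random.randn(n_tail) * np.exp(-np.arange(n_tail) / (0.3 * fs))
print(f"DRR = {drr_db(rir, fs):.1f} dB")
```

Because direct energy falls with source distance while reverberant energy stays roughly constant in a room, DRR decreases with distance regardless of overall level, which is what makes it level-independent.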
Speaker: Bernhard Laback
Co-authors: Maike Ferber 1, Norbert Kopco 2
Title: "Re-weighting of binaural cues based on visual feedback"
Affiliations: 1 Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria. 2 Institute of Computer Science, P. J. Šafárik University, Košice, Slovakia.
Abstract: Azimuthal sound localization is often assumed to rely on a fixed frequency-weighted combination of interaural time difference (ITD) and interaural level difference (ILD) cues. However, there are several reasons why the auditory system may need to dynamically adapt the binaural cue weights, including temporary physiological changes in sound transmission due to middle-ear infections, or variable room acoustics. Such plasticity in binaural cue weighting may also contribute to the low or sometimes completely absent ITD sensitivity of listeners supplied with bilateral cochlear implants (CIs): since their clinical devices do not reliably convey ITD cues, listeners may learn to "ignore" ITD cues and rely entirely on ILD cues for sound localization. In this talk, we first present results of a lateralization experiment testing the hypothesis that normal-hearing listeners adapt their binaural cue weights if visual feedback consistently reinforces one of the two cues while the other cue points to various spatially inconsistent azimuthal positions. The stimulus was a band-pass-filtered noise centered at 2.8 kHz. The results for two listener groups, for which either ITD or ILD cues were visually reinforced (N=10 each) in a seven-day virtual audio-visual lateralization training, indeed showed an increased weight of the reinforced cue in a posttest following the training compared to a pretest. For the ILD group, the reweighting occurred as early as the first training session, whereas the amount of training required for the ITD group was not clearly determinable. Second, we present preliminary results from an experiment with CI listeners trained on ITD cues. Stimuli were pulse trains with rates of either 100 or 300 pulses/s, the latter condition yielding overall lower ITD sensitivity. The results for the one listener tested so far showed significantly increased ITD weighting (within-subject statistics) in the posttest vs. the pretest for the more difficult 300-pps condition, but no significant effect for the 100-pps condition. Overall, the results for the NH listeners and for the CI listener tested so far suggest plasticity in the weighting of binaural localization cues depending on the demands of the environment.
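The weighting notion being tested can be written as a simple linear cue combination; here is a toy sketch of such a model with an illustrative error-driven reweighting rule (the update rule is our assumption, not the analysis used in the study):

```python
import numpy as np

def combined_azimuth(az_itd, az_ild, w_itd):
    """Perceived azimuth as a weighted sum of the azimuths indicated
    separately by the ITD and the ILD cue (weights sum to 1)."""
    return w_itd * az_itd + (1.0 - w_itd) * az_ild

def reweight(w_itd, az_itd, az_ild, az_feedback, lr=0.05):
    """Shift weight toward whichever cue better matches the visual feedback."""
    err_itd = abs(az_itd - az_feedback)
    err_ild = abs(az_ild - az_feedback)
    return float(np.clip(w_itd + lr * np.sign(err_ild - err_itd), 0.0, 1.0))

# Feedback consistently reinforces the ITD cue; the ILD cue points elsewhere.
rng = np.random.default_rng(1)
w = 0.5
for _ in range(30):
    az_itd = rng.uniform(-40, 40)                     # reinforced cue
    az_ild = az_itd + rng.uniform(-30, 30)            # spatially inconsistent cue
    w = reweight(w, az_itd, az_ild, az_feedback=az_itd)
print(f"ITD weight after training: {w:.2f}")          # drifts toward 1
```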
Speaker: Piotr Majdak 
Co-authors: Robert Baumgartner
Title: "Computational models for listener-specific predictions of spatial audio quality"
Affiliations: Austrian Academy of Sciences
Abstract: Millions of people use headphones every day for listening to music, watching movies, or communicating with others. Nevertheless, sounds presented via headphones are usually perceived inside the head instead of being localized at a naturally external position. Besides externalization and localization, spatial hearing also involves perceptual attributes like apparent source width, listener envelopment, and the ability to segregate sounds. The acoustic basis for spatial hearing is described by the listener-specific head-related transfer functions (HRTFs). In this talk, we will focus on the dimensions of sound localization that are particularly sensitive to listener-specific HRTFs, that is, along sagittal planes (vertical planes orthogonal to the interaural axis) and at near distances (sound externalization/internalization). We will discuss recent findings from binaural virtual acoustics and models aimed at predicting sound externalization and localization in sagittal planes given the listener’s HRTFs. We aim to shed light on the diversity of cues causing degraded sound externalization under spectral distortions by conducting a model-based meta-analysis of psychoacoustic studies. As potential cues we consider monaural and interaural spectral shapes, spectral and temporal fluctuations of interaural level differences, interaural coherence, and broadband inconsistencies between interaural time and level differences, all within a highly comparable template-based modeling framework. Mere differences in sound pressure level between target and reference stimuli were used as a control cue. Our investigations revealed that the monaural spectral shapes and the strengths of time-intensity trading are potent cues to explain previous results under anechoic conditions. However, future experiments will be required to unveil the actual essence of these cues.
Speaker: Petr Maršálek  
Co-authors: Zbynek Bures, College of Polytechnics, Tolsteho 16/1556, 586 01, Jihlava, Czech Republic
Title: "Just noticeable differences in low frequencies below 500 Hz, loudness, localization; model and psychophysics"
Affiliations: Institute of Pathological Physiology, First Medical Faculty, Charles University in Prague, U Nemocnice 5/478, 128 53, Praha 2, Czech Republic
Abstract: Low-frequency sounds are encoded by phase-locked action potentials, onsets, and tonotopy. We use a dead-time Poisson process as a model of early sound processing. To simplify cortical auditory processing, we use a model based on the well-known ideal-observer description. We compare our predictions of loudness and azimuth perception with psychophysical experiments.
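A dead-time Poisson process is a Poisson process in which each spike is followed by an absolute refractory interval; a minimal generator looks like this (a sketch under assumed parameters, not the authors' implementation):

```python
import numpy as np

def dead_time_poisson(rate_hz, dead_time_s, duration_s, rng=None):
    """Spike times of a Poisson process with an absolute dead time:
    each exponential inter-spike interval is lengthened by dead_time_s,
    so the effective rate is rate_hz / (1 + rate_hz * dead_time_s)."""
    rng = rng or np.random.default_rng()
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz) + dead_time_s
        if t >= duration_s:
            return np.array(times)
        times.append(t)

# Hypothetical auditory-nerve-like train: 200 spikes/s nominal, 1 ms dead time.
spikes = dead_time_poisson(rate_hz=200, dead_time_s=0.001, duration_s=1.0)
print(f"{len(spikes)} spikes, shortest ISI = {np.diff(spikes).min() * 1000:.2f} ms")
```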
Speaker: Catarina Mendonca
Co-authors:
Title: “Changes in auditory space following audiovisual experience”
Affiliations: Department of Psychology, University of Azores
Abstract: Humans are constantly engaged in the extraction of spatial cues from sound and in the calibration of the localisation estimates formed from those cues. In this talk I review several approaches to inducing auditory space calibration. Training with feedback is the most broadly implemented technique, but active training is the technique that leads to the fastest results. Experiments using active training are described and their long-term results discussed. Audiovisual experience may be the main mechanism for calibrating auditory space in day-to-day situations. It may also be the most effective, since calibration can occur after just milliseconds of experience. Two experiments are presented which show the potential of audiovisual stimulation for auditory space calibration. The effects of stimulus consistency, time, and stimulation history were tested. The timelines of adaptation and the impact of each trial on the spatial estimates are presented.
Speaker: Josefa Oberem  
Co-authors: Iring Koch, Janina Fels
Title: "Examining auditory selective attention in complex acoustic environments"
Affiliations: RWTH Aachen University
Abstract: The topic of the present collaborative project (Medical Acoustics and Cognitive Psychology) is the exploration of the cognitive control mechanisms underlying auditory selective attention. The aim is to examine the influence of variables that increase the complexity of the auditory scene with respect to technical aspects (dynamic binaural hearing with consideration of room acoustics and head movements) and that influence the efficiency of cognitive processing. Using a binaural-listening paradigm, the ability to intentionally switch auditory attention was tested in various anechoic and reverberant setups. The paradigm consists of spoken word pairs from two speakers, presented simultaneously to subjects from two of eight azimuth positions. The stimuli consisted of a single number word (1 to 9), followed by either the direction word ”UP” or ”DOWN” in German. Guided by a visual cue presented before auditory stimulus onset indicating the position of the target speaker, subjects were asked to identify whether the target number was numerically smaller or greater than five and to categorize the direction of the second word. Reproduction techniques and reverberation times were varied to analyze the influence of the reproduction method on reaction times and error rates.
Speaker: John van Opstal 
Co-authors:
Title: "Perceived Target Range Shapes Human Sound-Localisation Behaviour"
Affiliations: Radboud University, Netherlands
Abstract: The auditory system relies on binaural differences and spectral pinna cues to localise sounds in azimuth and elevation. However, the acoustic input can be unreliable, due to uncertainty about the environment, and neural noise. A possible strategy to reduce sound-location uncertainty is to integrate the sensory observations with sensorimotor information from previous experience, to infer where sounds are more likely to occur. We investigated whether and how sound localisation performance is affected by the spatial distribution of target sounds, and changes thereof. We tested three different open-loop paradigms, in which we varied the spatial range of sounds in different ways. Participants adjusted their behaviour by rapidly adapting their stimulus-response gain to the target range, both in elevation and in azimuth. Notably, gain changes occurred without any exogenous feedback about performance. Our findings are explained by a model in which the motor-control system minimises its mean absolute response error across trials.
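The model's core claim can be illustrated numerically: with noisy observations and a restricted target range, the response gain that minimises the mean absolute error is below 1 and shrinks as the range narrows. A toy grid-search version follows (all numbers hypothetical, not the fitted model from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def best_gain(target_range_deg, sensory_sd_deg, n_trials=20000):
    """Response gain minimising mean |gain * observation - target| for
    targets drawn uniformly from +/- target_range_deg and observed
    with additive Gaussian sensory noise."""
    targets = rng.uniform(-target_range_deg, target_range_deg, n_trials)
    observations = targets + rng.normal(0.0, sensory_sd_deg, n_trials)
    gains = np.linspace(0.0, 1.5, 301)
    errors = [np.mean(np.abs(g * observations - targets)) for g in gains]
    return gains[int(np.argmin(errors))]

# Narrower target ranges favour lower response gains (10° sensory noise assumed).
for r in (60, 30, 15):
    print(f"target range ±{r}°: optimal gain ≈ {best_gain(r, sensory_sd_deg=10):.2f}")
```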
Speaker: Nelli Salminen 
Co-authors:
Title: "Neural correlates of human spatial hearing measured with EEG and MEG"
Affiliations: Aalto University, Finland
Abstract: In EEG and MEG studies of human spatial hearing, there are often considerable differences between participants, but such differences are usually ignored or treated as noise. It can be difficult to tell whether inter-individual differences are related to perception and behavioral performance in spatial hearing or simply due to differences in data quality. For example, some participants may seem more sensitive to spatial location than others when measured as peak amplitudes in event-related potentials or fields, but it is not immediately clear whether this reflects genuine differences in the neural processing of spatial cues or is a consequence of better data quality and larger peak amplitudes in general. Here, I present EEG/MEG studies that address this problem by collecting both neural and psychoacoustical data on the same group of participants, on the processing of binaural cues and the precedence effect in spatial hearing. This opens up the possibility of comparing the inter-individual variability in EEG/MEG data to the variability in behavioral performance. By finding the features in brain responses that correlate with behavioral performance, it was possible to identify meaningful inter-individual differences in the neural data that could predict behavioral performance in spatial cue discrimination. Further, by taking inter-individual variability in the listening task into consideration, it was possible to identify neural correlates of the precedence effect and its buildup in MEG data, whereas group-level analyses did not reveal any significant results.
Speaker: Dan Sanes  
Co-authors:
Title: "Learning and attention enhance cortex neuron sensitivity during auditory task performance"
Affiliations: New York University
Abstract: Sensory performance can vary as a function of a subject’s skill level or attentional state. To explore the underlying neural mechanisms, we recorded telemetrically from auditory cortex as freely moving gerbils attended to, and trained on, psychometric tasks. Both task engagement and practice led to an improvement in the sensitivity of individual auditory cortex neurons. Furthermore, these neural mechanisms were diminished in animals that displayed inferior perceptual skills. For example, adolescent gerbils improved more slowly than adults on an auditory psychometric task, and their auditory cortex neurons also displayed a delayed improvement in sensitivity. Similarly, gerbils reared with hearing loss displayed poorer psychometric performance that was associated with degraded auditory cortex neuron sensitivity to the relevant stimuli during task performance. Together, these results suggest that dynamic changes to auditory cortex encoding can explain, in part, the sensory capacity of individual subjects.
Speaker: Aaron Seitz  
Co-authors:
Title: “Gamifying Perceptual Learning”
Affiliations: University of California, Riverside
Abstract: Current approaches to addressing the hearing needs of people with central auditory processing deficits are limited. At issue is that while research on perceptual learning demonstrates that a wide range of perceptual abilities can be improved with training, many of these learning effects are highly specific to the trained context. To overcome these limitations of prior work, we have been examining how training on a wider range of stimuli and in engaging contexts, such as video games, can potentially give rise to training effects that will transfer to ecological hearing conditions. In current research we are examining possible benefits of a new training game that we developed, called Listen (https://braingamecenter.ucr.edu), in both normal hearers and those with central auditory processing deficits. Listen integrates an engaging video-game design with recent knowledge from psychophysics and cognitive neuroscience, as well as principles of perceptual learning, to adaptively train participants on 1) discrimination of the direction of spectro-temporally modulated sounds designed based on the organization of receptive fields in the auditory cortex; 2) discrimination of the location of spatialized speech-like stimuli in virtual space; and 3) an auditory working-memory N-back task using speech-like tokens. In the present talk, we discuss this research approach and provide some early-stage results of this research effort.
Speaker: Filip Smolík 
Co-authors:
Title: "Adaptation and learning in early language acquisition"
Affiliations: Academy of Sciences of the Czech Republic
Abstract: Child language acquisition is very obviously a learning process. Some parts of this learning must happen rather early; even newborns recognize some properties of their native language. The talk will present three topics that show different aspects of early learning, in which adaptation to standard stimuli precedes active use of linguistic structures. The first is the sensitivity to foreign accents in infants aged 4-6 months, which shows that these children can distinguish native from nonnative productions. This demonstrates that early language learning includes adaptation to familiar auditory material. Dishabituation in response to violations may be viewed as the flip side of this adaptation. The talk will further demonstrate the responses of Czech two-year-olds to violations of Czech grammar, suggesting that they are adapted to certain structures even though they do not use them actively. Children and adults also tend to adapt to recent language experience, which surfaces as the phenomenon of syntactic priming. The talk will show how syntactic priming may be viewed as a mechanism of learning that can also serve as a means of examining the knowledge already present in children's linguistic systems.
Speaker: Brigitta Tóth 
Co-authors: Brigitta Tóth 1,3, Darrin K. Reed 1,2, Orsolya Szalárdy 3,4, István Winkler 3, Barbara Shinn-Cunningham 1,2,5
1 Center for Computational Neuroscience & Neural Technology, Boston University, Boston, USA
2 Department of Biomedical Engineering, Boston University, Boston USA
3 Institute of Cognitive Neuroscience and Psychology, Center for Natural Sciences, Hungarian Academy of Sciences
4 Institute of Behavioural Sciences, Faculty of Medicine, Semmelweis University, Budapest, Hungary
5 Carnegie Mellon Neuroscience Institute, Department of Biomedical Engineering, College of Engineering, Carnegie Mellon University
Title: "Top-down and bottom-up attention bias on change detection in auditory foreground and background"
Affiliations: Hungarian Academy of Sciences
Abstract: Listening in noisy environments is a fundamental skill for survival and social interaction. This skill depends on the ability to integrate sound elements into a meaningful object while perceptually separating it from the rest of the acoustic environment (termed figure-ground segregation, FGS). The present study aimed to identify automatic and controlled attentional mechanisms related to detecting changes in auditory objects in a cluttered sensory environment. We used acoustic stimuli composed of a repeating inharmonic tone complex (figure) that can be perceptually segregated from simultaneous randomly varying tones (background). Electrophysiological responses elicited by figure and background tones deviating in intensity (occurring with 50% pattern/trial probability) were measured. The deviant tones were either task-relevant (active listening) or listeners (N=16) performed a visual working-memory task (passive listening). In separate stimulus blocks of the active listening condition, listeners were asked to selectively attend and report changes attributed either to the figure or to the background. Cortical sources were reconstructed from high-density EEG. Event-related responses (N200 and P300) were evaluated in five regions of interest in each hemisphere. Listeners performed better at detecting changes in the figure than in the background. In the active but not the passive listening condition, the intensity changes elicited both an N200 and a P300. The deviance-related evoked responses were stronger for figure than for background target tones. The P300 amplitude was higher for attended relative to unattended deviants, and this effect was stronger for figure than for background deviants. Our results suggest that 1) deviations in an object of a complex scene do not automatically capture attention, and 2) controlled attention has larger effects on coherent sound patterns (the figure) than on the background, supporting object-based theories of attention.
Speaker: Beverly Wright  
Co-authors:
Title: "Auditory perceptual learning"
Affiliations: Northwestern University
Abstract: Performance on many perceptual tasks improves with practice, indicating that our sensory systems are not rigid but rather can be changed through experience. My coworkers and I have been investigating the factors that induce and those that prevent perceptual learning on auditory skills, including how those factors change with age and are affected by sensory and cognitive disorders. Conclusions drawn from learning on fine-grained auditory discrimination tasks have held for visual and speech learning, suggesting that common principles are at play across multiple domains. Knowledge of these issues will lead to more effective perceptual training strategies to aid rehabilitation and promote skill enhancement.
