
Contributed talks and posters – 2019:

Speaker: Maike Ferber 1,2,3 
Co-authors: Bernhard Laback 2, Aaron Seitz 4, Norbert Kopčo 3
Title of the talk: "Towards Developing a Discrimination Task that Induces Reweighting of Binaural Localization Cues"
Affiliations: 1 University of Vienna, Austria; 2 Acoustics Research Institute, Austrian Academy of Sciences; 3 Pavol Jozef Šafárik University in Košice, Slovakia; 4 University of California, Riverside, USA
Abstract: Background: Adaptation to altered sound localization cues has been extensively studied, highlighting the plasticity of the auditory system. This adaptation can result either from the establishment of a new spatial map of the altered cues or from a stronger relative weighting of unaltered compared to altered cues, referred to as reweighting. A recent study in our lab showed that reweighting of the two binaural cues, interaural time difference (ITD) and interaural level difference (ILD), can be achieved with localization training in virtual reality. We now seek to develop a simpler training method to make the training accessible to a wide range of listeners. Methods: A series of pilot experiments was performed, including 3 days of training using a two-alternative forced-choice staircase procedure for left/right discrimination. Stimuli were 500-ms narrow-band white noise bursts (one octave bandwidth, geometrically centered at 2.8 kHz) containing various combinations of spatially inconsistent ITDs and ILDs. Feedback (correct/incorrect) always followed the ILD location. Across the pilot experiments, we manipulated the reweighting-assessment task (localization task in virtual reality vs. relative discrimination task), the training task (absolute vs. relative discrimination), the cue disparities used in the training, the adaptive procedure (2-down 1-up vs. 3-down 1-up), and whether or not incorrect responses led to a repetition of the auditory stimulus combined with the correct response shown on the screen, to determine the optimal parameters for a discrimination task that induces ITD/ILD reweighting. Results: In all pilot experiments, discrimination thresholds did not differ significantly between the training sessions. With respect to the pre/post assessment, the best results were obtained in the pilot experiment consisting of a relative discrimination assessment and relative discrimination training using a 2-down 1-up adaptive procedure in which incorrect responses led to a repetition of the auditory stimulus.
Conclusion: Factors contributing to the promising results of the final pilot experiment likely include that the assessment and training tasks were the same (i.e., relative discrimination), ensuring that the assessment task was practiced sufficiently and no transfer between different tasks was needed, and that the repetition of the auditory stimulus after incorrect trials, combined with the correct response shown on the screen, allowed for bottom-up multisensory integration. Even though the effect depends on the specific parameters, these results suggest that binaural cue reweighting can also be achieved with a simple discrimination task.
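To make the adaptive procedure concrete, the sketch below implements a generic 2-down 1-up staircase for a left/right ILD discrimination task, the procedure named in the abstract. The simulated observer, starting ILD, step size, and stopping rule are illustrative assumptions, not the authors' actual parameters.

```python
# Minimal sketch of a 2-down 1-up adaptive staircase (converges to ~70.7%
# correct). All numeric values and the observer model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulated_observer(ild_db, threshold_db=2.0):
    """Return True (correct) with a probability that grows with the ILD."""
    p_correct = 0.5 + 0.5 / (1.0 + np.exp(-(ild_db - threshold_db)))
    return rng.random() < p_correct

ild = 8.0                # starting ILD in dB (assumed)
step = 2.0               # step size in dB (assumed)
correct_run = 0          # consecutive-correct counter
reversals = []
last_direction = None

while len(reversals) < 8:                  # stop after 8 reversals (assumed)
    if simulated_observer(ild):
        correct_run += 1
        if correct_run == 2:               # two correct in a row -> harder
            correct_run = 0
            if last_direction == +1:       # direction changed: log a reversal
                reversals.append(ild)
            ild = max(ild - step, 0.5)
            last_direction = -1
    else:                                  # any incorrect -> easier
        correct_run = 0
        if last_direction == -1:
            reversals.append(ild)
        ild += step
        last_direction = +1

print(f"threshold estimate: {np.mean(reversals):.2f} dB ILD")
```

Averaging the ILD values at the reversal points is one common way to read a threshold off such a track.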
Speaker: Anna Arkhipova  
Co-authors: Pavel Hok 1, Jan Valošek 1,2, Gabriela Všetičková 3, Vít Zouhar 3, Petr Hluštík 1
Title: Search for brain plasticity after creativity training with music composing: the “Different Hearing” project
Affiliations: 1 Department of Neurology, Palacký University Olomouc, Czech Republic 2 Department of Biomedical Engineering, University Hospital Olomouc, Czech Republic 3 Department of Music Education, Palacký University Olomouc, Czech Republic
Abstract: The Different Hearing project (Slyšet jinak) is an alternative music education programme that stimulates pupils’ creativity through music composition in the classroom. In the workshop, participants are trained to discover a new sonic world created by their own bodies/voices and by instruments made from everyday objects, as well as to improvise and compose music, then create graphic scores and perform them. We hypothesized that this short-term intense workshop would induce plastic changes in the brain systems engaged in music perception and music creativity, especially in their network connectivity and response to diverse auditory stimuli, along with behavioural effects. In our study, 22 healthy university students participated in the workshop over two days and underwent fMRI examinations twice, before and after the workshop, while 24 students were scanned as a control group. Besides resting-state fMRI and DWI data, task-related BOLD fMRI was obtained while each subject listened to musical and non-musical sound samples and then pressed a button (like/dislike) for each sample. Using a paired-sample t-test on the button-pressing task, we observed that favourable responses to non-musical sound samples increased significantly only in the active group. The fMRI data are being analyzed using ANOVA with F-tests and ROI analysis. The study was supported by an OpenAccess grant of Czech-Bioimaging (LM2015062) to the MAFIL core facility of CEITEC, MUNI, Brno.
Speaker: I-Fan Lin  
Co-authors: Takashi Itahashi, Makio Kashino, Nobumasa Kato, Ryu-ichiro Hashimoto
Title: "Understanding the nature of speech processing in autism: insights from imaging studies for acoustically degraded speech"
Affiliations: Shuang Ho Hospital, Taiwan
Abstract: Individuals with autism spectrum disorders (ASD) are known for impaired communication, which in turn affects their social learning and cooperation. Many individuals with ASD report distress when trying to join social conversations because of difficulties understanding speech in background noise. Previous speech-in-noise neurophysiology and neuroimaging studies with clear speech did not reveal how individuals with ASD perceive degraded speech differently. In the present study, we measured brain activity and functional connectivity while participants listened to clear speech (CS), 8-channel noise-vocoded speech (VS), and rotated VS (RVS). Twenty-one adult males with ASD and 24 age-matched neurotypical males (NT) participated in this study; all were right-handed. During MRI scanning, participants listened to sentences and judged their intelligibility. Functional connectivity analyses were conducted using generalized psychophysiological interaction (gPPI) analysis. The effect of acoustic degradation was measured by the difference between CS and VS, and showed no significant group difference. The effect of intelligibility was measured by the difference between VS and RVS; compared to the NT group, the ASD group exhibited increased cortical activation in the right inferior frontal cortex (IFC). Furthermore, activation in the right IFC/insula was correlated with their understanding of the tested VS sentences. The effect of task difficulty was measured by the difference between VS and the mean of CS and RVS (VS - (CS+RVS)/2), and showed no significant group difference. However, in the functional connectivity analysis, group comparisons revealed that, compared to the NT group, the ASD group exhibited reduced FC between the left dorsal premotor cortex and the left temporoparietal junction for the effect of task difficulty. Furthermore, in the ASD group, this reduced FC was negatively correlated with Autism-Spectrum Quotient scores. In summary, this study shows that individuals with ASD have abnormal frontal lateralization and an abnormal dorsal stream for speech processing. While the abnormal frontal lateralization might be related to speech intelligibility, the reduced functional connectivity in the dorsal stream for speech processing was related to symptom severity.
Speaker: Ryan Horsfall  
Co-authors: Sophie Wuerger, Georg Meyer
Title: Reductions in temporal binding window size following audio-visual training are not generalised across visual intensities
Affiliations: University of Liverpool, UK
Abstract: The temporal binding window (TBW) represents the range of offsets over which two stimuli will be combined into a single percept. This window is wider in individuals with developmental disorders, causing deficiencies in speech perception (Stevenson et al., 2017). It is possible to train individuals to reduce the size of this TBW (Powers, Hillock & Wallace, 2009), indicating this could be used to improve audiovisual perception in non-typically developing individuals. 32 observers (aged 18-28) performed a simultaneity judgement task with flash/bleep stimuli (100 ms), with interleaved trials of varying stimulus onset asynchronies (-300 ms AV to +300 ms VA) and two flash intensities (0.02 cd/m² and 1.34 cd/m²). Individuals then completed a training phase at one of the two flash intensity levels, in which they were given feedback after every response, and were then retested. Whilst no significant effects were found for the dim training group, the results show that training with the bright-intensity stimuli caused a significant reduction (>225 ms) in TBW size for the bright stimuli. This improvement did not, however, transfer to dim stimuli. The results highlight the lack of performance transfer following training, and cast doubt on the utility of training non-typically developing individuals to reduce their TBW size.
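The abstract does not state how TBW size was quantified; one common approach is to fit a Gaussian to the proportion of "simultaneous" responses across SOAs and take a width measure of the fitted curve. A minimal sketch of that approach, with made-up data:

```python
# Sketch of estimating a temporal binding window from simultaneity-judgement
# data. The data points and the Gaussian model are illustrative assumptions,
# not the authors' actual fitting procedure.
import numpy as np
from scipy.optimize import curve_fit

soas = np.array([-300, -200, -100, 0, 100, 200, 300])            # ms
p_simult = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.40, 0.15])  # fake data

def gaussian(soa, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simult, p0=[1.0, 0.0, 100.0])

# One convention: TBW = full width at half maximum of the fitted curve.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
print(f"PSS: {mu:.1f} ms, TBW (FWHM): {fwhm:.1f} ms")
```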
Speaker: Maria Czarnecka 1  
Co-authors: Katarzyna Rączy 1, Jakub Szewczyk 1, Małgorzata Paplińska 2, Guido Hesselmann 3, André Knops 4, Marcin Szwed 1
Title: "High MVPA decoding accuracy and tactile-to-visual priming for tactile Braille numbers in the Intraparietal Sulcus"
Affiliations: 1 Department of Psychology, Jagiellonian University, Krakow, Poland 2 The Maria Grzegorzewska University, Warsaw, Poland 3 Department of General and Biological Psychology, Psychologische Hochschule Berlin, Berlin, Germany 4 Laboratory for the Psychology of Child Development and Education, University Paris Descartes, Paris, France
Abstract: The Intraparietal Sulcus (IPS) plays a key role in processing abstract numbers. According to the "triple-code theory" (Dehaene, 1992), it contains a modality-independent magnitude code, which co-exists with modality-specific codes in the visual Arabic and auditory verbal domains. Using behavioural, fMRI priming, and MVPA decoding techniques, we investigated how numbers are coded in the tactile domain (Braille). A unique group of 25 sighted Braille readers underwent a 9-month general Braille course and a 3-week number recognition course, reaching an intermediate level of Braille fluency similar to that of second-grade children. In priming experiments (Rączy et al., under review), subjects performed a primed naming task. The primes were either tactile Braille digits or number words; the targets were visually presented Arabic digits. Analyses revealed a V-shaped priming function for both tactile-to-visual and visual-to-visual formats, limited to identity priming (e.g., 2 primes 2). This type of priming (without proximity priming, e.g., 3 priming 2) suggests a shared modality-independent phonological code. fMRI priming revealed robust repetition suppression within the left IPS, suggesting participants’ engagement in direct grapheme-phoneme conversion. IPS activations may thus reflect not an abstract semantic code, but a phonological code. In an fMRI multi-voxel pattern analysis (MVPA) experiment, the same participants were presented with numerosities in tactile abstract, visual abstract, and visual non-abstract formats. Similar to previous studies (e.g., Bulthé et al., 2014), abstract visual numbers had low decoding accuracy, which was previously interpreted as symbolic numbers being mapped onto a subset of neurons tuned to a corresponding non-symbolic representation. This interpretation suggested that non-symbolic numbers are more broadly represented in neuronal populations and therefore easier to decode. Here, we found that tactile abstract numbers were robustly decodable in parietal regions. This suggests that the low accuracy for visual stimuli was due to their visual nature and/or overtraining, not to their abstract nature itself.
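As a rough illustration of the MVPA decoding referred to above, the sketch below runs a cross-validated linear classifier on trial-by-voxel patterns. The data shapes, classifier choice, and cross-validation scheme are generic assumptions, not the authors' pipeline.

```python
# Minimal sketch of cross-validated MVPA decoding of number identity from
# ROI voxel patterns. The random data stand in for beta estimates per trial.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
X = rng.normal(size=(n_trials, n_voxels))   # trial-by-voxel patterns (fake)
y = rng.integers(0, 4, size=n_trials)       # four number classes (assumed)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5)      # 5-fold cross-validation
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.25)")
```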
Speaker: Martin Lindenbeck  
Co-authors: Piotr Majdak, Bernhard Laback
Title of the talk: "Stimulation Paradigms in Electric Hearing: Past, Present, and Future"
Affiliations: Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna
Abstract: Cochlear implants (CIs) are an increasingly common treatment for profound hearing loss or deafness. CI electrode arrays directly stimulate the auditory nerve, thus bypassing peripheral auditory processing. The acoustic signal is converted by a sound processor into electric pulse sequences, and so-called stimulation paradigms define the method of conversion, determining the acoustic cues represented in electric hearing. Structural limitations of CIs have required past and present stimulation paradigms to focus on certain cues, in particular speech, while implicitly discarding others, in particular interaural time differences (ITDs) and temporal pitch. In this talk, we will introduce the reasons for the limitations of current stimulation paradigms and provide an outlook on how the encoding of spatial and pitch cues can be improved in future paradigms. Based on preceding psychophysical work, we will discuss a novel approach that aims to enable improved ITD and temporal-pitch perception by strategically inserting extra pulses into standard electric pulse sequences.
Author: Ali Yoonessi 1  
Co-authors: Elaheh Shahmiri 2, Khazar Ahmadi 3
Title: "Visual and Auditory Impairments as Early Biomarkers for Clinical Applications"
Affiliations: 1 Neuroscience Department, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences 2 Researcher, Baycrest, University of Toronto 3 Ph.D. candidate, Otto-von-Guericke University
Abstract: Evidence has accrued that the visual and auditory systems show characteristic patterns of malfunction in multiple sclerosis, schizophrenia, Parkinson’s and Alzheimer’s diseases, and several other disorders. In addition, some studies suggest that these sensory impairments start early in the pathological process of the disease and can be used as an early biomarker. This is of particular importance for diseases such as Alzheimer’s disease, where treatments can only decelerate the progression. We measured Auditory Steady-State Response (ASSR) thresholds in 12 Alzheimer’s Disease (AD), 15 Mild Cognitive Impairment (MCI), and 15 control subjects. Three carrier frequencies (500, 1000, and 2000 Hz) with two modulation frequencies (40 and 80 Hz) were used for both ears. Significant differences between the normal and AD groups were observed in the ASSR thresholds at both modulation rates (40 and 80 Hz), at all three carrier frequencies, and in both ears. In addition, the control and MCI groups differed significantly in the 2000 Hz thresholds, and the AD and MCI groups in the 500 Hz thresholds, at both modulation rates. We have also previously shown that dyslexic patients are impaired in visual processing compared to control groups. These findings support the clinical importance of auditory and visual processing. Inexpensive screening methods can potentially be developed for early diagnosis and monitoring of various disorders.
Author: Ondrej Spišák 1  
Co-authors: René Šebeňa 1, Peter Lokša 1, Maike Ferber 2, Bernhard Laback 2, Norbert Kopčo 1
Title: "Vision-based Adaptation of the Frequency-dependent Weighting of the Localization Cues"
Affiliations: 1 Institute of Computer Science, P. J. Šafárik University, Šrobárova 2, 041 80 Košice, Slovakia 2 Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040 Vienna, Austria
Abstract: Which cues the auditory system uses to determine the location of a sound source largely depends on the sound’s frequency content. For low-frequency (LF) narrowband sounds, the interaural time difference (ITD) is the dominant cue, while for high-frequency (HF) narrowband sounds, the interaural level difference (ILD) dominates. For mid-frequency narrowband sounds, ITD and ILD both contribute, to varying degrees, to the perceived location. We performed an experiment testing whether visually guided training can change the spectral weighting of either the HF or the LF components of broadband stimuli, in separate subject groups. We also tested whether this reweighting would generalize to a change in the ITD/ILD weighting for mid-frequency sounds. In the group trained on HF, the training resulted in an increase in the HF weight, but no effect was found in the LF group. However, the change in spectral weighting in the HF group did not generalize to an increase in the relative weighting of the ILD cue for mid-frequency sounds. Thus, the reweighting appears to be purely spectral rather than binaural-cue specific.
Author:  Eleni Vlahou  1,2,3  
Co-authors: Aaron Seitz 2, Norbert Kopco 1
Title: "Gamifying Perceptual Training in Complex Listening Environments"
Affiliations: 1 Institute of Computer Science, P. J. Šafárik University 2 University of California, Riverside 3 University of Thessaly, Greece
Abstract: Laboratory-based auditory and speech training programs typically focus on overt categorization and discrimination tasks, with explicit performance feedback delivered after each trial. These training conditions do not resemble realistic listening experiences and potentially do not tap into the same mechanisms that are activated during perceptual learning outside the lab. In an attempt to employ more ecologically valid training environments, recent studies have used engaging videogames in which learning emerges in an unsupervised manner through the interplay between stimulus statistics, task demands, and reward-based schedules. Following this approach, here we demonstrate a preliminary version of two first-person controller videogame prototypes that aim to improve (a) nonnative perception of difficult phonetic categories (“Alien Shooter” game) and (b) native speech perception in challenging listening environments, i.e., in simulated crowded scenes with multiple speakers, noise, and reverberation (“Cocktail Party” game). The speech stimuli are nonsense syllables from native and nonnative speakers, presented in quiet and in simulated rooms with varying levels of reverberation. During training, several online measures of performance on game metrics (e.g., score, level, reaction time) are collected. Our central goal is to deliver immersive training environments that promote incidental learning of the trained material and generalization of learning to untrained contexts.
Author: Rene Sebena  
Co-authors: Norbert Kopco
Title: "Electrophysiological correlates of attentional cueing and auditory spatial discrimination"
Affiliations: Department of Psychology, Faculty of Arts, PJ Šafárik University, Moyzesova 9, 040 59 Košice; Institute of Computer Science, Faculty of Science, PJ Šafárik University, Jesenná 5, 040 01 Košice
Abstract: We performed behavioral and EEG experiments to examine whether directing automatic auditory spatial attention affects listeners’ performance and how neuronal activity changes during task performance (Kopco, N., Sebena, R., Hrebenarova, B., Ahveninen, J., Best, V., & Shinn-Cunningham, B., 2018, “Visual vs. auditory attentional cueing and auditory spatial discrimination”, Cognitive Neuroscience Society). We found better performance following a visual cue than following an auditory cue, mainly driven by a decrease in performance when the auditory cue was presented from an incongruent location. Analysis of target-elicited ERPs showed that the amplitudes of the late auditory components covary with the observed behavioral performance. The current study examines the ERPs elicited by the cue sounds. Specifically, it focuses on two components, an early N1 and a late Auditory-evoked Contralateral Occipital Positivity (ACOP). First, it evaluates whether the auditory-evoked N1 component elicited by a lateralized cue is larger than the N1 elicited by a frontal cue over the hemisphere contralateral to the sound location, while this component is expected not to be modulated by attention. Second, it examines whether the ACOP, previously associated with automatic attentional processes, can predict the correctness of the behavioral responses on a trial-by-trial basis, while the signal is expected not to be modulated by cue location. Preliminary results suggest that the N1 activation follows the predicted behavior, while the ACOP does not, possibly due to the fact that the cues were not always lateral in this study, as was the case in the previous ACOP-related studies.
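One simple way to test trial-by-trial prediction of response correctness from a single-trial ERP amplitude, as proposed for the ACOP above, is cross-validated logistic regression. The sketch below uses simulated amplitudes and an assumed weak coupling; it is not the study's actual analysis.

```python
# Sketch: does a single-trial ERP amplitude predict response correctness?
# Simulated amplitudes and effect size are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 300
acop = rng.normal(size=n_trials)                       # single-trial amplitude
p_correct = 1.0 / (1.0 + np.exp(-(0.8 * acop + 0.5)))  # assumed weak coupling
correct = rng.random(n_trials) < p_correct             # simulated behavior

model = LogisticRegression()
auc = cross_val_score(model, acop.reshape(-1, 1), correct, cv=5,
                      scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} (0.5 = no prediction)")
```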
Author: Gabriela Andrejková   
Co-authors: Norbert Kopčo
Title: "Modeling the temporal profile of contextual plasticity"
Affiliations: Perception and Cognition Laboratory, Institute of Computer Science, P. J. Šafárik University in Košice
Abstract: Contextual plasticity (CP; Kopčo et al., 2007) is a form of spatial auditory plasticity observed in localization experiments in which distractor-target click pairs with a fixed distractor location (the context) are interleaved with target-alone trials. CP is observed as a bias in the localization of the target-alone clicks of up to 10° in the direction away from the distractors (which are not presented on these trials). This adaptation operates on a time scale of seconds to minutes. Here we present and analyze the build-up of CP using linear and exponential models. The models are fitted to data in which distractor location (frontal vs. lateral), context distractor type (single click vs. multiple clicks), target location (near vs. far from distractor), and environment (anechoic vs. reverberant) are manipulated. The linear models describe the data as a combination of a fast onset adaptation followed by a slow drift in responses. The modeling results show that contextual plasticity depends on all the evaluated factors, and that the fast and slow components are affected differently by them. Thus, contextual plasticity is likely the result of a combination of multiple adaptive processes on different temporal scales.
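A minimal sketch of the two model families named above, fitted to a synthetic build-up curve; the data and parameter values are illustrative assumptions, not the study's actual fits.

```python
# Fit a linear model (fast onset plus slow drift) and a saturating
# exponential to a synthetic contextual-plasticity build-up curve.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 61)                              # trial index (assumed)
true_bias = 8.0 * (1.0 - np.exp(-t / 10.0))       # deg, synthetic build-up
data = true_bias + np.random.default_rng(0).normal(0, 1.0, t.size)

def linear_model(t, onset, drift):
    """Fast onset step followed by a slow linear drift."""
    return onset + drift * t

def exp_model(t, asymptote, tau):
    """Exponential approach to an asymptotic bias with time constant tau."""
    return asymptote * (1.0 - np.exp(-t / tau))

p_lin, _ = curve_fit(linear_model, t, data)
p_exp, _ = curve_fit(exp_model, t, data, p0=[5.0, 5.0])
print(f"exponential fit: asymptote = {p_exp[0]:.1f} deg, tau = {p_exp[1]:.1f}")
```

Comparing the quality of the two fits (e.g., via residuals) is one way to separate the fast and slow components described in the abstract.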
Author: Keerthi Doreswamy 1, 2  
Co-authors: Jyrki Ahveninen 2, Zoltan Szoplak 1, Norbert Kopčo 1,2
Title: "DRR-ILD Cues Weighting in Auditory Distance Perception"
Affiliations: 1 Institute of Computer Science, P. J. Šafárik University, Košice, 04001, Slovakia 2 Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown MA 02129
Abstract:
Author: Julie Kirwan  
Co-authors: Julia Rehmann 3, Peter Derleth 3, Anita Wagner 1,2, Deniz Baskent 1,2
Title: Pupillary Correlates of Auditory Emotion Recognition in Hearing-Impaired Listeners
Affiliations: 1 University Medical Center Groningen, Department of Otorhinolaryngology, Groningen, Netherlands 2 University of Groningen, Behavioural and Cognitive Neurosciences, Groningen, Netherlands 3 Sonova AG, Laubisruetistrasse 28, 8712 Staefa, Switzerland
Abstract: Hearing-impaired (HI) individuals have been shown to perform worse in auditory emotion recognition tasks compared to normal-hearing individuals. It is still unclear whether this is due to processing at low auditory levels or to the categorisation of emotions involved in an experimental task (Picou et al., 2018). An index of emotion recognition can be observed in pupil dilations, which have recently been shown to be larger for emotionally meaningful speech than for emotionally neutral speech (Jürgens, Fischer and Schacht, 2018). We fitted 8 older HI participants, who had moderate to severe sloping high-frequency hearing loss, with hearing aids with frequency lowering enabled for an acclimatisation period of 3-6 weeks. We recorded their pupil dilations in response to emotional speech with and without frequency lowering, during a passive-listening condition, both before and after the acclimatisation period. We also recorded their pupil dilations during an active-listening condition, which included a behavioural emotion identification task, after the acclimatisation period. We present here insights into the pupillary correlates of vocal emotion recognition in the HI population, and into the impact of frequency lowering, and of the cognitive involvement elicited by the experimental situation, on pupil dilation and emotion recognition capabilities in this population.
Author: Maksymilian Korczyk  
Co-authors: Maria Zimmermann 1, Łukasz Bola 1,2, Marcin Szwed 1
Title: "Musicians’ superior auditory and visual rhythm discrimination is not related to cross-modal neuroplasticity in auditory cortex"
Affiliations: 1 Department of Psychology, Jagiellonian University, 30-060 Krakow, Poland 2 Cognitive Neuropsychology Laboratory, Harvard University, Cambridge, MA 02138, USA
Abstract: Cross-modal brain reorganization is possible not only following sensory deprivation (e.g., deafness) but also after intensive training (e.g., in pianists), and can lead to superior sensory processing. Congenitally deaf individuals recruit their auditory cortex for visual rhythm processing (Bola et al., PNAS, 2017). We examined whether similar cross-modal plasticity can be observed in expert musicians. 17 professional pianists and 20 non-musicians participated in an fMRI study during which they discriminated between sequences (rhythms) presented in the visual (flashes) or auditory (beeps) modality. In the control condition, the same flashes/beeps were presented at a constant pace. In an additional condition, participants were asked to imagine rhythms. Musicians performed both the visual and the auditory rhythmic tasks better than non-musicians. fMRI revealed that, compared to the control condition, the visual task recruited the right-hemisphere auditory cortex in musicians. However, a weaker but similar activation was also observed in non-musicians for the same contrast. Comparison of the two groups revealed no significant between-group effects in the auditory cortex, only increased activation in the right angular gyrus for musicians vs. non-musicians. We conclude that the musicians’ superior rhythm discrimination is not related to cross-modal neuroplasticity in the auditory cortex, but most likely to plasticity of higher cognitive functions. References: Bola, Ł., Zimmermann, M., Mostowski, P., Jednoróg, K., Marchewka, A., Rutkowski, P., Szwed, M. (2017). Task-specific reorganization of the auditory cortex in deaf humans. Proceedings of the National Academy of Sciences USA, 114, E600-E609.
Author: Shiran Koifman  
Co-authors: Stuart Rosen
Title: "Switching attention and integration of binaural information: effects of masker types,binaural listening, and speech materialon the perception of alternated and interrupted speech"
Affiliations: University College London, UK
Abstract: Over several studies, I investigated the general utility of a task shown to be highly sensitive to aging for speech maskers when compared with a standard speech-in-noise task. A masker is interrupted and alternated between the ears out of phase with an interrupted target speech, resulting in alternated segments of both target and masker signals between the two ears, with only one stimulus present in each ear at any given time. This task appears to demand higher-level cognitive aspects of listening not probed by simpler tasks, as it requires the ability to switch attention and integrate short-term auditory information between the two ears. To examine the effect of masker type, listeners were presented with simple ‘everyday’ sentences in three types of maskers: unrelated connected speech, and two non-speech maskers which were extracted from the original speech maskers, with high or low “speech-like” characteristics. Binaural advantage was examined by comparing performance in two listening configurations: (1) binaural, where the stimuli are fully preserved when the switched segments from both ears are combined, and (2) monaural, where only the information in one ear is presented. Lastly, the influence of speech material was explored by comparing performance with CRM-like sentences. A fuller understanding of the abilities exploited by this task is useful in helping to disentangle the reasons why different groups of people experience difficulty listening in noisy situations. This work was supported by Action on Hearing Loss, UK.
Author: Peter Loksa  
Co-authors: Norbert Kopco
Title: "Modeling the mixed reference frame of the ventriloquism aftereffect"
Affiliations: Safarik University Kosice
Abstract: The ventriloquism aftereffect (VA) is observed as a shift in the perceived locations of auditory stimuli, induced by the repeated presentation of audiovisual signals with incongruent locations of the auditory and visual components. Since the two modalities use different reference frames (RFs), audition being head-centered (HC) while vision is eye-centered (EC), the representations have to be aligned. A previous study examining the RF of the VA found inconsistent results: the RF was a mixture of HC and EC for VA induced in the center of the audiovisual field, while it was predominantly HC for VA induced in the periphery [Lin et al., JASA 121, 3095, 2007]. In addition, the study found an adaptation in the auditory space representation even for congruent AV stimuli in the periphery. Here, a computational model examines the origins of these effects. The model assumes that multiple stages of processing interact: 1) the stage of auditory spatial representation (HC), 2) the stage of saccadic eye responses (EC), and 3) some stage at which the representation is mixed (HC+EC). The observed results are most consistent with the suggestion that the neural representation underlying spatial auditory plasticity incorporates both HC and EC auditory information, possibly at different processing stages.
Author: Timo Oess
Co-authors: Marc O. Ernst, Heiko Neumann
Title: "Monoaural and Binaural Sound Source Localization in the Median Plane"
Affiliations: Ulm University
Abstract: To localize the vertical elevation of a sound source, the auditory system compares the spectrum of a perceived sound with a previously learned map of elevation spectra. Whether this process is initiated separately for the signals of the left and right ear, or whether it is already binaural at the stage of map formation, is as yet unresolved. We present a model of sound source elevation localization. HRTFs from the CIPIC database are used to generate a binaural data set of various sound types and elevations. Based on these data, an averaged map of sound spectra is constructed by filtering and integrating the binaural input signals. Monaural and binaural input signals can then be localized by searching for the elevation with maximum correlation between the input and the map. Experiments with binaural inputs show that our model accurately localizes sound types with different spectra. In the case of monaural signals, localization fails for unfamiliar inputs. However, if a priori information about a sound-type-specific spectral mean augments the monaural signals, localization performance is restored. We suggest that vertical sound source localization is fundamentally binaural but can cope with monaural inputs if sound model information generated from previous perceptions is added.
Acknowledgements: Funded by the Baden-Württemberg Stiftung
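The core of the model, localization by maximum correlation between an input spectrum and a learned elevation map, can be sketched in a few lines. Here a random matrix stands in for the map that would be constructed from CIPIC-HRTF-filtered training data; all dimensions and values are illustrative assumptions.

```python
# Sketch of correlation-based elevation localization: pick the elevation
# whose stored spectrum best matches the input spectrum.
import numpy as np

rng = np.random.default_rng(0)
elevations = np.arange(-45, 91, 5)          # candidate elevations in deg
n_bands = 32                                # spectral bands after filtering
spectral_map = rng.normal(size=(elevations.size, n_bands))  # learned map

def localize(input_spectrum, spectral_map, elevations):
    """Return the elevation whose map entry correlates best with the input."""
    corrs = [np.corrcoef(input_spectrum, row)[0, 1] for row in spectral_map]
    return elevations[int(np.argmax(corrs))]

# A familiar input: a noisy copy of the spectrum stored at +30 deg.
target_idx = int(np.where(elevations == 30)[0][0])
probe = spectral_map[target_idx] + 0.3 * rng.normal(size=n_bands)
print(f"estimated elevation: {localize(probe, spectral_map, elevations)} deg")
```

Augmenting a monaural input with a sound-type-specific spectral mean, as the abstract describes, would amount to adding that prior to `probe` before the correlation step.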
Author: Henri Pöntynen  
Co-authors: Nelli Salminen
Title: "Using electroencephalography to characterize binaural processing of random chord stereograms in the human brain"
Affiliations: Aalto Acoustics Lab, Department of Signal Processing and Acoustics, Aalto University, Espoo, Finland
Abstract: Random chord stereograms (RCS) are a class of binaural stimuli that exploit spectrotemporal variations in the interaural envelope correlation of noise-like sounds with diotic fine structure. These stimuli evoke salient, binaurally derived auditory effects that are not perceivable under monaural listening. Here, our aim was to assess the usability of these stimuli for studying binaural processing in the human brain. To this end, we recorded EEG responses to RCS stimulus variants from 12 normal-hearing human subjects. The stimuli consisted of a 3-s noise segment lacking interaural envelope correlation followed by another 3-s segment with periodic interaural envelope correlation manipulations. The envelope correlations were varied between 0 and 1 across the entire frequency range of the stimuli (0.1-10 kHz) at rates of 3 and 5 Hz. Perceptually, these variations resulted in a salient beating effect at the two ears. In addition, we measured responses to a ripple stimulus in which the interaural envelope correlation was manipulated in shifting frequency bands according to a 3-Hz spectrotemporal ripple. This induced the percept of a descending spectral ripple in the noise stimulus. Average event-related potentials and inter-trial phase coherence analyses showed that EEG responses at the vertex electrode (Cz) synchronized to the rate of switching between interaural envelope correlations of 0 and 1. For the ripple stimuli, the transition from noise to the segment containing the correlation ripple induced a cN1-cP2 complex in the EEG response, but the subsequent steady-state response did not synchronize to the repetition rate of the ripple.
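For reference, inter-trial phase coherence (ITPC) at a given frequency is the magnitude of the mean unit phase vector across trials, ITPC = |(1/N) Σ_n exp(i·φ_n)|. The sketch below computes it for simulated trials phase-locked at 3 Hz; the simulated data, filter band, and sampling rate are illustrative assumptions, not the study's analysis parameters.

```python
# Sketch of an ITPC computation on simulated single-channel EEG epochs:
# bandpass around the stimulation rate, extract instantaneous phase, then
# average unit phase vectors across trials.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs, dur, f_stim = 250, 3.0, 3.0          # sampling rate (Hz), duration (s), rate
t = np.arange(0, dur, 1 / fs)
# 40 trials: a phase-consistent 3-Hz component buried in noise.
trials = np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 2.0, (40, t.size))

b, a = butter(4, [2.0, 4.0], btype="bandpass", fs=fs)   # band around 3 Hz
filtered = filtfilt(b, a, trials, axis=1)
phases = np.angle(hilbert(filtered, axis=1))            # phase per trial
itpc = np.abs(np.mean(np.exp(1j * phases), axis=0))     # coherence over trials
print(f"mean ITPC: {itpc.mean():.2f} (0 = random phase, 1 = perfect locking)")
```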
Author: Maria Zimmermann  
Co-authors: Ł. Bola 2, K. Jednorog 3, A. Marchewka 3, M. Szwed 1
Title: "Plasticity in the auditory cortex of the deaf: retaining task-specific purposes or pluripotential acquisition of a new attentional area?"
Affiliations: 1 Jagiellonian University, Cracow, Poland, 2 Harvard University Cambridge, Massachusetts, USA, 3 Nencki Institute of Experimental Biology, Polish Academy of Science, Warsaw, Poland
Abstract: Previous studies (e.g., Bola et al., PNAS 2017) suggest that the auditory cortex of deaf individuals preserves its task-specific function (i.e., rhythm processing) despite switching to a different sensory modality (vision). An alternative possibility, however, is that visual activations in the auditory cortex indicate that it in fact acquires a new cognitive function: attention. To distinguish between these two hypotheses, we performed a pilot fMRI study on three congenitally deaf participants with four different visual tasks: a luminance discrimination task with or without temporal content, a face/house recognition task, a spatial pattern (checkerboard) image discrimination task, and a temporal/spatial sequence comparison. We found that only the spatial pattern recognition task, which had a very low attentional load, did not activate the auditory cortex. All three remaining tasks activated very similar auditory areas (right posterior STG). Our pilot suggests that the auditory cortex in the deaf may not retain its task-specific function but may instead become a secondary attentional area.
Author: Lenka Štěpánková 1, 3  
Co-authors: Tomáš Urbánek 1,2
Title: "Hidden figures as a test of spatial cognitive ability"
Affiliations: 1 Masaryk University, Faculty of Arts, Department of Psychology 2 The Institute of Psychology of Academy of Science of Czech Republic 3 Masaryk University, Faculty of Social Studies, Department of Psychology
Abstract: The reported results derive from the author's dissertation, whose main goal was to find out whether a test based on Herman Witkin's original work on the field dependence/independence (FD/FI) cognitive style and on Witkin's Embedded Figures Test (EFT) could serve as a test of spatial cognitive ability. The test battery consisted of the Hidden Figures Test, which was based on the EFT; the Mental Rotation Test, which is considered a standardized test of spatial ability; and a control perception task of color categorization. In the Hidden Figures Test, a participant is presented with a simple figure and a complex design (containing the simple figure); the task is to find the figure in the complex design as fast as possible. The results showed a significant correlation between the Mental Rotation Test and the Hidden Figures Test, while the correlations between these two tests and the control perception task were not significant. The hypotheses were confirmed, and we conclude that the Hidden Figures Test can be used as a test of spatial cognitive ability. These results also suggest that the FD/FI cognitive style is not in fact a cognitive style, but a cognitive ability.
