Petra Vetter
Assistant Professor
Department of Psychology
Rue P.A. de Faucigny 2
1700 Fribourg
Research and Publications
Publications
25 publications
Semantic audio-visual congruence modulates visual sensitivity to biological motion across awareness levels
Cognition (2025) | Article
Decoding semantic sound categories in early visual cortex
Cerebral Cortex (2025) | Article
Some Key Ingredients for Becoming a Scientist, in: Women in Science. Experiences of Academics in Switzerland
Petra Vetter (2025), ISBN: 978-3-8376-7750-8 | Book chapter
Research Projects
10006924 - Auditory spatial attention and eye movement guidance in blindness and its use for sight rehabilitation
Status: Ongoing | Start: 01.01.2026 | End: 31.12.2028 | Funding: SNSF

Spatial attention is crucial for guiding our actions through a buzzing multisensory environment. It allows us to attend to a specific location and move our eyes there for further exploration. This process has been studied primarily for vision, but the role of the other senses, its dependence on vision, and its uses for visual rehabilitation remain underexplored. The main objective of this project is to investigate how spatial attention and eye movements are guided by audition in humans, how these brain mechanisms develop in blindness, and whether these insights can be used to improve sensory aids.

Our specific research aims are to investigate: (1) whether shifts of auditory spatial attention are linked to residual eye movement activity in blind individuals; (2) how auditory brain circuits interact with those for spatial attention and eye movement control in blindness compared to sighted participants; and (3) whether spatial attention guidance in blindness can be harnessed to improve sensory aids and sight restoration.

For Aim 1, we will use electro-oculography (EOG) and electroencephalography (EEG) to derive the location of spatial attention and eye gaze in sighted participants with their eyes closed (validated with eye tracking during eyes open). These EOG/EEG measures will then allow us to infer the location of spatial attention and eye gaze in blind individuals even when the eyes are non-functioning, and the EOG measure can be implemented in visual aids to select the attended location (see Aim 3). For Aim 2, functional magnetic resonance imaging (fMRI) together with computational methods will allow us to identify where different auditorily attended locations are represented in the brain. In the sighted, we expect auditory and visual spatial attention and eye movement brain circuits to represent attended locations. In the blind, we expect either similar brain areas to be involved, if these mechanisms are preserved despite vision loss, or different brain areas to be recruited due to brain plasticity. For Aim 3, we will implement the EOG/EEG measures from Aim 1 in sensory substitution devices and test, with computational and behavioural measures, whether attentionally selecting specific locations improves sensory aid use.

The results of this research will elucidate whether spatial attention and eye movement guidance and their neural mechanisms are tied to vision or whether they can develop through audition in the absence of sight. This research will thus demonstrate important interactions between the senses, their active guidance of attention and eye movements, and their dependence on visual experience, i.e. the balance of nature and nurture in their development. It will also help resolve a crucial problem for sight restoration by allowing blind users' attention to select the most relevant part of a scene to be translated by sensory aids, addressing the informational overflow and low resolution of current aids and greatly improving technological solutions for millions of visually impaired people worldwide.
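As a rough illustration of the EOG measure described in Aim 1, a horizontal EOG channel can be mapped to gaze angle with a simple calibration step. The sketch below is a minimal illustration only, assuming an approximately linear voltage-to-angle relationship and made-up calibration values; it is not the project's actual analysis pipeline.

```python
import numpy as np

def fit_eog_calibration(eog_uv, gaze_deg):
    """Fit a linear mapping from horizontal EOG voltage (microvolts) to gaze angle (degrees)."""
    # Horizontal EOG varies roughly linearly with gaze angle over moderate eccentricities,
    # so a least-squares line is often a reasonable first approximation.
    slope, intercept = np.polyfit(eog_uv, gaze_deg, deg=1)
    return slope, intercept

def estimate_gaze(eog_uv, slope, intercept):
    """Map new EOG samples to estimated horizontal gaze angles (degrees)."""
    return slope * np.asarray(eog_uv) + intercept

# Made-up calibration data: the participant fixates targets at known eccentricities.
calib_voltage = np.array([-120.0, -60.0, 0.0, 55.0, 118.0])  # microvolts (illustrative)
calib_angle = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])      # degrees

slope, intercept = fit_eog_calibration(calib_voltage, calib_angle)
print(estimate_gaze([-30.0, 90.0], slope, intercept))  # roughly -5 and 15 degrees
```

In practice, the same calibrated mapping could be applied continuously to cleaned EOG traces to read out where attention or gaze is currently directed, which is the quantity a sensory aid would need for Aim 3.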
Mapping Space in the Blind Brain

Status: Ongoing | Start: 01.02.2024 | End: 31.01.2029 | Funding: SNSF

How does the brain create a representation of the space around us? In the sighted, adjacent spatial locations of the outer world are mapped onto adjacent regions in the brain. But how are different spatial locations represented when visual input has been absent since birth? This project addresses the key conceptual challenge of whether topographic spatial location maps are a universal organisation principle of the human brain, or whether they depend on vision. Recent evidence, including from my own lab, suggests that the “visual” cortex in people blind from birth is organised similarly to that of the sighted and is used for sound representation (Vetter et al., 2020, Current Biology). The specific challenge is therefore whether “visual” cortices are actually used for representing external space in the blind.

The first goal of this project is to elicit representations of different spatial locations via audition and touch and to identify how and where in the brain spatial locations are mapped in the absence of vision. The second goal is to investigate which brain regions are causally involved in the spatial coding of auditory and tactile information. We will use beyond-state-of-the-art ultra-high-field functional MRI at 7 Tesla and advanced analysis methods to identify spatial location maps, probed with novel auditory and tactile spatial localisation tasks, in both congenitally blind and blindfolded sighted individuals. We will also use transcranial magnetic stimulation, alone and in combination with fMRI, to identify the causal role of brain areas spatially coding auditory and tactile information.

This project is ground-breaking as it will establish whether topographic spatial location mapping is a vision-independent organisation principle of the human brain. Understanding how space representation works in blindness can significantly advance the development of visual prostheses and aids for blind and visually impaired individuals.
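The "advanced analysis methods" used to ask where spatial locations are represented typically include multivariate pattern decoding of fMRI responses. The sketch below shows the general cross-validated decoding logic on simulated placeholder data with a generic scikit-learn classifier; the project's actual 7 Tesla analyses are not specified here and would differ in detail (e.g. run-wise cross-validation on real single-trial estimates).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: one activity pattern per trial (n_trials x n_voxels),
# labelled with the probed sound/touch location (0 = left, 1 = centre, 2 = right).
n_trials, n_voxels = 120, 500
X = rng.normal(size=(n_trials, n_voxels))   # placeholder voxel patterns
y = rng.integers(0, 3, size=n_trials)       # placeholder location labels

# Cross-validated linear decoding; chance level for three locations is 1/3.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

Above-chance decoding within a region of interest (e.g. "visual" cortex in congenitally blind participants) would indicate that the attended location is represented there, which is the kind of evidence the first goal seeks.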
How audition enhances visual perception and guides eye-movements

Status: Ongoing | Start: 01.09.2020 | End: 30.11.2025 | Funding: SNSF

Background and rationale: To create the visual world that we perceive, the brain uses information not only from the eyes but also from the other senses. While there is evidence for top-down information from non-visual brain areas influencing visual processing, it is still unclear what kind of information is communicated top-down and what function this influence has for actual visual perception and action. In the case of auditory influences on vision, previous multisensory research focussed on the integration of simple auditory and visual signals (beeps and flashes) across time and space. However, beeps and flashes do not carry any ecologically valid information content, and thus the influence of the actual semantic content of sounds on visual perception and action remains unexplored. In everyday life, it is crucial that the content of sound information, e.g. the sound of an approaching car behind us, is matched correctly to the visual environment so that we can see the approaching car quickly and react accordingly.

Overall objectives and specific aims: The overall objective of this project is to investigate how the information content of natural sounds influences and enhances visual perception and guides actions. Two specific research aims will be addressed. Aim 1: Can content-specific sound information guide eye movements, and thereby actions and perception? Aim 2: Can content-specific sound information resolve ambiguities in vision and thus enhance visual perception?

Methods: For Aim 1, we will determine whether eye movements are guided to natural visual scenes that are suppressed from conscious awareness when participants hear a semantically matched natural sound. Visual and auditory stimuli will be matched such that they are semantically more closely or more distantly related. For Aim 2, we will render visual stimuli ambiguous by displaying different stimuli to each eye and determine whether semantically matching sounds can disambiguate the visual stimuli and make the matching image visible. In addition, we will use functional MRI and brain decoding techniques to determine whether sound content is represented in early visual cortex at the time the visual stimulus is disambiguated.

Expected results and their impact for the field: We expect that sound content guides eye movements to semantically matching visual scenes even in the absence of awareness of these scenes, and that this guidance is modulated by the semantic relatedness between sound and image. Furthermore, we expect that sound content resolves ambiguities in ambiguous visual situations and that the brain uses this content-specific sound information to disambiguate visual perception. Ultimately, once we have demonstrated how audition can enhance visual perception and guide actions, these insights can be used to develop rehabilitation devices for visually impaired people, to improve sensory substitution devices for the blind, and to enhance multimedia environments in which audition and vision have to be combined.
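Suppression from conscious awareness, as used for Aim 1, is commonly measured with breakthrough-time paradigms such as continuous flash suppression, where shorter suppression for semantically congruent sounds would indicate an influence of sound content on visual processing. The sketch below, using simulated placeholder numbers only, illustrates how such breakthrough times might be compared across congruent and incongruent sound conditions; it is an assumption-laden illustration, not the project's analysis plan.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated breakthrough times (seconds) under continuous flash suppression,
# one mean value per participant and condition (24 hypothetical participants).
congruent = rng.lognormal(mean=0.8, sigma=0.3, size=24)
incongruent = congruent * rng.lognormal(mean=0.1, sigma=0.1, size=24)

# Suppression durations are right-skewed, so compare log-transformed times pairwise.
t, p = stats.ttest_rel(np.log(incongruent), np.log(congruent))
print(f"t({len(congruent) - 1}) = {t:.2f}, p = {p:.4f}")
```

A reliably positive difference (longer suppression for incongruent sounds) would be consistent with the expectation that semantically matching sound content helps suppressed scenes reach awareness sooner.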