Spatial heterogeneity of vase-face perception

Rubin’s famous vase-face illusion is an ambiguous image that can be perceived either as a vase or as two faces in profile. Dr Nonie Finlayson and Victorita Neacsu published a new study investigating how this illusion depends on where in your visual field the image appears. Previous research by our lab and others had already shown that the appearance of visual stimuli can vary dramatically between locations in the visual field, in a way that is unique to each person – something like a “perceptual fingerprint”. In Victorita’s MSc project, we showed that there is similar variability for ambiguous vase-face images: at one location a person may be more likely to report seeing the vase, while at another they tend to see the faces. In follow-up experiments, Nonie tested what potential mechanisms could underlie this variability. Our findings suggest that this is a fairly basic effect, related to how your basic visual ability differs between locations, rather than being due to complex functions like face perception or cognitive abilities like interpreting the whole from its parts.

Finlayson, NJ, Neacsu, V, & Schwarzkopf, DS (2020). Spatial heterogeneity in bistable figure-ground perception. i-Perception 11(5): 1-16.

SamSrf 7 released

The latest major version update of our pRF mapping toolbox SamSrf (“Seriously Annoying Matlab Surfer” if you must ask…) has been released. In this update, we improved several aspects of the model fitting procedure to give more precise pRF estimates, added support for parallel computing, and added functions allowing you to simulate and validate pRF models. You can find SamSrf on OSF:

https://osf.io/2rgsm/

While we believe the model fitting algorithm is stable, as with any major update there is a chance of instabilities, so we are particularly interested in hearing from anyone reanalysing an older data set with the new version. So please get in touch.
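For readers less familiar with how pRF models work, below is a minimal sketch of the kind of forward model that pRF fitting and simulation rest on: a 2D Gaussian receptive field whose predicted response on each volume is its overlap with the stimulus aperture, convolved with a haemodynamic response function. This is purely illustrative Matlab – it is not SamSrf’s actual interface, and all variable names and numbers are made up.

[x, y] = meshgrid(linspace(-10, 10, 100));        % 100 x 100 grid spanning ±10 deg of visual angle
x0 = 3; y0 = -1; sigma = 2;                       % hypothetical pRF centre and size (deg)
prf = exp(-((x - x0).^2 + (y - y0).^2) ./ (2 * sigma^2));   % 2D Gaussian pRF profile

nVols = 200;                                      % hypothetical number of fMRI volumes
apertures = rand(100, 100, nVols) > 0.5;          % placeholder binary apertures (real ones are bar/wedge masks)

pred = zeros(nVols, 1);
for t = 1:nVols
    frame = apertures(:, :, t);
    pred(t) = sum(frame(:) .* prf(:));            % predicted neural response = overlap of pRF with aperture
end

ts = 0:1:32;                                      % time in seconds, assuming TR = 1 s
hrf = gampdf(ts, 6, 1) - gampdf(ts, 16, 1) / 6;   % canonical two-gamma HRF (needs the Statistics Toolbox)
bold = conv(pred, hrf(:));
bold = bold(1:nVols);                             % predicted BOLD time course

Model fitting then simply searches for the pRF parameters (x0, y0, sigma) that maximise the agreement between this prediction and a voxel’s measured time series; simulation runs the same forward model on known ground-truth parameters to check how well they can be recovered.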

 

Cortical representation of perceptual grouping

Susanne Stoll published her first article, in which she investigated how perceptual grouping of moving stimuli is represented in human visual cortex. We used a novel searchlight method for projecting brain activity back into visual space. Essentially, we used population receptive fields, measured independently via retinotopic mapping, as encoding models to infer which parts of the visual field produced a neural response. There have been a number of studies using similar approaches. What is different about ours is that it produces comparably clear reconstructions while remaining quite straightforward.
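To give a flavour of the general idea (though not the exact searchlight procedure used in the paper), here is a minimal sketch of back-projecting voxel responses into visual space using independently estimated pRF parameters as an encoding model. All variable names and numbers are hypothetical.

nVox = 5000;                                      % hypothetical number of visually responsive voxels
x0    = randn(nVox, 1) * 4;                       % pRF centre x (deg), from independent retinotopic mapping
y0    = randn(nVox, 1) * 4;                       % pRF centre y (deg)
sigma = 0.5 + rand(nVox, 1) * 2;                  % pRF size (deg)
resp  = randn(nVox, 1);                           % voxel response amplitudes in the condition of interest

[gx, gy] = meshgrid(linspace(-10, 10, 101));      % visual field grid to reconstruct onto
recon = zeros(size(gx));
wsum  = zeros(size(gx));

for v = 1:nVox
    profile = exp(-((gx - x0(v)).^2 + (gy - y0(v)).^2) ./ (2 * sigma(v)^2));
    recon = recon + resp(v) .* profile;           % Gaussian pRF profile weighted by the voxel's response
    wsum  = wsum + profile;                       % summed coverage for normalisation
end
recon = recon ./ max(wsum, eps);                  % normalise so uneven visual field coverage cancels out

imagesc(linspace(-10, 10, 101), linspace(-10, 10, 101), recon); axis xy; axis image;

The appeal of this kind of approach is that once the pRF parameters are in hand, the reconstruction itself is just a weighted sum, which is what keeps it straightforward compared to more elaborate decoding schemes.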

The first experiment in this study investigated a bistable illusion that can be perceived in either a local or a global state, with conscious experience constantly fluctuating between the two. This allowed us to disentangle the neural signature of actual perceptual grouping from the underlying physical stimulus, which presumably remains constant. We replicated previous findings that early visual cortex (especially V1) shows suppressed responses to the global compared to the local state, whereas higher, object-sensitive regions showed a stronger response to the global state. Critically, the suppression in early visual cortex was widespread. In follow-up experiments with non-ambiguous motion stimuli designed to broadly mimic the grouping conditions of our bistable stimuli, we found suppression all over visual cortex, including higher areas. This demonstrates that the suppression is not specifically related to perceptual grouping of local features into global objects. Moreover, it is probably not universal across the cortex but is instead diffusely localised to the general location of the stimuli.

[Figure: Diamond-V1]

Stoll, S, Finlayson, NJ, & Schwarzkopf, DS (2020). Topographic signatures of global object perception in human visual cortex. NeuroImage 220: 116926.

Comparing pRFs between MRI scanners

Around the time Sam moved to New Zealand, he and Dr Catherine Morgan conducted a little experiment. Since we knew three individuals who would be visiting both London and Auckland within a few months, we decided to scan them twice on a pRF mapping paradigm. The first scan was on the 1.5T scanner at BUCNI in London, the second on a 3T scanner at CAMRI in Auckland. Naturally, there are quite a few differences between these sites in the scanning parameters and in the sequences used at the two magnetic field strengths. Our aim was not to test the specific effect of magnetic field strength. Rather, we sought to compare pRF estimates from two different sites with different scanners, using the standard parameters at each site. We did, however, keep some things constant, such as the voxel size, the temporal resolution, and the participant’s visual field of view in the scanner.

The findings, which are now published in F1000Research, suggest that the general retinotopic map organisation, as well as pRF size and cortical magnification estimates, are all pretty similar across sites. This is in spite of the fact that the signal-to-noise ratio of the 3T scanner is undeniably superior to that of the 1.5T. This is an important finding because it suggests we can directly compare pRF mapping data between different sites, which in turn opens up the possibility of conducting multi-site collaborations. Due to the small sample size in this study, we of course cannot rule out very subtle differences that we were simply unable to detect here. However, most pRF or retinotopic mapping studies tend to focus on single participants as case studies, so it is crucial that we observed consistent results from the same participants at different sites.

[Figure: ScannerComparisonMaps]

Morgan, C, & Schwarzkopf, DS (2020). Comparison of human population receptive field estimates between scanners and the effect of temporal filtering. F1000Research 8: 1681.

Mapping sequences bias pRF estimates

Dr Elisa Infanti has published the work she did during her postdoc in the lab. In it, we set out to ask whether pRF estimates in human visual cortex depend on expectation, in particular on the predictability of the mapping sequence used. Most visual field mapping studies use ordered stimulus designs, such as rotating wedges or bars sweeping across the visual field in a regular fashion. Some previous work has compared these to randomised designs, often finding differences in the parameter estimates. However, nobody had explicitly looked at whether the predictability of the stimulus matters.

She manipulated predictability in various ways: by training participants to recognise a regular sequence of non-adjacent stimulus locations, or simply by cueing them to the location of the next stimulus. She then compared these designs to traditional orderly sequences and to completely random designs. While there were considerable differences in pRF size estimates between random and orderly sequences, our results suggest that predictability does not affect pRF estimates. Interestingly, whether ordered sequences yield larger or smaller pRF sizes than random ones depends on other parameters, such as the cycle duration and/or the stimulus width.

[Figure: MappingSequences]

Infanti, E, & Schwarzkopf, DS (2020). Mapping sequences can bias population receptive field estimates. NeuroImage 116636, early online.

 

More on size perception biases

Sam published a study investigating the spatial heterogeneity of size perception biases as estimated with our MAPS procedure, which we have used in several previous studies. In a first experiment (conducted by research project student Samuel Spence a few years ago), we tested the effect of stimulus duration on perceptual biases and found that biases are stable for stimuli lasting up to 1 second. This is in spite of the fact that observers’ ability to discriminate the stimuli unsurprisingly increases with duration, and that many observers also find it harder to maintain accurate fixation for longer stimuli. The biases in the appearance of stimuli therefore seem to be pretty fundamental.

In a second experiment, we then compared these perceptual biases between the visual field meridians. Our previous research indicated that perceptual biases are more pronounced at locations encoded by larger population receptive fields in visual cortex. In turn, Silva and colleagues have shown that population receptive fields are larger on the vertical than on the horizontal meridian. Putting these findings together, we hypothesised that perceptual biases measured on the vertical meridian should be more pronounced than those on the horizontal meridian, and the experiment confirmed this prediction.

[Figure: Meridians-MAPS]

Schwarzkopf, DS (2019). Size perception biases are temporally stable and vary consistently between visual field meridians. i-Perception 10(5): 1–9.

Visual field maps from motion-defined stimuli

Dr Anna Hughes, a former colleague of ours at UCL who has since moved on to greener pastures, published a study in which we used motion-defined stimuli for pRF mapping. Most visual field mapping is done with solid high-contrast stimuli (moving bars or rotating wedges, etc.). Here we instead used random fields of moving dots and defined the stimulus location by means of the dot properties. The aim was to test whether stimuli defined, for instance, by motion coherence would selectively produce responses in higher visual areas believed to be involved in global motion processing. While our results were indeed consistent with that, we also observed similar results with control stimuli not defined by motion. Our findings therefore suggest that what determines activation in visual field mapping studies is not necessarily the stimulus feature – in fact, it could simply be the signal-to-noise ratio of the mapping signal.

[Figure: Motion-Maps]

Hughes, AE, Greenwood, JA, Finlayson, NJ, & Schwarzkopf, DS (2019). Population receptive field estimates for motion-defined stimuli. NeuroImage, in press.

 

 

Individual differences in gaze behaviour

Dr Benjamin de Haas published a study in which he and Alexios Iakovidis measured the gaze behaviour of participants (and reanalysed some already published data collected by others) while they looked at photographs of real-world scenes and events. They found that people vary considerably but consistently in terms of which kinds of objects they look at (e.g. faces, touchable objects, etc.). Since finishing his fellowship in the SamPenDu lab, Ben has moved to Germany, where he continued this work with Karl Gegenfurtner and collected a third data set to replicate the results he had found previously. Since gaze behaviour determines what visual information is foveated and processed at higher resolution, this variability could have important consequences for what information is prioritised and for the functional organisation of the visual systems of different observers. Our present results already hint at this being the case, at least as far as face recognition is concerned.

de Haas, B, Iakovidis, AL, Schwarzkopf, DS, & Gegenfurtner, KR (2019). Individual differences in visual salience vary along semantic dimensions. Proceedings of the National Academy of Sciences of the USA 

Our 1st (Annual?) Global Lab Meeting

The entire lab, except for Dr Elisa Infanti, who for practical reasons had already visited New Zealand in 2018, got together for the first time in Auckland. We were joined by Dr David Carmel of Victoria University of Wellington. We started off the day with short presentations by all the students, followed by longer discussions. After that we went to have lunch in a café in the heights of the Waitakere Ranges.

Then we drove through the clouds out west to Karekare Beach for a walk, leaving people to discuss freely (David and Sam talked at great length about replications and open reviewing).

As good vision scientists, we checked out the waterfall there for another real-life demonstration of the motion aftereffect. This successfully replicated the work of Kalpadakis-Smith, Schwarzkopf & Greenwood (2018) at ARVO.

These two people are randomers and probably had no useful insight into the replication crisis or open reviewing whatsoever

To conclude, we all went to Sam’s place to have drinks on the deck until late in the night, where we were joined at some point by a possum visitor (nasty invasive species bent on destroying the local wildlife but still bloody cute).


Again, we are very sorry that Elisa and the lab’s other postdoc incarnations – Christina, Ben, and Nonie – couldn’t join us. But we need to have more gatherings like this in the future!

