Social Perception as Bayesian Hypothesis Testing and Revision


We are grateful to the Leverhulme Trust for awarding us £462,995 to investigate how predictions help people make sense of the behaviour of others, and which neuro-cognitive mechanisms underlie these abilities. The work is led by Patric Bach in Aberdeen, in collaboration with Elsa Fouragnan and Giorgio Ganis in Plymouth and Paul Downing in Bangor.

Dr Katrina McDonough and Dr Michail Niklas work as postdoctoral research fellows on the grant and lead the experimental psychology and neuroimaging research streams, respectively.

Project overview

All human social interactions rely on the ability to “see” meaning and purpose in other people’s behaviour. We seem to just know why our child drags us towards a shop window, why our friend steers clear of the spider, or why our partner hands us a drink after a workout. This research project aims to reveal the neuro-cognitive mechanisms that underpin these fundamental abilities of human social understanding.

The project builds on recently developed theoretical frameworks that cast perception – social and otherwise – as an iterative process of Bayesian hypothesis testing and revision. Unlike previous approaches, these accounts do not assume that observed behaviour is simply matched to its meaning (e.g. that a smile signals happiness). Instead, they argue for a top-down process in which prior knowledge about others – their goals and beliefs – is constantly projected onto their behaviour and shapes how it is perceived. In this way, even ambiguous behaviour becomes imbued with meaning (“He still looks hungry!”). Clearly mismatching behaviour, in contrast, will stand out and can trigger revisions of prior assumptions, until they better fit the input (“He’s not eating that? Must be full already.”).
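The hypothesis-testing-and-revision loop described above can be illustrated with a minimal sketch. This is not the project's actual computational model; the hypotheses, cues, and likelihood values below are all invented for illustration. It simply shows how Bayes' rule lets a congruent observation strengthen a prior assumption about an actor's goal, while an incongruent one weakens it.

```python
# Illustrative sketch (hypothetical values, not the project's model):
# iterative Bayesian revision of beliefs about an observed actor's goal.

# Prior beliefs over two competing goal hypotheses.
priors = {"wants object": 0.5, "avoids object": 0.5}

# Assumed likelihoods: how probable each movement cue is under each goal.
likelihood = {
    "hand moves toward object": {"wants object": 0.8, "avoids object": 0.2},
    "hand moves away from object": {"wants object": 0.2, "avoids object": 0.8},
}

def revise(beliefs, observation):
    """One revision step via Bayes' rule: posterior ∝ likelihood × prior."""
    unnorm = {h: likelihood[observation][h] * p for h, p in beliefs.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

beliefs = priors
for obs in ["hand moves toward object", "hand moves toward object"]:
    beliefs = revise(beliefs, obs)

# After two congruent observations, "wants object" strongly dominates.
print(beliefs)
```

A clearly mismatching observation (the hand withdrawing) would pull the posterior back toward "avoids object" in exactly the same way, mirroring the prior-revision process the frameworks describe.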

Such frameworks have the potential to provide a unified account of social perception that explains both how we see meaning in others’ behaviour and how this knowledge is constantly updated as we watch them act.

[Figure 1: the experimental paradigm]

In our prior research, we have developed several novel methods that can reveal this predictive shaping of perception. Here, we use these techniques as a starting point to develop a computational model that describes how prior assumptions shape – and are shaped by – social perception, and to identify the neuro-cognitive mechanisms underlying these processes.

All studies build on our tried-and-tested experimental paradigms that measure the “fusing” of prior assumptions into the perception of others’ behaviour, revealed as a perceptual confirmation bias that distorts even lower-level characteristics of these actions. In these tasks (Fig 1), participants first receive information about the goals of an actor (e.g. they see them near an object and hear them say “I’ll take it!” or “I’ll leave it!”) and then briefly view the onset of an action, which either follows the expectations or not (e.g. they reach for/withdraw from the object). At some point on its course, the hand suddenly disappears and participants report its last seen position, either by pointing on a touch screen, or by comparing it with a probe stimulus that shows the same hand either further along the trajectory or not as far.

The results consistently reveal the expected shaping of perception. Thus, if an actor “wants” an object, they seem to reach further towards it than they really did, and further away from it if they wanted to avoid it. Further studies have shown that the perceptual biases measured in these different approaches are highly replicable and of large effect size. They occur spontaneously when watching humans act but are not found for inanimate objects. They reflect the confidence in one’s predictions and are found for a range of other expectations. For example, the same reach path appears higher when an obstacle in the way must be avoided than when the path is clear, and hands seem to move closer towards objects that fit the grip size. These biases therefore provide us with a unique window into the predictive shaping of perception and offer precise parametric measures that can be linked to model predictions and neuronal correlates.
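The bias measure behind these results can be sketched in a few lines. The trial values below are toy numbers invented for illustration (not data from the project): each trial records the hand's actual last-seen position and the position the participant reported, along the reach axis. The signed difference, averaged per expectation condition, gives the parametric bias measure — positive when the hand is judged further along the reach than it was, negative when it is judged less far.

```python
# Hypothetical sketch of the perceptual-bias measure. Trial values are
# toy numbers for illustration only: (condition, actual_x, reported_x),
# with x the hand's position along the reach axis in arbitrary units.
from statistics import mean

trials = [
    ("actor wants object", 10.0, 10.6),
    ("actor wants object", 12.0, 12.4),
    ("actor avoids object", 10.0, 9.5),
    ("actor avoids object", 12.0, 11.7),
]

def bias_by_condition(trials):
    """Mean signed displacement (reported - actual) per condition."""
    errors = {}
    for cond, actual, reported in trials:
        errors.setdefault(cond, []).append(reported - actual)
    return {cond: mean(errs) for cond, errs in errors.items()}

biases = bias_by_condition(trials)
print(biases)
```

With these toy numbers, the "wants" condition yields a positive bias (hand judged further towards the object) and the "avoids" condition a negative one, matching the pattern of results described above.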

Research objectives


The research program proceeds in three independent work packages. All use variants of the experimental paradigm described above. Participants view video snippets of actions (e.g. reaching for/withdrawing from objects) and report the hand’s last-seen location (on a touch screen or by comparing it to a probe stimulus). Comparing these judgments with what was actually presented yields robust quantitative measures of how perception is shaped by different expectations about the action.


Work package 1 will use behavioural/computational techniques to test (1) whether varying information about the actor’s mental states affects action perception, (2) whether these biases reflect “illusory” changes to visual perception (rather than mere changes in interpretation or memory), and (3) how visual perception, in turn, affects subsequent mental state attributions.

Work package 2 will combine EEG/ERP and fMRI neuroimaging with computational modelling techniques to (1) track in time how expectations affect perception and are updated by it, and (2) reveal the brain regions underlying these processes.


Work package 3 will use EEG and fMRI multivariate/machine-learning methods to test (1) whether expected actions are encoded in the brain similarly to actions that are actually observed, and (2) whether this similarity enables the superposition – and comparison – of expectation and perception.