CogSci 2022 

July 2022

Faculty and students presented the following papers at the 44th Annual Conference of the Cognitive Science Society (CogSci 2022), held in Toronto, Canada, from 27–30 July 2022.

  • Clickbait’s Impact on Visual Attention – An Eye Tracker Study – Vivek Kaushal, Sawar Sagwal, Dr. Kavita Vemuri. Research work as explained by the authors:

In this paper, we studied the impact of clickbait headlines on the distribution of visual attention over hyperlinked news articles. Visual attention is a driving factor in the ad-based revenue models that support online journalism. Importantly, it is also an indicator of the cognitive processes involved in reading and comprehension. We hypothesize that articles with clickbait headlines receive less visual attention when controlled for article content, on the premise that a significant proportion of clicks on clickbait headlines is driven by readers’ specific epistemic curiosity rather than by knowledge acquisition. An eye-tracker setup was used to infer visual attention from gaze-fixation analysis of data from 60 participants. Our results suggest that clickbait headlines significantly reduce visual attention on news articles. However, article content comprehension, measured by a recall test, was comparable for clickbait and non-clickbait headlines. Our findings add to the discussion on cognitive attention and the implications of using clickbait headlines for news publishers, newsreaders, and advertising agencies alike.

  • Exploring Empathy and a Range of Emotions Towards Protest Photographs – AadilMehdi J Sanchawala, Adhithya Arun, Rohan Chacko, Rahul Sajnani, and Dr. Kavita Vemuri (all student authors contributed equally). Research work as explained by the authors:

Images are a powerful medium for inducing emotional connection and empathy in people. They are of particular importance in the case of socio-political protests because of their potentially wide reach and the deep symbolism they hold. In this study, we aim to quantify the emotional connection a person experiences with a particular image type in the context of the 2020–21 farmers’ protest in India. Our study hypothesises that specific camera features (close-up/pan shots) and content (the presence of police, gender, and children, or what is termed ‘optics’) affect a viewer’s empathy and emotional connection with the image and with the genesis of the protest itself. We studied the strength of the emotions participants felt while viewing images with and without the feature of interest, and also checked for participants’ initial bias toward the particular protest. The initial dataset comprised 925 images scraped from online sites with ‘farmers protest’ as the search keyword. This set was cleaned manually for duplicates and relevance. Annotating for the presence of Crowds, Groups, Single Person, Night, Close-Up, Banners, Provocative, Police, Men, Women, Children, Violent, Agitated/Disruptive, Youth, and Old reduced the set to 204 images, each identified with a set of physical and semantic features. For the experimental paradigm, 40 of these annotated images were selected, and each image was scored on a range of emotions. From the statistical and dimensionality analyses, the main findings are that the presence of police and the close-up angle of protest images produced the highest variation in participants’ emotional responses.

In contrast, the gender of the people in the images did not have a statistically significant effect on participants’ emotional connection. Importantly, empathic participants responded negatively to images with violence. This preliminary study provides empirical evidence for the powerful role that the features of a photograph play in building public opinion, an understudied but critical factor in a world immersed in social media.

  • Psychological Flexibility Determines COVID-19 Peritraumatic Distress and Severity of Depression – Rishabh Singhal, Minaxi Goel, and Dr. Priyanka Srivastava. Research work as explained by the authors:

The social distancing, forced behavioral changes, and economic downturn brought on by COVID-19 have been associated with poor mental health and wellbeing. Depression and suicide are among the most widely predicted psychosocial risks of the pandemic crisis. Early studies evaluating the effect of COVID-19 on psychiatric health have succeeded in developing screening measures, but they have been limited in understanding its relation to individual psychological flexibility. An individual’s psychological flexibility not only determines the ability to fight against such adversities on an immediate timescale but also determines the future course of psychiatric treatment. We conducted an online study to examine the relationship between psychological flexibility and the risk of depression, and their relationship with COVID-19 peritraumatic distress. We used the Multidimensional Psychological Flexibility Inventory (MPFI), the Beck Depression Inventory (BDI), and the COVID-19 Peritraumatic Distress Index (CPDI) to measure these psychological factors. The results are discussed in light of individual psychological flexibility and its association with BDI and CPDI outcomes.

  • Relative Numerical Context Affects Temporal Processing – Anuj Shukla and Prof. Raju Bapi. Research work as explained by the authors:

A Theory of Magnitude (ATOM) proposes that space, time, and numbers are processed in the brain through a common magnitude system, suggesting that these dimensions potentially interact with one another. Some past work has tested these predictions by simultaneously presenting information from two magnitude systems (e.g., number and time) and asking participants to judge/reproduce the duration information. Several studies have reported that numerical magnitudes biased temporal judgments, i.e., large numerical magnitudes were perceived to last longer than small numerical magnitudes. However, these predictions have been verified predominantly when the large and small numerical magnitudes were presented in an intermixed fashion, with numerical magnitude varying randomly from trial to trial. To further investigate whether numerical context affects temporal processing on a sub-second timescale, we conducted two experiments (blocked-magnitude and mixed-magnitude) using a temporal bisection paradigm. In the blocked-magnitude experiment, participants were presented with small and large numbers in two separate blocks, whereas in the mixed-magnitude experiment, both small and large numbers were presented randomly within the same block. The numbers were presented for varying durations, and participants were asked to judge whether each presented duration was short or long. The results suggest that temporal judgments were affected when small and large numbers were randomly presented in an intermixed manner; such effects disappeared when the number magnitudes were presented in separate blocks. These results indicate a strong influence of numerical context, rather than numerical magnitude alone, on sub-second temporal judgments. We therefore suggest that the common magnitude system posited by ATOM might operate only when relative numerical magnitude information (large and small) is available.

  • Cross-view Brain Decoding – Subbareddy Oota, Jashn Arora, Manish Gupta, and Prof. Bapi Raju. Research work as explained by the authors:

How the brain captures the meaning of linguistic stimuli across multiple views is still a critical open question in neuroscience. Consider three different views of the concept apartment: (1) a picture (WP) presented with the target word label, (2) a sentence (S) using the target word, and (3) a word cloud (WC) containing the target word along with other semantically related words. Unlike previous efforts, which focus only on single-view analysis, in this paper we study the effectiveness of brain decoding in a zero-shot cross-view learning setup. Further, we propose brain decoding in the novel context of cross-view-translation tasks such as image captioning (IC), image tagging (IT), keyword extraction (KE), and sentence formation (SF). Using extensive experiments, we demonstrate that cross-view zero-shot brain decoding is practical, leading to ∼0.68 average pairwise accuracy across view pairs. Moreover, the decoded representations are sufficiently detailed to enable high accuracy for cross-view-translation tasks, with the following pairwise accuracies: IC (78.0), IT (83.0), KE (83.7), and SF (74.5). Analysis of the contribution of different brain networks reveals exciting cognitive insights: (1) a high percentage of visual voxels are involved in the image captioning and image tagging tasks, while a high percentage of language voxels are involved in the sentence formation and keyword extraction tasks; (2) the zero-shot accuracy of the model trained on the S view and tested on the WC view is better than the same-view accuracy of the model trained and tested on the WC view.

  • Deep Learning for Brain Encoding and Decoding – Subba Reddy Oota, Jashn Arora, Manish Gupta, Prof. Bapi Raju, and Mariya Toneva. Research work as explained by the authors:

How does the brain represent different modes of information? Can we design a system that can automatically understand what a user is thinking? We can make progress towards answering such questions by studying brain recordings from devices such as functional magnetic resonance imaging (fMRI). The brain encoding problem aims to automatically generate fMRI brain representations given a stimulus; the brain decoding problem is the inverse problem of reconstructing the stimulus given the fMRI brain representation. Both problems have been studied in detail over the past two decades, and the foremost attraction of studying their solutions is that they serve as additional tools for basic research in cognitive science and cognitive neuroscience. Recently, inspired by the effectiveness of deep learning models for natural language processing and computer vision, such models have been applied to neuroscience as well. In this tutorial, we discuss different kinds of stimulus representations and popular encoding and decoding architectures in detail. The tutorial provides a working knowledge of state-of-the-art methods for encoding and decoding, a thorough understanding of the literature, and a better understanding of the benefits and limitations of encoding/decoding with deep learning.
