May 09, 2025 at 10am (Online Talk)

Title: Visual awareness is more graded for high load on attention compared to low load 

Speaker: Suraj Kumar (Indian Institute of Technology Kanpur, India; invited by Aleksandar Bulajic)

Abstract: Whether visual awareness is a discrete or graded phenomenon is still a topic of contention among theories of consciousness. How different attentional processes influence the discreteness or gradedness of awareness is likewise contested. Recent empirical findings show that different processes associated with attention influence the gradedness of awareness differently. For example, Thiruvasagam & Srinivasan (2021) found that awareness is more graded when the scope of attention is broad than when it is narrow. However, how other processes associated with attention (e.g., load on attention) influence the gradedness of awareness is still not clear. In the current study, we investigated this issue using Lavie's load paradigm in a dual-task framework, in which load was manipulated in a concurrent/primary task while the task of interest was a secondary task. We used a letter identification task (identifying an X or N presented among distractors) as the primary task and a scene identification task (indoor/outdoor) as the secondary task. The contrast of the scene images in the secondary task was varied across nine levels to obtain a psychometric curve and extract slope and threshold parameters for both objective performance (accuracy) and subjective clarity ratings of the scenes. We hypothesized more graded awareness in the high-load condition than in the low-load condition, since fewer processing resources are available during high load because of their engagement in the primary task. Thirty-three participants (mean age: 24.21 years) took part in the experiment. Following previous studies in the literature, we used the slope parameter to infer the gradedness of awareness. Results showed higher slope values in the high-load condition than in the low-load condition for both objective performance and subjective clarity ratings of the scenes.
From these results, we conclude that visual awareness for scene perception is more graded when load on attention is high than when it is low. These findings support theories of consciousness that postulate awareness to be graded and dependent on different attentional processes. Furthermore, they challenge existing proposals in the literature that postulate gist perception to be attention-free.
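The psychometric-curve analysis described above can be illustrated with a short sketch. The cumulative-Gaussian form, the 0.5 guessing rate for the two-choice scene task, the contrast values, and the trial counts are all illustrative assumptions, not the study's actual fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Nine hypothetical contrast levels for the scene images.
contrasts = np.linspace(0.05, 0.45, 9)

# Cumulative-Gaussian psychometric function for a two-choice task:
# mu is the threshold; the slope at threshold is inversely related to sigma.
def psychometric(x, mu, sigma):
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

# Simulated proportion-correct data (60 trials per contrast level).
rng = np.random.default_rng(0)
p_true = psychometric(contrasts, 0.25, 0.08)
accuracy = rng.binomial(60, p_true) / 60

# Fit the curve and extract the threshold (mu) and slope parameters.
(mu_hat, sigma_hat), _ = curve_fit(
    psychometric, contrasts, accuracy,
    p0=[0.25, 0.1], bounds=([0.0, 1e-3], [1.0, 1.0]))
slope_at_threshold = 0.5 * norm.pdf(0.0, scale=sigma_hat)
```

The fitted slope and threshold can then be compared between the low- and high-load conditions; the same procedure applies to subjective clarity ratings once they are scaled to a comparable range.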

 

Zoom Link: https://uni-kl-de.zoom-x.de/j/67529416901

May 16, 2025 at 10:15am 

Title: Managing Expectations of University Students in Human-Robot Interaction: Integrating Open-Source Generative AI on a Highly Anthropomorphic Humanoid

Speaker: Ashita Ashok (Robotics Research Lab, RPTU Kaiserslautern)

Abstract: This talk explores the integration of open-source generative AI models, including large language models (LLMs), visual question answering (VQA) systems, and speech recognition libraries, on the highly anthropomorphic, upper-torso humanoid robot Ameca. Designed as a social robot for university settings, Ameca combines a humanlike appearance with advanced AI capabilities to support interactive conversations with students. However, user studies reveal a striking expectation mismatch: while some students perceive the robot as conscious or highly intelligent, others quickly deem it unintelligent when it makes typical actuator or AI errors; both reactions are linked to the uncanny valley phenomenon. This discrepancy hinders effective interaction and evaluation of the developed systems and raises the question of whether highly humanoid robots are needed in all human-robot interaction (HRI) contexts. These reactions further highlight the risks of deploying such humanlike embodiments without clearly communicating their limitations. The thesis argues that although anthropomorphism can foster engagement, it also amplifies misunderstanding, especially when the robot's appearance exceeds its actual capabilities. Two strategies are proposed to mitigate this mismatch: (1) interactive teaching, where users teach the robot as a tutee, leading to recalibrated expectations; and (2) use in foreign-language learning contexts, where the robot's role as a non-judgmental peer supports speaking practice. The work underscores the need for transparency in communicating the role of generative AI, model limitations, and biases to ensure ethical and trustworthy HRI. Ultimately, we aim to provide guidelines for designing socially acceptable humanoid robots, emphasizing better alignment of appearance with capabilities to improve user trust and engagement.

Location: RPTU - Campus Kaiserslautern, Building 57, Room 315

OR

Zoom Link: https://uni-kl-de.zoom-x.de/j/67529416901

June 06, 2025 at 10:15am 

Title: Does Sadness Make Us More or Less Creative? An fNIRS Study on the Impact of Emotional States on Creative Thinking

Speaker: Zhino Ebrahimi (Cognitive Science, RPTU Kaiserslautern; supervisors: Thomas Lachmann & Saskia Jaarsveld)

Abstract: This study investigated the effect of emotional states on creativity at the behavioral, psychophysiological, and brain-functional levels. To this end, we developed the Common Uses Task (CUT) to evaluate convergent thinking while closely matching the task requirements of the Alternative Uses Task (AUT), which is often used to assess divergent thinking. In the AUT, participants generated multiple uncommon uses for an object, while the CUT required them to identify common uses for an object. Before each CUT and AUT period, a neutral or sad IAPS picture was used for emotion induction; afterward, participants rated the pictures for valence and arousal. We measured brain function using fNIRS across 40 optical channels covering frontal, parietal, and posterior temporal regions, and sympathetic activity using GSR. Forty-five participants took part in the study. We conducted 2×2 (task × emotion) Bayesian rmANOVAs to assess evidence for differences or equivalences in ratings and in channel-wise brain responses during the CUT and AUT. Our analysis provides very strong evidence (log10(BF10) > 2.0) for differences in valence and arousal ratings between neutral and sad IAPS pictures. Regarding brain activity, we observed strong evidence (log10(BF10) > 1.0) for increased bilateral activation in the DLPFC during the AUT compared to the CUT. Additionally, in 22 of 40 channels, encompassing the DLPFC, the entire parietal region, and the posterior temporal cortex (bilaterally), we found moderate evidence (log10(BF10) < -0.5) against a task × emotion interaction. Our data thus provide evidence for the absence of an effect of emotional state on creativity-related brain activity.
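The log10(BF10) cutoffs quoted above can be made concrete with a small helper. The category labels below are a hypothetical Jeffreys-style mapping built around the abstract's own thresholds, not an official scale:

```python
def evidence_label(log10_bf10: float) -> str:
    """Map log10(BF10) to an evidence category.

    Positive values favor the alternative (H1), negative values
    favor the null (H0); cutoffs follow the abstract's thresholds.
    """
    if log10_bf10 > 2.0:
        return "very strong evidence for H1"
    if log10_bf10 > 1.0:
        return "strong evidence for H1"
    if log10_bf10 > 0.5:
        return "moderate evidence for H1"
    if log10_bf10 < -0.5:
        return "moderate evidence for H0"
    return "anecdotal / inconclusive"

# Example: channels with log10(BF10) < -0.5 for the interaction count
# as moderate evidence for the null, i.e. no task x emotion effect.
```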

Location: RPTU - Campus Kaiserslautern, Building 57, Room 315

OR 

Zoom Link:  https://uni-kl-de.zoom-x.de/j/67529416901?pwd=Q8aeQg0ZCKoR5Jz8bgqwjfwY7K7g3i.1

 

June 13, 2025 at 10:15am 

Title: Contributions to the Design and Application of Integrated Spiking Multi-Sensor Electronics with Self-X Properties for Future Robust Integrated Intelligent Systems

Speaker: Hamam Abd (Cognitive Integrated Sensor Systems - RPTU Kaiserslautern - supervisor: Andreas König)

Abstract: This presentation explores a novel solution to a common challenge in modern electronics: designing sensor interfaces that are energy-efficient, adaptable, and reliable, especially in changing or resource-limited environments. These challenges are becoming more pressing in areas like IoT, robotics, autonomous systems, and industrial automation, where sensors must operate under varying conditions without frequent recalibration. To address this, SAFEX is introduced: a Neuromorphic Spiking Analog Front-End with Self-X capabilities. Unlike conventional sensor interfaces that rely on fixed analog processing, SAFEX is inspired by how the brain processes information using spikes (brief electrical pulses), making it naturally more adaptive and efficient.

The system is made up of two key components:

• The Adaptive Sensor Signal-to-Spike Converter (ASSC) converts incoming analog signals into spike timings. It automatically adjusts to the input signal range without requiring extra amplification or level-shifting hardware.

• The Self-Adaptive Spike-to-Digital Converter (SA-SDC) interprets the spike timings and produces digital output, adapting in real-time to maintain accuracy. 

This design allows the system to maintain performance even as environmental or power conditions change. For example, it significantly reduced signal distortion metrics: DNL dropped from 0.6 to 0.34 LSB, and INL from 2.07 to 0.53 LSB. Even under reduced supply voltage, the system corrected previously missing codes and maintained performance. SAFEX was also validated using real-world magnetic sensors: when the signal strength dropped due to sensor repositioning, SAFEX adapted automatically and cut the measurement error nearly in half without manual recalibration. In summary, SAFEX offers a smarter, more resilient way to process sensor data. Its built-in self-calibration, self-optimization, and self-compensation make it a scalable and cost-effective platform for next-generation embedded and edge applications.
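The two-stage split described above (analog-to-spike conversion with range adaptation, then spike-to-digital decoding) can be caricatured in a few lines of Python. The time-to-first-spike code and all names here are illustrative assumptions, not SAFEX's actual circuit behavior:

```python
class SpikeEncoder:
    """Toy ASSC-like stage: amplitude -> spike time, with an
    adaptive input range (a crude stand-in for self-X adjustment)."""

    def __init__(self):
        self.lo, self.hi = 0.0, 1.0  # running estimate of input range

    def encode(self, x: float, t_max: float = 1.0) -> float:
        # Track the observed signal range online.
        self.lo, self.hi = min(self.lo, x), max(self.hi, x)
        # Time-to-first-spike code: larger amplitudes spike earlier.
        level = (x - self.lo) / (self.hi - self.lo)
        return t_max * (1.0 - level)


def decode(t_spike: float, lo: float, hi: float, t_max: float = 1.0) -> float:
    """Toy SA-SDC-like stage: recover the amplitude from spike timing,
    given the encoder's current range."""
    return lo + (hi - lo) * (1.0 - t_spike / t_max)
```

Because the decoder reads the encoder's current range, the pair stays consistent even as that range adapts, which is the property that lets a spiking front end tolerate drifting signal amplitudes without manual recalibration.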

Location: RPTU - Campus Kaiserslautern, Building 57, Room 315

OR

Zoom Link:  https://uni-kl-de.zoom-x.de/j/67529416901?pwd=Q8aeQg0ZCKoR5Jz8bgqwjfwY7K7g3i.1

July 04, 2025 at 10:15am 

Title: Suppression of Afferent Visual Information in a Patient with Dissociative Identity Disorder and State-Dependent Blindness

Speaker: Torsten Wüstenberg (University of Heidelberg; invited by Zhino Ebrahimi)

Abstract: Dissociative identity disorder (DID, DSM-IV-TR: 300.14) is one of the most mysterious – very likely trauma-associated – psychiatric disorders. It is characterized by a spectrum of symptoms that can range from highly cognitive to basic sensorimotor. Patients have often been suspected of merely simulating these symptoms. Due to the stochastic nature of symptom expression and remission, data on the neurophysiological basis of this disorder are rare. I will present the case of patient BT, who developed DID after trauma and had been diagnosed as legally blind by neuro-ophthalmic examination including visual evoked potentials (VEPs). After 15 years of blindness, she regained sight and showed normal VEPs in one of her personality states, although VEPs remained absent in a residual blind state. Retinotopic mapping with functional MRI (fMRI) revealed that V1 and thalamic activity were absent during the blind state but normal during the sighted state. These findings suggest that BT responded to her trauma with a blind personality state attained by complete suppression of visual input at the thalamic level. The case demonstrates a fundamental ability of the brain to block early visual processing in rapid response to personality changes, preventing conscious visual processing at cortical levels. It is used to illustrate the use and possibilities of modern neuroscientific methods (EEG, fMRI: retinotopy, brain response, and functional connectivity) for investigating extreme brain states, and their value for our understanding of basic brain mechanisms such as conscious vision.

Location: RPTU - Campus Kaiserslautern, Building 57, Room 315

OR 

Zoom Link:  https://uni-kl-de.zoom-x.de/j/67529416901?pwd=Q8aeQg0ZCKoR5Jz8bgqwjfwY7K7g3i.1

July 11, 2025 at 10:15am 

Title: Is ChatGPT the better scientist? Exploring human and AI-generated scientific creativity

Speaker: Vera Eymann (Cognitive Science RPTU Kaiserslautern)

Abstract: In recent years, generative artificial intelligence (genAI) has entered the field of creativity and might even mark a turning point for some creative domains (e.g., painting, music creation, poetry, or creative story writing). This raises the question of whether AI also marks a turning point for scientific creativity, which comprises the ability to develop new ideas or methodological approaches in science. In our study, we use a new scientific creativity task to investigate the extent to which artificial intelligence - in this case ChatGPT-4 - can generate creative ideas in a scientific context. Specifically, we compare AI-generated responses with those of graduate students in terms of their ability to generate scientific hypotheses, design experiments, and justify their ideas for different fictitious research scenarios. We asked students to write, and prompted ChatGPT to generate, a brief version of a research proposal containing four separable assignments (i.e., formulating a hypothesis, designing an experiment, listing the required equipment, and justifying the chosen method). Using a structured (blinded) rating, experts from the respective scientific fields evaluated the students' research proposals and those generated by ChatGPT in terms of their scientific creativity. Our results indicate that ChatGPT received significantly higher overall scores and, even more crucially, exceeded students on sub-scores measuring the originality and meaningfulness of ideas. In addition to the statistical evaluation, we qualitatively assess our data, providing a more detailed report of subtle differences between students' and AI-generated responses. Lastly, challenges and potential opportunities for future research will be discussed.

Location: RPTU - Campus Kaiserslautern, Building 57, Room 315

OR 

Zoom Link:  https://uni-kl-de.zoom-x.de/j/67529416901?pwd=Q8aeQg0ZCKoR5Jz8bgqwjfwY7K7g3i.1