Neurobehavioral Signatures of AI vs Natural Image Perception

Supervisors: Simon Ruffieux, Sami Rima, Denis Lalanne

Contact person: Simon Ruffieux

Student: Looking for student

Project status: Open

Year: 2025

As AI-generated visuals become indistinguishable from real-world imagery, the ability to detect synthetic content is increasingly relevant to media trust, misinformation prevention, and digital ethics. This study examines how humans perceive and differentiate AI-generated and real images, combining eye-tracking, electroencephalography (EEG), and electrodermal activity (EDA) to uncover cognitive and emotional markers of perception.

Participants from three distinct groups (general population, GP; creative professionals, CP; and AI/media specialists, AI/MS) will view AI-generated and real images (faces, landscapes, abstract patterns) while their visual attention patterns, neural responses, and autonomic arousal are recorded. Eye-tracking will capture fixation durations, saccades, and attentional shifts; EEG markers (N170, P300, and the late positive potential, LPP) will index differences in early visual processing, expectation violation, and emotional engagement; and EDA will capture physiological arousal in response to AI distortions, indicating subconscious recognition of unnatural features.
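
To make the planned measures concrete, the sketch below illustrates, under stated assumptions (synthetic single-channel epochs, a 300-600 ms P300 window, and placeholder sampling rate and trial counts), how P300 mean amplitude and fixation duration could be compared between AI-generated and real-image trials. It is a minimal illustration, not the project's actual analysis pipeline.

```python
# Illustrative sketch only: synthetic data stand in for real recordings.
# Array shapes, sampling rate, and the 300-600 ms P300 window are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sfreq = 250                      # sampling rate in Hz (placeholder)
n_trials, n_samples = 40, 250    # 1 s epochs, 0-1000 ms post-stimulus

# Simulated single-channel ERP epochs (microvolts) for the two conditions
erp_ai = rng.normal(0, 2, (n_trials, n_samples)) + 1.0   # AI-generated images
erp_real = rng.normal(0, 2, (n_trials, n_samples))       # real images

# Mean amplitude in a typical P300 window (300-600 ms after stimulus onset)
win = slice(int(0.30 * sfreq), int(0.60 * sfreq))
p300_ai = erp_ai[:, win].mean(axis=1)
p300_real = erp_real[:, win].mean(axis=1)

# Paired comparison across trials (in practice this would be run per participant)
t_p300, p_p300 = stats.ttest_rel(p300_ai, p300_real)

# Simulated per-trial mean fixation durations (ms) from the eye-tracker
fix_ai = rng.normal(320, 40, n_trials)
fix_real = rng.normal(290, 40, n_trials)
t_fix, p_fix = stats.ttest_rel(fix_ai, fix_real)

print(f"P300 mean amplitude: t={t_p300:.2f}, p={p_p300:.3f}")
print(f"Fixation duration:   t={t_fix:.2f}, p={p_fix:.3f}")
```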

We hypothesize that AI-generated images will elicit longer fixations, more saccadic movements, and heightened cognitive load, particularly when distortions are subtle. Neural markers (P300) and EDA responses are expected to peak for near-realistic but imperfect AI images, signaling subconscious detection of anomalies. Population differences should further refine these insights: creative professionals may show more efficient gaze patterns, while AI/media specialists may classify images faster but make more expectation-driven errors.
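
The group-level part of these hypotheses implies an image-type by population interaction. One way such an interaction could be tested is sketched below, assuming a long-format trial table with hypothetical column names and simulated values; the effect sizes, sample sizes, and variable names are illustrative only.

```python
# Sketch of a possible group-level analysis; variable names, effect sizes,
# and the simulated data below are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for g in ["GP", "CP", "AI_MS"]:
    for p in range(10):                      # 10 participants per group (placeholder)
        for image_type in ["real", "ai"]:
            for _ in range(20):              # 20 trials per condition (placeholder)
                base = 290 + (30 if image_type == "ai" else 0)
                rows.append({
                    "participant": f"{g}_{p}",
                    "group": g,
                    "image_type": image_type,
                    "fixation_ms": rng.normal(base, 40),
                })
df = pd.DataFrame(rows)

# Random intercept per participant; the image_type x group interaction is
# the term that would reflect population differences in the hypotheses above.
model = smf.mixedlm("fixation_ms ~ image_type * group",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```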

Findings will contribute to predictive models of AI perception, informing AI ethics, media verification, and user experience design. By identifying the neurobehavioral correlates of AI-generated media perception, this research will support efforts to enhance media literacy, improve AI content generation, and develop biometric tools for misinformation detection, contributing to a more informed and resilient digital society.

Keywords: generative AI, AI images, user study, digital neuroscience.

Document: Not yet available