Browsing by Author "Gedizlioglu, Cinar"
Now showing 1 - 2 of 2
Article
A New Approach in Autism Diagnosis: Evaluating Natural Interaction Using Point of View (POV) Glasses (Elsevier, 2026)
Kayis, Hakan; Celik, Murat; Gedizlioglu, Cinar; Kayis, Elif; Aydemir, Cumhur; Hatipoglu, Arda; Ozbaran, Burcu
This study introduces an AI-assisted method based on examiner-worn Point of View (POV) glasses and computer vision analysis to provide objective behavioral data for the diagnosis of Autism Spectrum Disorder (ASD). The study included 29 children with ASD and 27 children without ASD, aged between 17 and 36 months. During semi-structured naturalistic interactions, the examiner wore POV glasses equipped with a scene camera that captured the child's face from an eye-level perspective, preserving ecological validity. Behavioral parameters, including facial expressions, approximate social gaze (operationalized as the orientation of the child's eyes toward the POV camera), and head mobility, were extracted using OpenFace and MediaPipe and subsequently analyzed with machine learning techniques. Statistical analyses revealed that total social gaze duration, longest social gaze, social smiling, number of responses to name, response latency, response duration, social responsiveness, and head movements along the z-axis had p-values ≤ 0.05, while head movements on the x- and y-axes, total head movement, and rapid head movements had p-values > 0.05. The classification model developed using decision trees and the AdaBoost algorithm demonstrated high performance, achieving an accuracy of 91.07% and a sensitivity of 89.65%. These findings support the clinical applicability of examiner-worn POV recordings for early ASD detection and highlight their potential to complement traditional, subjective assessment methods.

Article
A Novel Approach to Depression Detection Using POV Glasses and Machine Learning for Multimodal Analysis (Frontiers Media SA, 2025)
Kayis, Hakan; Celik, Murat; Kardes, Vildan Cakir; Karabulut, Hatice Aysima; Ozkan, Ezgi; Gedizlioglu, Cinar; Atasoy, Nuray
Background: Major depressive disorder (MDD) remains challenging to diagnose due to its reliance on subjective interviews and self-reports. Objective, technology-driven methods are increasingly needed to support clinical decision-making. Wearable point-of-view (POV) glasses, which capture both visual and auditory streams, may offer a novel solution for multimodal behavioral analysis.
Objective: This study investigated whether features extracted from POV glasses, analyzed with machine learning, can differentiate individuals with MDD from healthy controls.
Methods: We studied 44 MDD patients and 41 age- and sex-matched healthy controls (HCs) aged 18-55 years. During semi-structured interviews, POV glasses recorded video and audio data. Visual features included gaze distribution, smiling duration, eye-blink frequency, and head movements. Speech features included response latency, silence ratio, and word count. Recursive feature elimination was applied. Multiple classifiers were evaluated, and the primary model, ExtraTrees, was assessed using leave-one-out cross-validation.
Results: After Bonferroni correction, smiling duration, center gaze, and happy face duration showed significant group differences. The multimodal classifier achieved an accuracy of 84.7%, sensitivity of 90.9%, specificity of 78%, and an F1 score of 86%.
Conclusions: POV glasses combined with machine learning successfully captured multimodal behavioral markers distinguishing MDD from controls. This low-burden, wearable approach demonstrates promise as an objective adjunct to psychiatric assessment. Future studies should evaluate its generalizability in larger, more diverse populations and real-world clinical settings.
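The first abstract names decision trees boosted with AdaBoost as the classification model. Below is a minimal sketch, assuming a scikit-learn setup, of how such a classifier could be trained on a tabular matrix of the extracted behavioral measures; the feature matrix, labels, and hyperparameters are placeholders, not the authors' published configuration.

# Sketch of the classification stage named in the first abstract, assuming the
# per-child behavioral measures have already been extracted (e.g. with
# OpenFace/MediaPipe) into a tabular feature matrix. All data are synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per child, one column per behavioral
# measure (social gaze duration, social smiling, response latency, ...).
rng = np.random.default_rng(0)
X = rng.normal(size=(56, 8))        # 56 children, 8 features (placeholder data)
y = rng.integers(0, 2, size=56)     # 1 = ASD, 0 = non-ASD (placeholder labels)

# Shallow decision trees boosted with AdaBoost; hyperparameters are illustrative.
# The estimator= keyword requires scikit-learn >= 1.2 (older versions use base_estimator=).
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),
    n_estimators=100,
    random_state=0,
)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.3f}")

Boosting shallow trees in this way is a common choice for small tabular datasets, which matches the modest sample size (56 children) reported in the abstract.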

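The second abstract describes recursive feature elimination followed by an ExtraTrees classifier evaluated with leave-one-out cross-validation. The sketch below, again using scikit-learn and placeholder data, illustrates how those steps could be chained; the feature counts and hyperparameters are assumptions, not the study's settings.

# Sketch of the feature-selection and evaluation steps named in the second
# abstract: RFE, then an ExtraTrees classifier scored with leave-one-out CV.
# The feature matrix and its dimensions are synthetic placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(85, 12))       # 85 participants, 12 visual + speech features (placeholder)
y = rng.integers(0, 2, size=85)     # 1 = MDD, 0 = healthy control (placeholder labels)

model = Pipeline([
    # Recursive feature elimination; the number of retained features is illustrative.
    ("rfe", RFE(ExtraTreesClassifier(n_estimators=100, random_state=0), n_features_to_select=6)),
    ("clf", ExtraTreesClassifier(n_estimators=200, random_state=0)),
])

# Leave-one-out cross-validation, as used for the primary model in the abstract.
scores = cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="accuracy")
print(f"LOOCV accuracy: {scores.mean():.3f}")

Wrapping the selector and the classifier in one pipeline keeps feature selection inside each leave-one-out fold, which avoids leaking information from the held-out participant.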
