GCRIS
Browsing by Author "Kayis, Hakan"

Now showing 1 - 3 of 3
    Article
    Leveraging Point-of-View Camera and MediaPipe for Objective Hyperactivity Assessment in Preschool ADHD
    (Frontiers Media SA, 2026) Kayis, Hakan; Gedizlioglu, Cinar
    Background: Attention-Deficit/Hyperactivity Disorder (ADHD) often emerges in early childhood, with hyperactivity and impulsivity constituting the most prominent symptoms during the preschool period. Current assessment approaches rely largely on clinical interviews and behavior rating scales, which are susceptible to subjectivity and contextual bias. Objective, ecologically valid, and low-burden methods for quantifying hyperactivity in preschool settings remain limited.
    Methods: This observational, cross-sectional study investigated whether movement-based features extracted from teacher-worn point-of-view (POV) video recordings could differentiate preschool children at risk for ADHD-related hyperactivity from non-hyperactive peers. Fifty-one preschool children (48-60 months) participated in a standardized, three-minute storytelling interaction conducted in a familiar classroom environment. Video recordings were processed using MediaPipe pose estimation to derive region-specific movement indices across multiple body segments. Group differences were examined statistically, and supervised machine learning models were applied to evaluate classification performance based on the movement features.
    Results: Children in the hyperactivity-risk group exhibited significantly greater movement across several body regions, particularly distal upper- and lower-limb segments, compared to non-hyperactive peers. Machine learning analyses indicated promising classification performance, with the support vector machine achieving an accuracy of 84.31%, sensitivity of 80.0%, specificity of 87.10%, and an area under the receiver operating characteristic curve (AUC) of 0.83. Permutation-based feature importance analyses highlighted distal limb movements as the most informative features for classification.
    Conclusions: These findings suggest that POV-based, vision-driven assessment provides a promising, objective, and ecologically valid approach for quantifying hyperactivity-related motor behavior in preschool children. While not intended as a standalone diagnostic tool, this low-burden framework may serve as a valuable complement to existing screening practices and support early identification efforts in educational settings.
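The region-specific movement indices described in this abstract can be sketched in a few lines. The study derives them with MediaPipe pose estimation; the snippet below assumes landmark coordinates have already been extracted per frame (a hypothetical downstream representation) and uses a simple mean frame-to-frame displacement per body region as an illustrative stand-in, not the authors' exact formula.

```python
# Sketch: a movement index for one body region from pose-landmark trajectories.
# Assumption: each frame is a dict mapping landmark id -> (x, y) normalized
# coordinates, as a MediaPipe-style pipeline might emit after extraction.
import math

def movement_index(frames, landmark_ids):
    """Mean per-step displacement, averaged over the landmarks of one region."""
    if len(frames) < 2:
        return 0.0
    total, steps = 0.0, 0
    for prev, cur in zip(frames, frames[1:]):
        for lid in landmark_ids:
            (x0, y0), (x1, y1) = prev[lid], cur[lid]
            total += math.hypot(x1 - x0, y1 - y0)  # Euclidean displacement
            steps += 1
    return total / steps

# Toy trajectory: one landmark moving 0.1 units per frame along x.
frames = [{0: (0.1 * t, 0.5)} for t in range(4)]
print(round(movement_index(frames, [0]), 6))  # 0.1
```

Comparing such indices across body segments (e.g. wrist vs. trunk landmark groups) is one plausible way to operationalize the distal-limb findings reported above.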
    Article
    A New Approach in Autism Diagnosis: Evaluating Natural Interaction Using Point of View (POV) Glasses
    (Elsevier, 2026) Kayis, Hakan; Celik, Murat; Gedizlioglu, Cinar; Kayis, Elif; Aydemir, Cumhur; Hatipoglu, Arda; Ozbaran, Burcu
    This study introduces an AI-assisted method based on examiner-worn Point of View (POV) glasses and computer vision analysis to provide objective behavioral data for the diagnosis of Autism Spectrum Disorder (ASD). The study included 29 children with ASD and 27 children without ASD, aged between 17 and 36 months. During semi-structured naturalistic interactions, the examiner wore POV glasses equipped with a scene camera that captured the child's face from an eye-level perspective, preserving ecological validity. Behavioral parameters, including facial expressions, approximate social gaze (operationalized as the child's eye orientation toward the POV camera), and head mobility, were extracted using OpenFace and MediaPipe and subsequently analyzed with machine learning techniques. Statistical analyses revealed that total social gaze duration, longest social gaze, social smiling, number of responses to name, response latency, response duration, social responsiveness, and head movements along the z-axis had p-values <= 0.05, while head movements on the x- and y-axes, total head movement, and rapid head movements had p-values > 0.05. The classification model developed using decision trees and the AdaBoost algorithm demonstrated high performance, achieving an accuracy of 91.07% and a sensitivity of 89.65%. These findings support the clinical applicability of examiner-worn POV recordings for early ASD detection and highlight their potential to complement traditional, subjective assessment methods.
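Two of the gaze parameters named in this abstract, total social gaze duration and longest social gaze, can be sketched from per-frame detections. The snippet assumes a hypothetical boolean flag per frame ("gaze oriented toward the POV camera"), which is one plausible downstream form of the OpenFace/MediaPipe output, not the study's actual data format.

```python
# Sketch: total and longest social-gaze duration from per-frame gaze flags.
# Assumption: gaze_flags[i] is True when the child's gaze is oriented toward
# the POV camera in frame i; durations are converted to seconds via fps.

def gaze_metrics(gaze_flags, fps=30.0):
    """Return (total_gaze_seconds, longest_gaze_seconds)."""
    total = longest = run = 0
    for on_camera in gaze_flags:
        if on_camera:
            run += 1
            total += 1
            longest = max(longest, run)
        else:
            run = 0  # gaze episode ended
    return total / fps, longest / fps

flags = [True] * 60 + [False] * 30 + [True] * 90  # two episodes at 30 fps
print(gaze_metrics(flags))  # (5.0, 3.0)
```

Per-episode counts and latencies (e.g. response latency after a name call) could be derived from the same run-length structure.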
    Article
    A Novel Approach to Depression Detection Using POV Glasses and Machine Learning for Multimodal Analysis
    (Frontiers Media SA, 2025) Kayis, Hakan; Celik, Murat; Kardes, Vildan Cakir; Karabulut, Hatice Aysima; Ozkan, Ezgi; Gedizlioglu, Cinar; Atasoy, Nuray
    Background: Major depressive disorder (MDD) remains challenging to diagnose due to its reliance on subjective interviews and self-reports. Objective, technology-driven methods are increasingly needed to support clinical decision-making. Wearable point-of-view (POV) glasses, which capture both visual and auditory streams, may offer a novel solution for multimodal behavioral analysis.
    Objective: This study investigated whether features extracted from POV glasses, analyzed with machine learning, can differentiate individuals with MDD from healthy controls (HCs).
    Methods: We studied 44 MDD patients and 41 age- and sex-matched HCs (18-55 years). During semi-structured interviews, POV glasses recorded video and audio data. Visual features included gaze distribution, smiling duration, eye-blink frequency, and head movements; speech features included response latency, silence ratio, and word count. Recursive feature elimination was applied. Multiple classifiers were evaluated, and the primary model, ExtraTrees, was assessed using leave-one-out cross-validation.
    Results: After Bonferroni correction, smiling duration, center-gaze duration, and happy-face duration showed significant group differences. The multimodal classifier achieved an accuracy of 84.7%, sensitivity of 90.9%, specificity of 78%, and an F1 score of 86%.
    Conclusions: POV glasses combined with machine learning successfully captured multimodal behavioral markers distinguishing MDD from controls. This low-burden, wearable approach demonstrates promise as an objective adjunct to psychiatric assessment. Future studies should evaluate its generalizability in larger, more diverse populations and real-world clinical settings.
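The performance figures reported in these abstracts (accuracy, sensitivity, specificity, F1) all derive from a binary confusion matrix. The sketch below computes them from labels and predictions; the classifiers themselves (ExtraTrees with leave-one-out cross-validation, AdaBoost, SVM) are out of scope here, and the toy labels are invented for illustration.

```python
# Sketch: binary classification metrics from a confusion matrix,
# with label 1 = patient/risk group and label 0 = control group.

def binary_metrics(y_true, y_pred):
    """Return (accuracy, sensitivity, specificity, f1) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall on patient class
    specificity = tn / (tn + fp) if tn + fp else 0.0  # recall on control class
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, sensitivity, specificity, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # invented toy labels
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75)
```

Reporting sensitivity and specificity separately, as all three studies do, matters in screening contexts where false negatives and false positives carry different costs.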