Yaycı, Zeynep Övgü

Name Variants: Yayci, Zeynep Ovgu
Job Title:
Email Address: zeynep.yayci@ieu.edu.tr
Main Affiliation: 05.01. Aerospace Engineering
Status: Current Staff
Website:
Scopus Author ID:
Turkish CoHE Profile ID:
Google Scholar ID:
WoS Researcher ID:

Sustainable Development Goals: data not available
Documents: 3
Citations: 7
h-index: 2

Documents: 2
Citations: 2

Scholarly Output: 3
Articles: 0
Views / Downloads: 0/0
Supervised MSc Theses: 0
Supervised PhD Theses: 0
WoS Citation Count: 2
Scopus Citation Count: 7
WoS h-index: 1
Scopus h-index: 2
Patents: 0
Projects: 0
WoS Citations per Publication: 0.67
Scopus Citations per Publication: 2.33
Open Access Source: 0
Supervised Theses: 0

Journal (publication count):
32nd European Signal Processing Conference (EUSIPCO), Aug 26-30, 2024, Lyon, France: 1
Proceedings - 2021 Innovations in Intelligent Systems and Applications Conference, ASYU 2021: 1
Proceedings - International Conference on Image Processing, ICIP: 1

Scopus Quartile Distribution: (chart data not available)

GCRIS Competency Cloud: (chart data not available)

Scholarly Output Search Results

Now showing 1 - 3 of 3
  • Conference Object
    Citation - WoS: 2
    Citation - Scopus: 2
    Microscale Image Enhancement via PCA and Well-Exposedness Maps
    (IEEE Computer Society, 2022) Yayci Z.O.; Dura U.; Kaya Z.B.; Cetin A.E.; Türkan, Mehmet
    Restricted access to high-end microscopes, microscale cameras, and high-tech imaging lenses creates high demand for low-cost microscopes. However, low-cost microscopes suffer from many image-capture and quality limitations due to inadequate instrumentation. This study aims at overcoming illumination and contrast problems, color aberration issues, and blur and noise corruption in low-cost microscopes at high image magnification rates. The three color channels of the input image are enhanced via principal component analysis and well-exposedness feature maps by means of cross-channel histogram matching, Laplacian and non-local means filtering. The proposed approach produces sharper outputs with better color and illumination correction than existing methods in the literature. © 2022 IEEE.
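The well-exposedness feature maps referenced in this abstract are, in the common exposure-fusion formulation (Mertens et al.), a Gaussian of pixel intensity centered on mid-gray. The paper's exact formulation is not given here, so the following Python sketch only illustrates that standard version, with the conventional sigma = 0.2 assumed:

```python
import numpy as np

def well_exposedness_map(image, sigma=0.2):
    """Per-pixel well-exposedness weight (Mertens-style).

    `image` is a float array with intensities in [0, 1]. Pixels near
    mid-gray (0.5) score close to 1; over- and under-exposed pixels
    score near 0. The sigma=0.2 default follows the usual
    exposure-fusion convention and is an assumption here.
    """
    return np.exp(-((image - 0.5) ** 2) / (2 * sigma ** 2))

# Example on a synthetic intensity ramp from black to white
weights = well_exposedness_map(np.linspace(0.0, 1.0, 5))
```

Pixels near mid-intensity receive weights close to 1 while saturated and dark pixels are suppressed, which is what makes such maps usable as guides for per-channel enhancement.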
  • Conference Object
    Citation - Scopus: 4
    Audio-Visual Speech Recognition Using 3D Convolutional Neural Networks
    (Institute of Electrical and Electronics Engineers Inc., 2021) Belhan C.; Fikirdanis D.; Cimen O.; Pasinli P.; Akgun Z.; Yayci Z.O.; Türkan, Mehmet
    Lip reading, described as extracting speech information from observable movements of the face, particularly the jaws, lips, tongue and teeth, is a very challenging task. It is a beneficial skill that helps people comprehend and interpret the content of others' speech when audio or expression alone is insufficient. Even experts require a certain level of experience and an understanding of visual expressions to interpret spoken words, and this may still not be efficient enough. Nowadays, lip sequences can be converted into expressive words and phrases with the aid of computers, so the use of neural networks (NNs) in this field has increased rapidly. The main contribution of this study is to use short-time Fourier transformed (STFT) audio data as an extra image input and to employ 3D convolutional NNs (CNNs) for feature extraction. This generates features that account for change across consecutive frames and makes use of visual and auditory data together. After testing several experimental scenarios, the proposed method shows strong promise for further development in this research domain. © 2021 IEEE.
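The abstract's idea of feeding STFT audio data to a CNN as an image-like input can be sketched with a minimal short-time Fourier transform. This is an illustrative implementation of the general technique, not the paper's code; the frame length, hop size, and Hann windowing here are assumptions:

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.

    Returns a 2-D array (frames x frequency bins) that can be treated
    as an image-like input to a CNN, mirroring the use of STFT audio
    features as an extra image channel.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# Example: a 1-second 440 Hz tone sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
spec = stft_magnitude(np.sin(2 * np.pi * 440 * t))
```

With a 256-sample frame at 16 kHz, each frequency bin spans 62.5 Hz, so the tone's energy concentrates near bin 7 (440 / 62.5 ≈ 7.04).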
  • Conference Object
    Citation - Scopus: 1
    Sparse Features for Multi-Exposure Fusion
    (IEEE, 2024) Yayci, Zeynep Ovgu; Turkan, Mehmet
    High dynamic range (HDR) capture and display devices can approximately mimic human perception of the gamut of colors and fine details. However, their relatively high cost may currently make them unaffordable for many consumers. Multi-exposure image fusion (MEF) offers a cost-effective software-based solution to this problem. By fusing low dynamic range (LDR) images with different exposure levels, MEF aims to create HDR-like images for LDR display devices that are high in quality but low in cost. This study proposes a novel MEF weight-map extraction method using sparse signal representations and k-means clustering. A preprocessing stage extracts initial masks from over- and underexposed images for weight-map extraction, and the proposed clustering model allows the overall algorithm to achieve good fusion performance regardless of the number of images in the input exposure sequence. After a final multi-scale pyramidal fusion, the resulting HDR-like images show not only visually pleasing but also statistically significant results compared to state-of-the-art methods in the literature.
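The weight-map fusion that MEF methods build on can be illustrated with a naive single-scale version. The paper's actual pipeline (sparse features, k-means clustering, multi-scale pyramidal fusion) is considerably more elaborate; this Python sketch only shows the per-pixel weight normalization and blending step that any weight-map approach shares:

```python
import numpy as np

def fuse_exposures(images, weights, eps=1e-12):
    """Naive single-scale multi-exposure fusion.

    `images` and `weights` are lists of same-shape float arrays.
    Weights are normalized per pixel so the fused output is a convex
    combination of the inputs; `eps` guards against all-zero weights.
    A practical MEF method would replace this flat blend with a
    multi-scale (e.g. Laplacian pyramid) fusion to avoid seams.
    """
    w = np.stack(weights)
    w = w / (w.sum(axis=0) + eps)
    return (w * np.stack(images)).sum(axis=0)

# Example: blending an underexposed and an overexposed flat image
under = np.full((4, 4), 0.2)
over = np.full((4, 4), 0.8)
fused = fuse_exposures([under, over], [np.ones((4, 4)), np.ones((4, 4))])
```

With equal weights the fused image is the plain average of the two exposures; in a real MEF pipeline the weight maps would instead favor the well-exposed regions of each input.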