Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14365/2815
Title: A Proposal for Corpus Normalization
Authors: Karaoglan, Bahar
Kisla, Tarik
Dincer, Bekir Taner
Metin, Senem Kumova
Keywords: language model performance
corpus comparison
cross entropy
Publisher: IEEE
Abstract: To compare work done in natural language processing, the corpora used in different studies should be standardized/normalized. Entropy, when used as a language model performance metric, depends only on signal information; for language, however, semantic information should also be taken into account. Here we propose a metric that exploits Zipf's and Heaps' power laws to represent semantic information in terms of signal information and estimates the amount of information expected from a corpus of a given length in words. The proposed metric is tested on 20 sub-corpora of different lengths drawn from a major Turkish corpus (METU). While the entropy changed with the length of the corpus, the value of the proposed metric stayed almost constant, which supports our claim about normalizing the corpus.
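Since the full text is embargoed and this record does not spell out the proposed metric, the sketch below is only an assumption-laden illustration, in Python, of how the ingredients named in the abstract (empirical unigram entropy as signal information, plus Zipf's and Heaps' power-law fits) might be computed on sub-corpora of increasing length; every function name, the corpus path, and the sub-corpus fractions are hypothetical.

```python
# Illustrative sketch only: the paper's actual normalization metric is not
# given in this record.  It estimates the quantities named in the abstract
# (unigram entropy, Zipf and Heaps power-law parameters) on nested
# sub-corpora of different lengths.  Names and paths are hypothetical.
import math
from collections import Counter


def unigram_entropy(tokens):
    """Empirical unigram entropy in bits per word (signal information)."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def _ols(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b


def heaps_fit(tokens, steps=20):
    """Fit Heaps' law V(n) ~ K * n**beta in log-log space on nested prefixes."""
    xs, ys, seen = [], [], set()
    step = max(1, len(tokens) // steps)
    for i, tok in enumerate(tokens, 1):
        seen.add(tok)
        if i % step == 0:
            xs.append(math.log(i))
            ys.append(math.log(len(seen)))
    a, beta = _ols(xs, ys)
    return math.exp(a), beta  # K, beta


def zipf_exponent(tokens):
    """Slope of log(frequency) against log(rank), i.e. the Zipf exponent."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    return _ols(xs, ys)[1]


if __name__ == "__main__":
    # Mirrors the experiment described in the abstract: sub-corpora of
    # different lengths drawn from one corpus (the path is a placeholder).
    tokens = open("metu_corpus.txt", encoding="utf-8").read().split()
    for frac in (0.25, 0.5, 1.0):
        sub = tokens[: int(len(tokens) * frac)]
        print(len(sub), unigram_entropy(sub), heaps_fit(sub), zipf_exponent(sub))
```

In this reading, entropy is expected to drift with corpus length while the power-law parameters grow more stable, which is the kind of length-invariance the abstract attributes to the proposed metric; how the paper actually combines these quantities is not stated here.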
Description: 21st Signal Processing and Communications Applications Conference (SIU) -- APR 24-26, 2013 -- CYPRUS
URI: https://hdl.handle.net/20.500.14365/2815
ISBN: 978-1-4673-5563-6
978-1-4673-5562-9
ISSN: 2165-0608
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection

Files in This Item:
File: 2815.pdf (323.29 kB, Adobe PDF), embargoed until 2030-01-01 (available on request)
Page view(s): 86 (checked on Sep 30, 2024)
Download(s): 6 (checked on Sep 30, 2024)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.