Lang, N.; Zincir, I.; Zincir-Heywood, N. (2021)
Future Technologies Conference (FTC 2020), 5 November 2020 through 6 November 2020
ISSN: 2194-5357
DOI: https://doi.org/10.1007/978-3-030-63128-4_52
Handle: https://hdl.handle.net/20.500.14365/3369
Date available: 2023-06-16

Abstract: In many real-world applications, a large number of words can introduce noisy and redundant information, which degrades the performance of text classification tasks. Feature selection techniques that eliminate uninformative words have therefore been actively studied. In several information-theoretic approaches, such features are conventionally obtained by maximizing relevance to the class while minimizing redundancy among the selected features. This is an NP-hard problem and remains a challenge. In this work, we propose an alternative feature selection strategy for binary representation data, with the purpose of providing a theoretical lower bound for finding a near-optimal solution under the Maximum Relevance-Minimum Redundancy criterion. The proposed strategy achieves a theoretical approximation ratio of 1/2 with a naive greedy search. It is validated by empirical experiments on five publicly available datasets, namely Cora, Citeseer, WebKB, SMS Spam and Spambase. Its effectiveness on binary text classification tasks is shown in comparison with well-known filter feature selection methods and mutual information-based methods.
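The greedy Maximum Relevance-Minimum Redundancy selection described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's exact strategy: the function names and the mean-redundancy scoring (relevance minus average pairwise mutual information with already-selected features) are assumptions.

```python
# Illustrative greedy mRMR feature selection on binary data (not the paper's
# exact algorithm): pick, at each step, the feature whose mutual information
# with the label minus its mean mutual information with already-selected
# features is highest.
from math import log2
from collections import Counter

def mutual_information(x, y):
    """Mutual information between two discrete sequences, in bits."""
    n = len(x)
    joint = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in joint.items():
        pxy = c / n
        mi += pxy * log2(pxy / ((px[a] / n) * (py[b] / n)))
    return mi

def greedy_mrmr(X, y, k):
    """X: list of samples (each a list of binary feature values); y: labels.
    Returns indices of k greedily selected features."""
    n_feat = len(X[0])
    cols = [[row[j] for row in X] for j in range(n_feat)]
    relevance = [mutual_information(col, y) for col in cols]
    selected = []
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for j in range(n_feat):
            if j in selected:
                continue
            # Mean redundancy against the features chosen so far.
            red = (sum(mutual_information(cols[j], cols[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            score = relevance[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy example: feature 0 matches the label, features 1 and 2 are uninformative,
# so the label-aligned feature (index 0) is picked first.
X = [[0, 1, 0], [0, 0, 0], [1, 1, 0], [1, 0, 0]]
y = [0, 0, 1, 1]
print(greedy_mrmr(X, y, 2))
```

The 1/2 approximation ratio mentioned in the abstract is a property of the authors' strategy; the naive greedy loop above only illustrates the selection mechanics it builds on.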
© 2021, Springer Nature Switzerland AG.
Language: English
Access: info:eu-repo/semantics/closedAccess
Keywords: Binary representation; Feature selection; Text classification; Classification (of information); Information theory; NP-hard; Redundancy; Text processing; Binary representations; Empirical experiments; Feature selection methods; Information-theoretic approach; Maximum relevance minimum redundancies; Near-optimal solutions; Selection techniques; Theoretical approximations; Feature extraction
Title: Binary Text Representation for Feature Selection
Type: Conference Object
DOI: 10.1007/978-3-030-63128-4_52
Scopus ID: 2-s2.0-85096500961