Assessing Suitable Word Embedding Model for Malay Language through Intrinsic Evaluation

Phua, Y.-T. and Yew, K.-H. and Foong, O.-M. and Teow, M.Y.-W. (2020) Assessing Suitable Word Embedding Model for Malay Language through Intrinsic Evaluation. In: UNSPECIFIED.

Official URL: https://www.scopus.com/inward/record.uri?eid=2-s2....

Abstract

Word embeddings were created to form meaningful representations of words in an efficient manner, an essential step in most Natural Language Processing tasks. In this paper, different Malay word embedding models were trained on a Malay text corpus: Word2Vec and fastText, each with both CBOW and Skip-gram architectures, as well as GloVe. The trained models were then assessed through intrinsic evaluation on semantic similarity and word analogy tasks. In the experiments, the custom-trained fastText Skip-gram model achieved a Pearson correlation coefficient of 0.5509 on the word similarity evaluation and an accuracy of 36.80 on the word analogy evaluation. These results outperformed the pre-trained fastText model, which achieved only 0.477 and 22.96 on the word similarity and word analogy evaluations, respectively. The results also show that there is still room for improvement in both the pre-processing pipeline and the evaluation datasets. © 2020 IEEE.
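As an illustration of the intrinsic evaluation workflow described in the abstract, the sketch below trains a fastText Skip-gram model and scores it on word similarity (Pearson correlation) and word analogies using gensim. This is not the authors' code: the corpus file, evaluation files, and hyperparameters (vector_size, window, min_count, epochs) are illustrative assumptions.

    # A minimal sketch, assuming gensim 4.x; file names and hyperparameters are hypothetical.
    from gensim.models import FastText
    from gensim.models.word2vec import LineSentence

    # Hypothetical pre-processed Malay corpus, one tokenised sentence per line.
    corpus = LineSentence("malay_corpus.txt")

    # Train a fastText Skip-gram model; fastText's subword n-grams can help with
    # morphologically rich text such as affixed Malay words.
    model = FastText(
        sentences=corpus,
        vector_size=300,
        window=5,
        min_count=5,
        sg=1,        # 1 = Skip-gram, 0 = CBOW
        epochs=5,
    )

    # Word similarity: Pearson correlation against a human-rated word-pair file
    # (tab-separated: word1, word2, similarity score). The file name is a placeholder.
    (pearson, _), (spearman, _), oov = model.wv.evaluate_word_pairs("malay_wordsim.tsv")
    print(f"Pearson: {pearson:.4f}  Spearman: {spearman:.4f}  OOV: {oov:.1f}%")

    # Word analogies: accuracy on a questions-words style analogy file ("a : b :: c : d").
    # The file name is a placeholder.
    accuracy, _sections = model.wv.evaluate_word_analogies("malay_analogies.txt")
    print(f"Analogy accuracy: {accuracy * 100:.2f}")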

Item Type: Conference or Workshop Item (UNSPECIFIED)
Citations: Cited by 0 (Scopus)
Uncontrolled Keywords: Correlation methods; Embeddings; Intelligent computing; Semantics; Gram models; Malay languages; Malay texts; Natural language processing; Pearson correlation coefficients; Pre-processing; Semantic similarity; Word similarity; Natural language processing systems
Depositing User: Ms Sharifah Fahimah Saiyed Yeop
Date Deposited: 25 Mar 2022 03:05
Last Modified: 25 Mar 2022 03:05
URI: http://scholars.utp.edu.my/id/eprint/29870
