Decision Support System Design for Determining Exemplary Lecturer using Simple Additive Weighting (SAW)
Abstract
STMIK AKAKOM has 67 lecturers. Each semester, an evaluation is held to assess the lecturers’ performance and maintain institutional quality. The evaluation is based on assessments from students and from the Department Quality-Assurance Team (DQAT). Until now, the results of these evaluations were left unprocessed. This study aimed to identify high-performing and low-performing lecturers by combining the evaluation results from students and the DQAT using the Simple Additive Weighting (SAW) method. Seventeen criteria with different weight values were used. The results showed that the method ranked the lecturers according to their respective performance. The best-performing lecturer was L40, with a Vi (preference value) of 0.95; the second was L41 with a Vi of 0.92; and the third was L25 with a Vi of 0.91, while the lowest-performing lecturer was L67 with a Vi of 0.72.
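The SAW procedure described above (normalize each criterion score, multiply by the criterion weight, and sum to obtain each lecturer’s preference value Vi) can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the criteria, weights, and scores below are hypothetical (the study uses 17 weighted criteria and 67 lecturers), and all criteria are treated as benefit criteria, normalized by the column maximum.

```python
def saw_rank(scores, weights):
    """Rank alternatives with Simple Additive Weighting (benefit criteria only).

    scores  -- dict mapping alternative name -> list of raw criterion scores
    weights -- list of criterion weights (assumed to sum to 1)
    Returns a list of (name, Vi) pairs sorted from best to worst.
    """
    n = len(weights)
    # Column maxima, used to normalize each benefit criterion to [0, 1].
    maxima = [max(row[j] for row in scores.values()) for j in range(n)]
    vi = {}
    for name, row in scores.items():
        # Weighted sum of normalized scores gives the preference value Vi.
        vi[name] = sum(w * (x / m) for w, x, m in zip(weights, row, maxima))
    return sorted(vi.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: 3 lecturers, 3 criteria (the paper uses 17 criteria).
weights = [0.5, 0.3, 0.2]
scores = {"L1": [80, 70, 90], "L2": [90, 85, 75], "L3": [70, 95, 80]}
ranking = saw_rank(scores, weights)
```

Because the weights sum to 1 and each normalized score lies in [0, 1], every Vi falls in [0, 1], which matches the scale of the reported results (0.72 to 0.95).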
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.