Comparative Study of Fine-tuned BERT-based Models and RNN-based Models. Case Study: Arabic Fake News Detection

Aljamel, A.
Khalil, H.
Aburawi, Y.

Abstract

Large Language Models (LLMs) are advanced language models with exceptional learning capabilities. Pre-trained LLMs have achieved state-of-the-art performance on multiple NLP tasks. BERT is one such model: it can be easily fine-tuned with one or more additional output layers to build state-of-the-art models for a wide range of downstream NLP tasks. Several BERT models have been pre-trained specifically for the Arabic language and have shown strong performance. To investigate the performance of fine-tuned Arabic BERT-based models on downstream Arabic NLP tasks, four Arabic BERT-based models were selected and fine-tuned for Arabic fake news detection. These models were compared with five RNN-based architectures trained on the same task. The deep neural network hyper-parameters of these models were then tuned to fit the Arabic fake news detection dataset and improve their results. For the RNN-based models, two embedding techniques were applied: embeddings trained locally on the dataset and pre-trained Arabic Word2Vec embeddings.
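As an illustration of the fine-tuning setup described above, the sketch below attaches a fresh binary classification layer to a pre-trained Arabic BERT checkpoint using the Hugging Face transformers library. The checkpoint name, hyper-parameters, and toy inputs are assumptions for illustration only; the abstract does not specify which four Arabic BERT models or which settings were used.

    # Minimal sketch (not the paper's exact setup): fine-tuning an Arabic
    # BERT checkpoint with one additional output layer for binary
    # fake-news classification. Checkpoint name, learning rate, and
    # placeholder texts are illustrative assumptions.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed Arabic BERT checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    # num_labels=2 adds a randomly initialised linear layer on top of the
    # pooled BERT output -- the "additional output layer" that is fine-tuned.
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    texts = ["placeholder real news text", "placeholder fake news text"]
    labels = torch.tensor([0, 1])  # 0 = real, 1 = fake

    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=128, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    for _ in range(3):  # a few illustrative epochs over a toy batch
        out = model(**batch, labels=labels)  # cross-entropy loss computed internally
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()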
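The two embedding strategies applied to the RNN-based models can likewise be sketched, here with a BiLSTM classifier in Keras. The vocabulary size, dimensions, Word2Vec file path, and word_index mapping are hypothetical placeholders; the abstract does not name the pre-trained Arabic Word2Vec model used (AraVec is one publicly available option).

    # Minimal sketch of one RNN-based baseline with the two embedding
    # strategies from the abstract. All names and sizes are illustrative.
    import numpy as np
    from gensim.models import KeyedVectors
    from tensorflow.keras import layers, models, initializers

    VOCAB_SIZE, EMB_DIM = 20000, 300

    def build_model(embedding_layer):
        # BiLSTM over token embeddings; sigmoid output for fake vs. real.
        return models.Sequential([
            embedding_layer,
            layers.Bidirectional(layers.LSTM(64)),
            layers.Dense(1, activation="sigmoid"),
        ])

    # (a) Local dataset embeddings: a trainable layer initialised randomly
    # and learned jointly with the classifier on the fake-news dataset.
    local_emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)

    # (b) Pre-trained Arabic Word2Vec embeddings: vectors copied into a
    # frozen layer. The path and word_index are hypothetical placeholders.
    w2v = KeyedVectors.load("arabic_word2vec.kv")
    word_index = {}  # token -> row id, produced by the project's tokenizer
    matrix = np.zeros((VOCAB_SIZE, EMB_DIM))
    for word, i in word_index.items():
        if i < VOCAB_SIZE and word in w2v:
            matrix[i] = w2v[word]
    pretrained_emb = layers.Embedding(
        VOCAB_SIZE, EMB_DIM,
        embeddings_initializer=initializers.Constant(matrix),
        trainable=False)

    model = build_model(local_emb)  # or build_model(pretrained_emb)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])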


Article Details

How to Cite
Aljamel, A., Khalil, H., & Aburawi, Y. (2024). Comparative Study of Fine-tuned BERT-based Models and RNN-based Models. Case Study: Arabic Fake News Detection. The International Journal of Engineering & Information Technology (IJEIT), 12(1), 56–64. https://doi.org/10.36602/ijeit.v12i1.477
Section
Articles