Comparative Study of Fine-Tuned BERT-Based Models and RNN-Based Models. Case Study: Arabic Fake News Detection

Authors

  • Aljamel, A. Misurata University
  • Khalil, H. Misurata University
  • Aburawi, Y. Misurata University

DOI:

https://doi.org/10.36602/ijeit.v12i1.477

Keywords:

Natural Language Processing, Deep Neural Network Learning, Large Language Models, Transformers, Word Embeddings, Hyperparameters

Abstract

Large Language Models (LLMs) are advanced language models with exceptional learning capabilities. Pre-trained LLMs have achieved the most significant performance on multiple NLP tasks. BERT is one of the LLMs that can be easily fine-tuned with one or more additional output layers to create a state-of-the-art model for a wide range of downstream NLP tasks. Several BERT models have been pre-trained specifically for the Arabic language and have shown high performance results. To investigate the performance of fine-tuned Arabic BERT-based models on downstream Arabic NLP tasks, four Arabic BERT-based models have been selected and fine-tuned for Arabic fake news detection. These Arabic BERT-based models have been compared with five RNN-based architecture models trained for the same downstream Arabic NLP task. Then, the Deep Neural Network Learning (DNNL) hyper-parameters of those models have been tuned to fit the Arabic fake news detection dataset and improve their performance results. For the RNN-based models, two embedding techniques have been applied: an embedding generator trained on the local dataset and a pre-trained Arabic Word2Vec embedding generator.
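
As a concrete illustration of the fine-tuning setup the abstract describes, the sketch below attaches a binary classification head to a pre-trained Arabic BERT checkpoint using the HuggingFace transformers library. This is a minimal sketch, not the paper's code: the checkpoint name, toy data, and hyper-parameter values are illustrative assumptions, since the paper's exact models and settings are not listed on this page.

```python
# Minimal sketch: fine-tuning an Arabic BERT checkpoint for binary fake
# news classification. Checkpoint name, toy data, and hyper-parameters
# are assumptions for illustration only.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed AraBERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2 attaches a fresh classification head -- the "additional
# output layer" placed on top of the pre-trained encoder.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)

# Placeholder data; a real corpus would carry Arabic news text with
# labels 0 = real, 1 = fake.
train_ds = Dataset.from_dict({"text": ["خبر صحيح", "خبر مزيف"],
                              "label": [0, 1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="arabert-fnd", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```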
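The abstract also contrasts two embedding strategies for the RNN-based models: embeddings generated from the local dataset versus pre-trained Arabic Word2Vec vectors. The Keras sketch below shows how pre-trained gensim Word2Vec vectors could initialize an Embedding layer feeding an LSTM classifier; the file path, layer sizes, and vocabulary handling are assumptions, not the paper's configuration.

```python
# Hedged sketch: an LSTM classifier whose Embedding layer is initialized
# from pre-trained Arabic Word2Vec vectors loaded with gensim. File path
# and layer sizes are illustrative assumptions.
import numpy as np
from gensim.models import KeyedVectors
from tensorflow.keras import Sequential, initializers, layers

wv = KeyedVectors.load("arabic_word2vec.kv")  # assumed pre-trained vectors
vocab = wv.index_to_key
dim = wv.vector_size

# Embedding matrix: row i holds the vector of token i
# (row 0 is reserved for padding).
emb = np.zeros((len(vocab) + 1, dim))
for i, word in enumerate(vocab, start=1):
    emb[i] = wv[word]

model = Sequential([
    layers.Embedding(input_dim=len(vocab) + 1, output_dim=dim,
                     embeddings_initializer=initializers.Constant(emb),
                     trainable=False, mask_zero=True),
    layers.LSTM(128),
    layers.Dense(1, activation="sigmoid"),  # binary: fake vs. real
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# For the local-dataset variant, drop the Constant initializer and set
# trainable=True so the embeddings are learned from scratch.
```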



Published

2024-08-11

How to Cite

Comparative Study of Fine-Tuned BERT-Based Models and RNN-Based Models. Case Study: Arabic Fake News Detection. (2024). The International Journal of Engineering & Information Technology (IJEIT), 12(1), 56-64. https://doi.org/10.36602/ijeit.v12i1.477
