Blended Multi-Linguistic System using Transformer Neural Network for Word Sense Disambiguation
Abstract
Word sense disambiguation (WSD) in multilingual contexts remains a significant challenge in natural language processing (NLP), primarily due to the inherent ambiguity of natural language. Words often have multiple meanings, and the task of WSD is to identify the correct sense of a word in a given context. Despite extensive research in this area, WSD continues to pose significant challenges, especially in multilingual settings where linguistic diversity adds further complexity. This paper introduces a novel multi-linguistic system using Transformer neural networks to improve WSD across multiple languages. By combining contextualized word embeddings from pre-trained multilingual models with a fine-tuned Transformer architecture, the system captures semantic nuances effectively. Evaluation on standard WSD benchmarks shows significant accuracy improvements over traditional and state-of-the-art methods, with robust performance across languages, including zero-shot scenarios. The paper highlights the benefits of a multi-linguistic approach in enhancing model interpretability, generalization, and inclusivity for more versatile NLP applications. Specifically, we propose an integrated multilingual transformer neural network (IMTNN) that blends two Transformer-based neural networks, one for translation and one for word-sense processing. The network comprises layers of nodes, each of which can perform Transformer-based processing independently, which helps reduce complexity. We use resources such as SemCor, IMS, and WordNet to calculate collocation scores for different words and their relations, which improves accuracy and speeds up the retrieval of related results.
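The abstract does not specify how the collocation score is computed from the corpora. A common choice for scoring word collocations is pointwise mutual information (PMI) over co-occurrence counts; the sketch below illustrates that idea on a toy corpus (the function name, toy data, and the PMI formulation are illustrative assumptions, not the paper's method):

```python
# Hypothetical sketch: PMI-based collocation scoring over sentence-level
# co-occurrences. The toy corpus stands in for resources like SemCor;
# the exact scoring formula used by the paper is not given in the abstract.
import math
from collections import Counter
from itertools import combinations

def collocation_scores(sentences):
    """Return PMI for each unordered word pair co-occurring in a sentence."""
    word_counts = Counter()   # sentences containing each word
    pair_counts = Counter()   # sentences containing each word pair
    n_sents = len(sentences)
    for sent in sentences:
        words = set(sent)     # count each word once per sentence
        word_counts.update(words)
        pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))
    scores = {}
    for pair, c in pair_counts.items():
        w1, w2 = tuple(pair)
        p_pair = c / n_sents
        p1 = word_counts[w1] / n_sents
        p2 = word_counts[w2] / n_sents
        # PMI > 0: the pair co-occurs more often than chance predicts.
        scores[pair] = math.log2(p_pair / (p1 * p2))
    return scores

# Toy corpus: the ambiguous word "bank" in two different sense contexts.
corpus = [
    ["river", "bank", "water"],
    ["river", "bank", "fish"],
    ["money", "bank", "loan"],
]
scores = collocation_scores(corpus)
```

On this toy data, context pairs such as ("river", "water") receive a positive score, while pairs involving the ubiquitous word "bank" score near zero, illustrating how collocation statistics can signal which context words are informative for sense selection.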
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.