Cross-Attention Fusion of Visual and Geometric Features for Large Vocabulary Arabic Lipreading

Samar Daou1, Ahmed Rekik1,2, Achraf Ben-Hamadou1,2, Abdelaziz Kallel1,2
1Laboratory of Signals, systeMs, aRtificial Intelligence and neTworkS, 2Digital Research Centre of Sfax, Tunisia

A selection of speakers from the LRW-AR dataset showcasing diverse facial expressions and lip movements.

Abstract

Lipreading involves using visual data to recognize spoken words by analyzing the movements of the lips and surrounding area. It is an active research topic with many potential applications, such as human-machine interaction and enhancing audio-based speech recognition. Recent deep learning-based works aim to integrate visual features extracted from the mouth region with landmark points on the lip contours. However, a simple combination method such as concatenation may not be the most effective way to obtain an optimal feature vector. To address this challenge, we first propose a cross-attention fusion-based approach for large-vocabulary Arabic lipreading that predicts spoken words in videos. Our method leverages cross-attention networks to efficiently integrate visual and geometric features computed on the mouth region. Second, we introduce the first large-scale Arabic lipreading dataset (LRW-AR), containing 20,000 videos for 100 word classes, uttered by 36 speakers. Experimental results on the LRW-AR and ArabicVisual databases show the effectiveness and robustness of the proposed approach in recognizing Arabic words. Our work provides insights into the feasibility and effectiveness of applying lipreading techniques to the Arabic language, opening the door to further research in this field.

LRW-AR Lipreading Dataset

We collected a naturally distributed, large-scale lipreading dataset for the Arabic language by scraping videos of people speaking on news TV programs from the YouTube platform.

Data processing was done via an automated pipeline.

Using this pipeline, we obtained 100 classes, each corresponding to a different word.

The LRW-AR dataset involves a total of 36 speakers.

Each word was uttered 200 times, resulting in a total of 20,000 video samples.
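
For concreteness, below is a minimal PyTorch loading sketch. It assumes an LRW-style directory layout (one folder per word class with the clip files inside); the actual on-disk format of LRW-AR may differ.

from pathlib import Path

from torch.utils.data import Dataset
import torchvision.io as tvio


class LRWARDataset(Dataset):
    """Minimal loader for an assumed LRW-style layout: <root>/<word>/<clip>.mp4."""

    def __init__(self, root):
        self.classes = sorted(p.name for p in Path(root).iterdir() if p.is_dir())
        self.samples = [(clip, label)
                        for label, word in enumerate(self.classes)
                        for clip in sorted((Path(root) / word).glob("*.mp4"))]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        # read_video returns (video [T, H, W, C], audio, info); we keep the frames.
        frames, _, _ = tvio.read_video(str(path), pts_unit="sec")
        return frames, label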

Method Overview

Our approach uses a fusion network, FusionNet, built on cross-attention to integrate visual and geometric features more effectively, yielding more accurate Arabic word prediction in lipreading. Additionally, LRW-AR, the first large-scale Arabic lipreading dataset, demonstrates the feasibility and effectiveness of lipreading techniques for the Arabic language. The pipeline comprises the following components:

Video preprocessing: crops the mouth region from the input video sequence and extracts the corresponding facial landmarks.
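
As an illustration of this step, here is a sketch that uses MediaPipe FaceMesh to locate lip landmarks and crop a square mouth region. The detector, margin, and crop size are assumptions for the example, not necessarily the choices made in the paper.

import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh
# Indices of the landmarks touched by the lip connections in FaceMesh.
LIP_IDX = sorted({i for edge in mp_face_mesh.FACEMESH_LIPS for i in edge})


def crop_mouth(frame_bgr, margin=0.3, size=96):
    """Return (mouth_crop, lip_points) for one frame, or None if no face is found."""
    # In a real pipeline, create FaceMesh once and reuse it across frames.
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as fm:
        res = fm.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return None
    h, w = frame_bgr.shape[:2]
    lms = res.multi_face_landmarks[0].landmark
    pts = np.array([(lms[i].x * w, lms[i].y * h) for i in LIP_IDX])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    pad = margin * max(x1 - x0, y1 - y0)
    x0, y0 = int(max(x0 - pad, 0)), int(max(y0 - pad, 0))
    x1, y1 = int(min(x1 + pad, w)), int(min(y1 + pad, h))
    crop = cv2.resize(frame_bgr[y0:y1, x0:x1], (size, size))
    return crop, pts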

Visual-feature network: extracts relevant visual information from the preprocessed mouth-region frames.
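
A common front-end in word-level lipreading pairs a 3D-convolutional stem with a per-frame 2D ResNet-18. The sketch below follows that pattern as an assumption about what such a network could look like; it is not a reproduction of the paper's exact architecture.

import torch.nn as nn
import torchvision.models as models


class VisualFrontend(nn.Module):
    """3D-conv stem + per-frame ResNet-18 trunk -> one feature vector per frame."""

    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        resnet = models.resnet18(weights=None)
        # Drop the RGB stem and classifier; keep the residual stages and pooling.
        self.trunk = nn.Sequential(*list(resnet.children())[4:-1])

    def forward(self, x):                      # x: (B, 1, T, H, W) grayscale crops
        x = self.stem(x)                       # (B, 64, T, H', W')
        b, c, t, h, w = x.shape
        x = x.transpose(1, 2).reshape(b * t, c, h, w)
        x = self.trunk(x).flatten(1)           # (B*T, 512)
        return x.reshape(b, t, -1)             # (B, T, 512)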

Geometric-feature network: encodes the lip-contour variation delivered by the facial landmarks.
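
One simple way to realize such an encoder is a small per-frame MLP over mouth-centered landmark coordinates, as sketched below; the number of lip points and the feature dimension are illustrative assumptions.

import torch.nn as nn


class GeometricEncoder(nn.Module):
    """Encodes per-frame lip landmarks (here 20 (x, y) points) into feature vectors."""

    def __init__(self, n_points=20, feat_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_points * 2, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, feat_dim),
        )

    def forward(self, lms):                    # lms: (B, T, n_points, 2)
        b, t = lms.shape[:2]
        return self.mlp(lms.reshape(b, t, -1)) # (B, T, feat_dim)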

FusionNet: fuses the encoded visual and geometric features via cross-attention.
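
The exact FusionNet design is described in the paper; the minimal sketch below only illustrates the general idea of bidirectional cross-attention with PyTorch's nn.MultiheadAttention, where each feature stream queries the other before the results are merged.

import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuses visual and geometric sequences with bidirectional cross-attention."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.v2g = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.g2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, vis, geo):               # both: (B, T, dim)
        # Visual queries attend over geometric keys/values, and vice versa.
        v, _ = self.v2g(query=vis, key=geo, value=geo)
        g, _ = self.g2v(query=geo, key=vis, value=vis)
        return self.proj(torch.cat([v, g], dim=-1))   # (B, T, dim)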

Sequence back-end network: a multi-scale temporal convolutional network (MS-TCN) that encodes temporal variation and classifies the input video sequence.
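
As a rough illustration of an MS-TCN-style back-end, the sketch below runs parallel temporal convolutions with several kernel sizes, averages over time, and classifies into the 100 word classes; the block count and kernel sizes are assumptions, not the paper's configuration.

import torch
import torch.nn as nn


class MultiScaleTCNBlock(nn.Module):
    """Parallel temporal convolutions with different kernel sizes, concatenated."""

    def __init__(self, dim=512, kernel_sizes=(3, 5, 7, 9)):
        super().__init__()
        assert dim % len(kernel_sizes) == 0
        branch_dim = dim // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(dim, branch_dim, k, padding=k // 2),
                nn.BatchNorm1d(branch_dim),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        )

    def forward(self, x):                      # x: (B, T, dim)
        x = x.transpose(1, 2)                  # Conv1d expects (B, dim, T)
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return y.transpose(1, 2)               # (B, T, dim)


class Backend(nn.Module):
    """Temporal blocks, mean pooling over time, then a word classifier."""

    def __init__(self, dim=512, n_classes=100, n_blocks=2):
        super().__init__()
        self.blocks = nn.Sequential(*[MultiScaleTCNBlock(dim) for _ in range(n_blocks)])
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, x):                      # x: (B, T, dim) fused features
        x = self.blocks(x)
        return self.fc(x.mean(dim=1))          # (B, n_classes) word logits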

BibTeX


@misc{daou2024crossattention,
      title={Cross-Attention Fusion of Visual and Geometric Features for Large Vocabulary Arabic Lipreading},
      author={Samar Daou and Ahmed Rekik and Achraf Ben-Hamadou and Abdelaziz Kallel},
      year={2024},
      eprint={2402.11520},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}