
An End-to-End Trainable Capsule Network for Image-Based Character Recognition and its Application to Video Subtitle Recognition


Affiliations
1 Department of Computer Science, Biskra University, Algeria
     



Abstract

Text presented in videos carries important information for a wide range of vision-based applications. The key modules for extracting this information are text detection followed by text recognition, which are the subject of our study. In this paper, we propose an innovative end-to-end subtitle detection and recognition system for videos. Our system consists of three modules. Video subtitles are first detected by a novel image operator based on our blob extraction method. The detected subtitle is then segmented into single characters by a simple technique applied to the binary image and passed to the recognition module. Lastly, a capsule neural network (CapsNet) trained on the Chars74K dataset is adopted for recognizing the characters. The proposed detection method is robust and performs well on video subtitle detection, as evaluated on a dataset we constructed. In addition, CapsNet demonstrates its validity and effectiveness for video subtitle recognition. To the best of our knowledge, this is the first work in which capsule networks have been empirically investigated for character recognition of video subtitles.
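The segmentation step described above, isolating individual character blobs on the binarized subtitle region before recognition, could be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: the function name `extract_blobs` and the choice of 4-connected flood fill are assumptions.

```python
from collections import deque

def extract_blobs(binary):
    """Label 4-connected foreground blobs in a binary image and return
    their bounding boxes (x0, y0, x1, y1), ordered left to right.
    Each blob is treated as one character candidate (hypothetical sketch)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # breadth-first flood fill from this seed pixel
                q = deque([(y, x)])
                seen[y][x] = True
                y0 = y1 = y
                x0 = x1 = x
                while q:
                    cy, cx = q.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    # order character candidates as they appear along the subtitle line
    return sorted(boxes, key=lambda b: b[0])
```

Each bounding box would then be cropped, resized, and passed to the CapsNet classifier as a single-character image.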

Keywords

Capsule Networks, Convolutional Neural Networks, Subtitle Text Detection, Text Recognition.



Authors

Ahmed Tibermacine
Department of Computer Science, Biskra University, Algeria
Selmi Mohamed Amine
Department of Computer Science, Biskra University, Algeria

