
Compact Vision Sensor Based Assistive Text Reading Technology for Visually Impaired Persons


Affiliations
1 Department of CSE, V.R.S. College of Engineering and Technology, Arasur, Villupuram, India
     




Authors

F. Destonius Dhiraviam
Department of CSE, V.R.S. College of Engineering and Technology, Arasur, Villupuram, India
J. Saranya
Department of CSE, V.R.S. College of Engineering and Technology, Arasur, Villupuram, India
E. Sivasankari
Department of CSE, V.R.S. College of Engineering and Technology, Arasur, Villupuram, India
S. Susikala
Department of CSE, V.R.S. College of Engineering and Technology, Arasur, Villupuram, India

Abstract


We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts moving object regions by a mixture-of-Gaussians-based background subtraction method. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize the text regions from the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an Adaboost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition (OCR) software. The recognized text codes are output to blind users as speech. Performance of the proposed text localization algorithm is quantitatively evaluated on the ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves state-of-the-art performance. The proof-of-concept prototype is also evaluated on a dataset collected with ten blind persons to assess the effectiveness of the system's hardware. We explore user interface issues and assess the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
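To make the processing chain concrete, the sketch below strings together the stages described above: motion-based ROI extraction while the user shakes the object, binarization, OCR, and speech output. It is a minimal illustration rather than the authors' implementation: OpenCV's MOG2 subtractor, Tesseract (via pytesseract), and the pyttsx3 speech engine are assumed stand-ins for the mixture-of-Gaussians background subtraction, off-the-shelf OCR, and audio output components, and the learned Adaboost text-localization stage is omitted for brevity.

```python
# Hedged sketch of the camera-based assistive reading pipeline.
# Stand-ins (assumptions, not the paper's components): OpenCV MOG2 for
# mixture-of-Gaussians background subtraction, Tesseract for OCR,
# pyttsx3 for speech output.
import cv2
import pytesseract
import pyttsx3


def extract_object_roi(frames):
    """Locate the hand-held object as the moving region while the user
    shakes it, using mixture-of-Gaussians background subtraction."""
    if not frames:
        return None
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=100, varThreshold=25, detectShadows=False)
    mask = None
    for frame in frames:
        mask = subtractor.apply(frame)
    # Keep the largest moving region as the region of interest (ROI).
    mask = cv2.medianBlur(mask, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return frames[-1][y:y + h, x:x + w]


def read_text_aloud(roi):
    """Binarize the ROI, run OCR, and speak any recognized text."""
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary)
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    return text
```

In a full system, the hypothetical `extract_object_roi` stage would be followed by the learned text-region localization before OCR, so that recognition runs only on regions containing text rather than on the whole object surface.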

Keywords


Assistive Devices, Blindness, Distribution of Edge Pixels, Hand-Held Objects, Optical Character Recognition (OCR), Stroke Orientation, Text Reading, Text Region Localization.