Design and Implementation of Parallel MLP-Back Propagation Neural Network Technique for Image Compression and Decompression on FPGA
Due to the increasing traffic caused by multimedia information and the digitized representation of images, image compression has become a necessity. The parallel multilayer perceptron (MLP) [1] back-propagation neural network architecture based on Distributive Arithmetic (DA) offers reduced latency and good throughput. The design is twice as fast as a simple Distributive Arithmetic architecture and is therefore suitable for applications that require high-speed image processing algorithms. It is implemented using IP cores and ChipScope on a Virtex-II Pro device (2vp30-ff896-7).
Keywords
Parallel Back Propagation Neural Network (MLP), Distributive Arithmetic (DA), Look-Up Table (LUT), FIR Filter.
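As context for the DA-based design summarized in the abstract, the following is a minimal Python sketch of how a Distributive Arithmetic (DA) multiply-accumulate replaces multipliers with a precomputed look-up table addressed bit-serially. The coefficients, bit width, and function names are illustrative assumptions only; this is not the paper's FPGA implementation.

# Conceptual sketch of a DA multiply-accumulate (illustrative, not the paper's design).
# Inputs are assumed to be unsigned `bits`-wide samples.

def build_da_lut(coeffs):
    # Precompute sum of the coefficients selected by every possible bit pattern
    # of the K inputs (2^K LUT entries).
    k = len(coeffs)
    return [sum(c for c, bit in zip(coeffs, ((pattern >> i) & 1 for i in range(k))) if bit)
            for pattern in range(1 << k)]

def da_mac(coeffs, inputs, bits=8):
    # Compute sum(c_k * x_k) without multipliers: one LUT lookup per bit plane,
    # with the result shifted by the bit weight and accumulated.
    lut = build_da_lut(coeffs)
    acc = 0
    for j in range(bits):
        pattern = 0
        for i, x in enumerate(inputs):
            pattern |= ((x >> j) & 1) << i   # gather bit j of each input
        acc += lut[pattern] << j             # weight the LUT entry by 2^j
    return acc

# Quick check against a direct dot product (hypothetical values).
coeffs = [3, -1, 5, 2]
inputs = [17, 200, 33, 64]
assert da_mac(coeffs, inputs) == sum(c * x for c, x in zip(coeffs, inputs))

In an MLP implemented this way, each neuron's weighted sum can be produced by such a LUT-plus-shift-accumulate structure; running several of these units in parallel is what gives the architecture its speed advantage over a single bit-serial DA unit.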