Proj-14: Lossless Data Compression Hardware Architecture

All About Electronics





    Brief Introduction:

    LZW (Lempel–Ziv–Welch) and AH (Adaptive Huffman) are two of the most widely used algorithms for lossless data compression, but both require a large amount of memory for hardware implementation. This project presents a two-stage hardware architecture that combines a parallel-dictionary LZW (PDLZW) algorithm in the first stage with an Adaptive Huffman algorithm in the second. In this architecture, an ordered list replaces the tree-based structure normally used in the AH algorithm, which speeds up the compression data rate. The resulting architecture not only outperforms the AH algorithm alone at roughly one-fourth of the hardware cost, but is also competitive with the performance of the LZW algorithm (compress). In addition, both the compression and decompression rates of the proposed architecture are higher than those of the AH algorithm, even when the latter is realized in software.

    Three variants of the adaptive Huffman algorithm were developed, named AHAT, AHFB and AHDB. Their compression ratios were measured and compared against an Adaptive Huffman reference implemented in C; of the three, AHDB gives the best performance. The PDLZW algorithm is then enhanced by cascading it with the AH algorithm: the two-stage scheme places PDLZW in the first stage and AHDB in the second to improve the compression ratio, and the results are compared against the LZW (compress) and AH algorithms. Cascading with the adaptive stage increases the amount of data reduction by more than 5%, which means that a smaller dictionary size can be used in the PDLZW stage when memory is limited, with the AH second stage making up the resulting loss in compression.
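As a rough software illustration of the dictionary-based first stage, the sketch below implements plain (single-dictionary) LZW compression in C, the language the document's reference comparison was written in. It is not the PDLZW parallel-dictionary hardware architecture itself; the dictionary capacity, the ASCII-only initial dictionary, and the linear lookup are simplifying assumptions for clarity.

```c
/* Minimal single-dictionary LZW compressor sketch (software model only;
 * DICT_SIZE, MAX_STR and the linear search are illustrative choices). */
#include <stdio.h>
#include <string.h>

#define DICT_SIZE 4096   /* assumed dictionary capacity */
#define MAX_STR   64     /* assumed maximum phrase length */

static char dict[DICT_SIZE][MAX_STR];
static int  dict_len = 0;

/* Return the code of string `s` in the dictionary, or -1 if absent. */
static int lookup(const char *s) {
    for (int i = 0; i < dict_len; i++)
        if (strcmp(dict[i], s) == 0) return i;
    return -1;
}

/* Compress `in`, emitting one dictionary code per longest match into
 * `out`; returns the number of codes emitted. */
int lzw_compress(const char *in, int *out) {
    /* Initialize the dictionary with all single symbols (ASCII here). */
    dict_len = 0;
    for (int c = 0; c < 128; c++) {
        dict[dict_len][0] = (char)c;
        dict[dict_len][1] = '\0';
        dict_len++;
    }

    char w[MAX_STR] = "";   /* current match */
    int n = 0;
    for (const char *p = in; *p; p++) {
        char wc[MAX_STR];
        snprintf(wc, sizeof wc, "%s%c", w, *p);
        if (lookup(wc) >= 0) {
            strcpy(w, wc);              /* extend the current match  */
        } else {
            out[n++] = lookup(w);       /* emit code of longest match */
            if (dict_len < DICT_SIZE)
                strcpy(dict[dict_len++], wc);  /* learn new phrase  */
            w[0] = *p; w[1] = '\0';
        }
    }
    if (w[0]) out[n++] = lookup(w);     /* flush the final match */
    return n;
}
```

For example, compressing "ABABAB" emits the codes 65, 66, 128, 128: the first two are the initial single-symbol codes for 'A' and 'B', and 128 is the phrase "AB" learned during compression.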
    The proposed two-stage compression/decompression processors have been coded in Verilog HDL, simulated in Xilinx ISE 9.1, and synthesized with Synopsys.
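The ordered-list idea used in place of the tree-based Adaptive Huffman structure can be sketched in software as follows: symbols are kept sorted by descending frequency, and each occurrence bubbles its symbol toward the front with swaps, so frequent symbols occupy low list positions (and would receive short codewords). This models only the general principle; the exact AHDB update rules are not reproduced here, and all names are illustrative.

```c
/* Sketch of an ordered-list adaptive-frequency model (illustrative only,
 * not the exact AHDB scheme from the underlying work). */
#define NSYM 256

static unsigned char order[NSYM];  /* symbol stored at each list index */
static int pos[NSYM];              /* list index of each symbol        */
static unsigned freq[NSYM];       /* occurrence counts                */

/* Start with symbols in numeric order, all counts zero. */
void ah_init(void) {
    for (int i = 0; i < NSYM; i++) {
        order[i] = (unsigned char)i;
        pos[i] = i;
        freq[i] = 0;
    }
}

/* Account for one occurrence of `sym`; returns its list index BEFORE
 * the update (the value a coder would encode for this occurrence). */
int ah_update(unsigned char sym) {
    int start = pos[sym];
    int idx = start;
    freq[sym]++;
    /* Swap toward the front past any symbol with a lower count. */
    while (idx > 0 && freq[order[idx - 1]] < freq[sym]) {
        unsigned char other = order[idx - 1];
        order[idx - 1] = sym;  order[idx] = other;
        pos[sym] = idx - 1;    pos[other] = idx;
        idx--;
    }
    return start;
}
```

Because an update touches only a run of adjacent swaps rather than a tree rebalance, this kind of list maintenance maps more directly onto hardware, which is the motivation the text gives for using it in the AH stage.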

    Hardware Details:

    • AHDB Processor
    • FPGA

    Software Details:

    • Verilog HDL
    • Xilinx ISE 9.1
    • Synopsys
    • C language
