Published: 26 September 2010 (last updated: 27 September 2010)
Abstract

The core of every microprocessor and digital signal processor is its data path. At the heart of the data-path and addressing units, in turn, are arithmetic units, which include adders. Parallel-prefix adders offer a highly efficient solution to the binary addition problem and are well suited to VLSI implementation.
This paper presents the design and comparison of high-speed parallel-prefix adders such as the Kogge-Stone, Brent-Kung, Sklansky, and Kogge-Stone Ling adders. It is found that the Kogge-Stone Ling adder performs more efficiently than the other adders. Here, Kogge-Stone Ling adders and ripple adders are incorporated into a lattice filter to demonstrate their functionality. The operating frequency of the lattice filter increases when the parallel-prefix Kogge-Stone Ling adder is used instead of ripple adders, since the combinational delay of the Kogge-Stone Ling adder is lower.
Further, different tree adder structures are designed and compared using both CMOS logic and transmission-gate logic. Using these adders, unsigned and signed comparators are designed as an application example and compared in terms of performance parameters such as area, delay, and power consumption. The design and simulations use a 65 nm CMOS design library.

1. Introduction

Binary addition is one of the most primitive and most commonly used operations in computer arithmetic.
A large variety of algorithms and implementations have been proposed for binary addition [1-3]. Parallel-prefix adder tree structures such as Kogge-Stone [4], Sklansky [5], Brent-Kung [6], Han-Carlson [7], and Kogge-Stone using Ling adders [8, 9] can be used to obtain higher operating speeds. Parallel-prefix adders are suitable for VLSI implementation since they rely on simple cells and maintain regular connections between them.
VLSI integer adders are critical elements in general-purpose and digital signal processors, since they are employed in arithmetic-logic units, floating-point arithmetic data paths, and address generation units.
Moreover, digital signal processing makes extensive use of addition in the implementation of digital filters, either directly in hardware or in specialized digital signal processors (DSPs).
In integer addition, any decrease in delay directly translates to an increase in throughput. In the nanometer range, it is very important to develop addition algorithms that provide high performance while reducing power consumption.
The requirements on the adder are that it should be primarily fast and secondarily efficient in terms of power consumption and chip area. For wide adders, the delay of carry look-ahead adders becomes dominated by the delay of passing the carry through the look-ahead stages.
This delay can be reduced by looking ahead across the look-ahead blocks. In general, we can construct a multilevel tree of look-ahead structures to achieve a delay that grows logarithmically with the word length. Such adders are variously referred to as tree adders or parallel-prefix adders. Many parallel-prefix networks have been described in the literature, especially in the context of addition.
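The logarithmic-depth prefix idea can be sketched as a bit-level software model (an illustrative Python sketch of the Kogge-Stone network, not the hardware netlist; the function name is ours):

```python
def kogge_stone_add(a, b, width=16):
    """Bit-level model of a Kogge-Stone parallel-prefix adder.

    The prefix operator (G, P) o (G', P') = (G | (P & G'), P & P')
    is applied with spans 1, 2, 4, ..., so group-generate signals
    reach every bit position in about log2(width) levels.
    """
    mask = (1 << width) - 1
    a &= mask
    b &= mask
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]        # bit generate
    p = [((a >> i) | (b >> i)) & 1 for i in range(width)]        # bit propagate
    half_sum = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]
    span = 1
    while span < width:                  # one iteration per prefix level
        ng, np_ = g[:], p[:]
        for i in range(span, width):     # each bit combines with bit i - span
            ng[i] = g[i] | (p[i] & g[i - span])
            np_[i] = p[i] & p[i - span]
        g, p = ng, np_
        span <<= 1
    carry_in = [0] + g[:-1]              # carry into bit i = group generate of bits 0..i-1
    return sum((half_sum[i] ^ carry_in[i]) << i for i in range(width))
```

Each `while` iteration corresponds to one level of the tree, which is why the delay grows with the logarithm of the width rather than linearly as in a ripple adder.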
The basic components of adders can be designed in many ways. Initially, the combinational delay and functionality can be verified using HDLs, and optimization can be applied at the architecture level. At the second level, optimization can also be achieved by using specific logic families in the design. In this paper, adder components are designed, analyzed, and compared using CMOS gates and transmission gates with a 65 nm technology file.
This is a deep-submicron technology file. Several variants of the carry look-ahead equations, such as Ling carries [9], have been presented that simplify carry computation and can lead to faster structures.
Most high-speed adders depend on the previous carry to generate the present sum. Ling adders [8, 9], on the other hand, use Ling carry and propagate bits to calculate the sum bit.
As a result, the dependency on the previous bit addition is reduced; that is, the ripple effect is lowered. This paper provides a comparative study of the implementation of the above-mentioned high-speed adders. By designing and implementing these adders, we found that power consumption and area were reduced drastically when the gates were implemented using transmission gates, without compromising speed.
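The Ling transformation can be illustrated with a small Python model. With bit generate g_i = a_i·b_i, transmit t_i = a_i + b_i, and half-sum p_i = a_i ⊕ b_i, the Ling pseudo-carry obeys H_i = g_i + t_{i-1}·H_{i-1} and the true carry is recovered as c_i = t_i·H_i. The sketch below evaluates the recurrence sequentially for clarity; in hardware the H_i would themselves be computed by a parallel-prefix network (the function name is ours):

```python
def ling_add(a, b, width=16):
    """Model of Ling-carry addition: the pseudo-carry H_i depends on
    g_i and the previous stage only through t_{i-1} * H_{i-1}."""
    mask = (1 << width) - 1
    a &= mask
    b &= mask
    g = [((a >> i) & (b >> i)) & 1 for i in range(width)]  # generate
    t = [((a >> i) | (b >> i)) & 1 for i in range(width)]  # transmit
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(width)]  # half sum
    s = 0
    h_prev, t_prev = 0, 0                 # H_{-1} = 0, t_{-1} = 0
    for i in range(width):
        c_in = t_prev & h_prev            # c_{i-1} = t_{i-1} * H_{i-1}
        s |= (p[i] ^ c_in) << i           # sum bit uses the recovered carry
        h_prev = g[i] | (t_prev & h_prev) # H_i = g_i + t_{i-1} * H_{i-1}
        t_prev = t[i]
    return s
```

Note that H_i simplifies the per-stage logic relative to the conventional c_i = g_i + t_i·c_{i-1}, which is the source of the speedup claimed for Ling-based prefix trees.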
Later, as an application example, a magnitude comparator is designed using the Kogge-Stone Ling adder to verify its efficiency.

2. Adders

2.1. Carry Look-Ahead Adders

Consider the n-bit addition of two numbers A and B.
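The carry look-ahead principle can be made concrete with a 4-bit sketch: every carry is written directly in terms of the generate and propagate signals, so no carry has to ripple through earlier full adders (a minimal illustrative model; the function name is ours):

```python
def cla4(a, b, c0=0):
    """4-bit carry look-ahead adder: all carries are flattened
    sum-of-products expressions in g, p, and the carry-in c0."""
    g = [((a >> i) & (b >> i)) & 1 for i in range(4)]  # generate
    p = [((a >> i) | (b >> i)) & 1 for i in range(4)]  # propagate
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
    carries = [c0, c1, c2, c3]
    s = 0
    for i in range(4):
        s |= ((((a >> i) ^ (b >> i)) & 1) ^ carries[i]) << i
    return s, c4  # 4-bit sum and carry-out
```

The widening product terms in c3 and c4 are exactly the fan-in growth that makes flat look-ahead impractical for wide adders and motivates the tree structures discussed above.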
Design of High-Speed Adders for Efficient Digital Design Blocks
Enhancements

Enhancements to the original implementation include increasing the radix and the sparsity of the adder. The radix of the adder refers to how many results from the previous level of computation are used to generate the next one. Increasing the radix increases the power and delay of each stage but reduces the number of required stages. In the so-called sparse Kogge-Stone adder (SKA), the sparsity of the adder refers to how many carry bits are generated by the carry tree. Generating every carry bit is called sparsity-1, whereas generating every other bit is sparsity-2 and every fourth bit is sparsity-4. The resulting carries are then used as the carry-in inputs for much shorter ripple-carry adders or some other adder design, which generates the final sum bits.
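The sparsity idea can be sketched as follows: the prefix tree produces only the carries into each block boundary, and short ripple-carry blocks complete the sum (an illustrative Python model for sparsity-4; the function name is ours):

```python
def sparse_ks_add(a, b, width=16, sparsity=4):
    """Sparse Kogge-Stone model: the prefix tree operates on block-level
    (G, P) pairs and yields only the carry into each block; short
    ripple-carry adders then produce the sum bits inside each block."""
    mask = (1 << width) - 1
    a &= mask
    b &= mask
    nblk = width // sparsity
    bg, bp = [], []                       # block generate / propagate
    for blk in range(nblk):
        g_acc, p_acc = 0, 1
        for i in range(blk * sparsity, (blk + 1) * sparsity):
            gi = ((a >> i) & (b >> i)) & 1
            pi = ((a >> i) | (b >> i)) & 1
            g_acc = gi | (pi & g_acc)
            p_acc &= pi
        bg.append(g_acc)
        bp.append(p_acc)
    G, P, span = bg[:], bp[:], 1          # Kogge-Stone prefix over blocks
    while span < nblk:
        nG, nP = G[:], P[:]
        for i in range(span, nblk):
            nG[i] = G[i] | (P[i] & G[i - span])
            nP[i] = P[i] & P[i - span]
        G, P = nG, nP
        span <<= 1
    cin = [0] + G[:nblk - 1]              # carry into each block
    s = 0
    for blk in range(nblk):               # short ripple adders finish the sum
        c = cin[blk]
        for i in range(blk * sparsity, (blk + 1) * sparsity):
            ai, bi = (a >> i) & 1, (b >> i) & 1
            s |= (ai ^ bi ^ c) << i
            c = (ai & bi) | (c & (ai ^ bi))
    return s
```

With sparsity-4, the prefix tree here has width/4 leaves instead of width, trading a small ripple delay inside each block for a much smaller carry tree.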