
Binary to BCD Verilog Code



Binary Coded Decimal (BCD) is a binary encoding of decimal numbers that represents each decimal digit with a fixed-width binary code. For example, 42 is represented in BCD by the binary codes for 4 and 2 (0100 0010). The BCD format is common in electronic systems where numeric digits are displayed, as well as in systems where the rounding and conversion errors introduced by binary floating-point representation and arithmetic are undesirable.


We will focus on designing a conversion circuit that converts a BCD-formatted number to a binary-formatted number. I chose to detail this direction of conversion because binary to BCD conversion circuits are easily found with a quick web search.








We will be designing for the Basys 2 FPGA board, which has 8 input switches. We can use the 8 switches to encode two BCD digits of 4 bits each. We will therefore concern ourselves with designing a circuit to convert a 2-digit BCD number to a 7-bit binary representation (2^7 = 128 > 99, the largest 2-digit BCD number we can input).


Next we will consider an iterative algorithm that will also convert our 2 BCD digits to a 7-bit binary representation. We will have two 4-bit registers for bcd1 and bcd0, as well as a 7-bit binary answer register, bin.


We define the symbolic states for the FSM operation to be idle, op (for operation), and done. Internal signals and their next-state logic are declared for the state register (state), the converted binary value (bin), the iteration counter (n), and the BCD inputs (bcd1 and bcd0).
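
Below is a minimal sketch of such an FSM, using reverse double dabble: treat {bcd1, bcd0, bin} as one shift register, shift it right once per iteration, and subtract 3 from any BCD digit that is 8 or more after the shift. The clock, reset, start, and done_tick signals are assumptions added for illustration; this is not the exact code the article builds.

module bcd2bin (
    input clk,
    input reset,
    input start,              // one-cycle pulse to begin a conversion
    input [3:0] bcd1, bcd0,   // tens and units digits
    output done_tick,
    output [6:0] bin          // result, 0-99 fits in 7 bits
);
    // symbolic states
    localparam [1:0] IDLE = 2'b00, OP = 2'b01, DONE = 2'b10;

    reg [1:0] state_reg, state_next;
    reg [3:0] bcd1_reg, bcd1_next, bcd0_reg, bcd0_next;
    reg [6:0] bin_reg, bin_next;
    reg [2:0] n_reg, n_next;  // counts the 7 shift iterations

    // state and data registers
    always @(posedge clk, posedge reset)
        if (reset) begin
            state_reg <= IDLE;
            bcd1_reg <= 0; bcd0_reg <= 0;
            bin_reg <= 0;  n_reg <= 0;
        end else begin
            state_reg <= state_next;
            bcd1_reg <= bcd1_next; bcd0_reg <= bcd0_next;
            bin_reg <= bin_next;   n_reg <= n_next;
        end

    // next-state logic
    always @* begin
        state_next = state_reg;
        bcd1_next = bcd1_reg; bcd0_next = bcd0_reg;
        bin_next = bin_reg;   n_next = n_reg;
        case (state_reg)
            IDLE:
                if (start) begin
                    bcd1_next = bcd1; bcd0_next = bcd0;
                    bin_next = 0; n_next = 0;
                    state_next = OP;
                end
            OP: begin
                // shift the whole {bcd1, bcd0, bin} register right one bit
                {bcd1_next, bcd0_next, bin_next} = {bcd1_reg, bcd0_reg, bin_reg} >> 1;
                // then subtract 3 from any BCD digit that is 8 or more
                if (bcd1_next >= 8) bcd1_next = bcd1_next - 3;
                if (bcd0_next >= 8) bcd0_next = bcd0_next - 3;
                n_next = n_reg + 1;
                if (n_reg == 6)   // seven shifts in total
                    state_next = DONE;
            end
            DONE:
                state_next = IDLE;
        endcase
    end

    assign bin = bin_reg;
    assign done_tick = (state_reg == DONE);
endmodule

After seven shifts the two BCD digits have been emptied into bin, which then holds the binary value 0-99.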


This module takes an input binary vector and converts it to Binary Coded Decimal (BCD). Binary coded decimal represents each decimal digit with four bits. This can be used to convert a binary number to a decimal number that can be displayed on a 7-Segment LED display. The algorithm used in the code below is known as Double Dabble.


Binary coded decimal uses four bits per digit to represent a decimal number. For example, the number 159 takes 12 bits to represent in BCD (three digits of 4 bits each). This is useful for applications that interface to 7-Segment LEDs, among other things, because each 7-Segment display is treated individually (each gets 4 bits of the 12-bit number in the example above). The FPGA designer needs to know how to drive each digit, and uses BCD to do this. The BCD encoding is: 0 = 0000, 1 = 0001, 2 = 0010, 3 = 0011, 4 = 0100, 5 = 0101, 6 = 0110, 7 = 0111, 8 = 1000, 9 = 1001.
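
For reference, a single BCD digit can drive one display through a decoder like the hypothetical sketch below. The segment order {a,b,c,d,e,f,g} and active-high polarity are assumptions; the Basys 2 display, for instance, expects active-low segment signals, so invert accordingly.

module bcd_to_7seg (
    input  [3:0] bcd,
    output reg [6:0] seg   // {a,b,c,d,e,f,g}, active-high assumed
);
    always @* begin
        case (bcd)
            4'd0: seg = 7'b1111110;
            4'd1: seg = 7'b0110000;
            4'd2: seg = 7'b1101101;
            4'd3: seg = 7'b1111001;
            4'd4: seg = 7'b0110011;
            4'd5: seg = 7'b1011011;
            4'd6: seg = 7'b1011111;
            4'd7: seg = 7'b1110000;
            4'd8: seg = 7'b1111111;
            4'd9: seg = 7'b1111011;
            default: seg = 7'b0000000;  // blank for non-BCD codes
        endcase
    end
endmodule

Each BCD digit from the converter feeds one instance of this decoder.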


The Double Dabble algorithm is described in detail on the linked Wikipedia page, but basically it starts from the input binary number and shifts it, one bit at a time, into the BCD output vector. It then looks at each 4-bit BCD digit independently: if any digit is greater than 4 (that is, 5 or more), that digit is incremented by 3. This loop continues for each bit in the input binary vector. See the image below for a visual depiction of how the Finite State Machine is written.


The algorithm for an 8-bit input is:

1. Shift the binary number left one bit.
2. If 8 shifts have taken place, the BCD number is in the Hundreds, Tens, and Units columns.
3. If the binary value in any of the BCD columns is 5 or greater, add 3 to that value in that BCD column.
4. Go to 1.

[Worked example tables: Example 1: Convert hex E to BCD. Example 2: Convert hex FF to BCD. Truth table for the Add-3 module.]

Here is a Verilog module for this truth table.

module add3(in, out);
  input  [3:0] in;
  output [3:0] out;
  reg [3:0] out;

  // pass 0-4 through unchanged; map 5-9 to the value plus 3
  always @(in)
    case (in)
      4'b0000: out = 4'b0000;
      4'b0001: out = 4'b0001;
      4'b0010: out = 4'b0010;
      4'b0011: out = 4'b0011;
      4'b0100: out = 4'b0100;
      4'b0101: out = 4'b1000;
      4'b0110: out = 4'b1001;
      4'b0111: out = 4'b1010;
      4'b1000: out = 4'b1011;
      4'b1001: out = 4'b1100;
      default: out = 4'b0000;
    endcase
endmodule
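
To see how these add3 blocks combine, here is a sketch of an 8-bit binary-to-BCD converter obtained by unrolling the four steps above. The module and wire names (binary_to_bcd_8bit, w1-w7, y1-y7) are mine, not the tutorial's.

module binary_to_bcd_8bit (A, ones, tens, hundreds);
  input  [7:0] A;
  output [3:0] ones, tens;
  output [1:0] hundreds;

  // w* are the add3 inputs, y* the add3 outputs, one pair per adjust step
  wire [3:0] w1, w2, w3, w4, w5, w6, w7;
  wire [3:0] y1, y2, y3, y4, y5, y6, y7;

  // ones-column adjust chain
  assign w1 = {1'b0, A[7:5]};
  assign w2 = {y1[2:0], A[4]};
  assign w3 = {y2[2:0], A[3]};
  assign w4 = {y3[2:0], A[2]};
  assign w6 = {y4[2:0], A[1]};
  // tens-column adjust chain
  assign w5 = {1'b0, y1[3], y2[3], y3[3]};
  assign w7 = {y5[2:0], y4[3]};

  add3 u1(w1, y1);
  add3 u2(w2, y2);
  add3 u3(w3, y3);
  add3 u4(w4, y4);
  add3 u5(w5, y5);
  add3 u6(w6, y6);
  add3 u7(w7, y7);

  assign ones     = {y6[2:0], A[0]};
  assign tens     = {y7[2:0], y6[3]};
  assign hundreds = {y5[3], y7[3]};
endmodule

Each add3 performs one "add 3 if 5 or greater" check for one BCD column at one shift step; for an 8-bit input, seven of those checks can actually fire, which is why seven instances appear.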


I'm currently intending to modify some existing verilog code (the Abacus project at Digilent -- thanks for that.) In particular, the code that converts binary to BCD. It's based upon the dabble algorithm (easily looked up on Wiki.)


Note that up to N=3, there's no need for a "dabble" module block. Once 4-bit binary is reached, the first dabble module is required, and this works well up to 6-bit binary. At that point, transitioning to 7-bit, a new layer is required in the tree. (The reason has to do with the number of "S3" outputs, which cannot exceed 3 in the higher-order bits -- words are difficult here, but the pictures illustrate it better.) Then, when going from 9-bit to 10-bit binary, another transition takes place. This happens at every (N mod 3 = 0) to (N mod 3 = 1) boundary.


I can code up, easily, anything I want where I already know the value of N, a priori. And I may be able to adjust the code shown in the Abacus project by changing the "repeat(13)" to "repeat(N-3)" and then also varying the bit positions. But I think I will also need to modify the content within the begin/end block, as well, depending on the depth of the tree. So it starts getting complicated to work out.


I'm wondering if anyone already knows of a good example of a generalized approach to producing arbitrarily complex tree structures in verilog, where the number of bits, N, may have very significant impacts on the numbers of wires and modules, but where it can be parameterized just the same. In other words, an example where the number of input bits, N, could be arbitrarily specified, the number of needed (and varying) output bits would be automatically derived, the necessary tree structures would be instanced, and a top-level module could then ignore (or map) those wires onto whatever output pins are available.


Is there an approach within verilog that allows for this kind of generality with respect to tree structures? (It's trivial if all I'm dealing with is arrays and I don't need help there.) It's the tree structure stuff where the nesting depth varies that is giving me fits, right now. I'm honestly not entirely sure how to proceed (other than some long code block that I'll need to continue to modify in order to support larger and larger N when I feel the need for it) and my google skills appear to be coming up a little short, tonight. The subtle construction of wires is bothering me, but I have a hunch that there's something simple I'm missing in writing the code that once someone shows me I'll immediately feel dumb for not seeing right away. I won't mind a kick in the head, if that's needed.


Now, I already know how to do this without these optimizations. The left-to-right number of device columns is N-3. (Feel free to check me, using the diagrams below.) If I happen to already know (and I can compute it) the number of instantiations required for the column that is furthest to the right, then I can just make every column carry that many instances. They don't all need them, but that doesn't matter much; it still works okay. Then I just use repeat(N-3) to construct each column in sequence, with a module that generates the final column's needed number of instances (7, in the case of the 24-bit binary input). This will, in fact, work every time. I have to create a much larger bit array to start with (on the left side), with a lot more 0's in the upper bits, but the process "just works." I'm looking to work out how to make a tree that already does its own optimization for two reasons:


EDIT: I added a few 24-bit binary to 32-bit BCD schematics to illustrate where this is all headed. I'd like to be able to generate any and all of these from a single module, parameterized by N, where N >= 1. I wouldn't mind parameterizing both N and M (where M is the number of output bits), as I can compute M from N in the top level that instantiates the generic module. I'm just curious how I might implement such a generic, given N and M. I've also added yet another 24-bit binary to 32-bit BCD schematic to illustrate the addition of a backward-pruning method that recognizes output bits that must be 0 and can therefore generate a further-optimized final version. (This final version would be my end goal, after first figuring out how to generate the not-so-optimized, tree-structured version that precedes it.)
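
One way to get that kind of N-parameterized structure is Verilog-2001 generate loops: instead of hand-wiring each column, describe the whole unpruned rectangle of add3 blocks and let elaboration unroll it. The sketch below reuses the add3 module shown earlier; the module and signal names (bin2bcd_gen, st, adj) are placeholders, not taken from the Abacus code.

module bin2bcd_gen #(parameter N = 8) (
    input  [N-1:0] bin,
    output [4*((N+2)/3)-1:0] bcd
);
    localparam D = (N+2)/3;        // BCD digits needed: ceil(N/3)
    localparam W = 4*D;            // width of the BCD scratch

    // st holds N+1 snapshots of the W-bit scratch, flattened:
    // bits [W*k +: W] are the scratch after k input bits have been shifted in
    wire [W*(N+1)-1:0] st;
    assign st[0 +: W] = {W{1'b0}};

    genvar k, d;
    generate
        for (k = 0; k < N; k = k + 1) begin : step
            wire [W-1:0] adj;
            // adjust every digit (add 3 if it is 5 or more) before the next shift
            for (d = 0; d < D; d = d + 1) begin : dig
                add3 u_add3 (st[W*k + 4*d +: 4], adj[4*d +: 4]);
            end
            // shift the scratch left one bit and bring in the next input bit, MSB first
            assign st[W*(k+1) +: W] = {adj[W-2:0], bin[N-1-k]};
        end
    endgenerate

    assign bcd = st[W*N +: W];
endmodule

The add3 instances whose inputs are constant zeros are exactly the ones the hand-drawn trees prune away; most synthesis tools remove them through constant propagation, so this usually ends up close to the hand-optimized tree without any special-case wiring.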


We have been given a task to write an 8-bit binary to BCD converter in Verilog, using structural code, NOT behavioural. Is it possible to guide us on how we can create an 8-bit binary to BCD converter using two 4-bit converter instances?


Continue and implement a binary to BCD conversion - you could do this as a function which, because it is used in an initial block, shouldn't infer any hardware, assuming it is something the synthesizer can calculate.
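
A rough sketch of that idea for an 8-bit input is below. The names bin2bcd_lut and to_bcd are placeholders, and whether the initial-block initialisation survives synthesis is tool-dependent, although most FPGA flows accept it for ROM inference.

module bin2bcd_lut (
    input  [7:0] bin,
    output [11:0] bcd
);
    reg [11:0] lut [0:255];   // one 3-digit BCD entry per 8-bit input value

    // double dabble written as a function; called only from the initial
    // block below, so it is evaluated up front rather than inferring logic
    function [11:0] to_bcd (input [7:0] b);
        integer i;
        begin
            to_bcd = 0;
            for (i = 7; i >= 0; i = i - 1) begin
                // add 3 to any digit that is 5 or more, then shift in the next bit
                if (to_bcd[3:0]  >= 5) to_bcd[3:0]  = to_bcd[3:0]  + 3;
                if (to_bcd[7:4]  >= 5) to_bcd[7:4]  = to_bcd[7:4]  + 3;
                if (to_bcd[11:8] >= 5) to_bcd[11:8] = to_bcd[11:8] + 3;
                to_bcd = {to_bcd[10:0], b[i]};
            end
        end
    endfunction

    integer n;
    initial
        for (n = 0; n < 256; n = n + 1)
            lut[n] = to_bcd(n);

    assign bcd = lut[bin];
endmodule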


Alternatively, if you are feeling adventurous, you can create a script (e.g. TCL) which computes the values and generates either a Verilog file containing the code for the whole LUT, including initialisation, or a memory initialisation file which you can load into your LUT. This option is not really useful for this particular example, but it is great for more complex ones; perhaps save it for once you've gained more experience.


In computer science, the double dabble algorithm is used to convert binary numbers into binary-coded decimal (BCD) notation.[1][2] It is also known as the shift-and-add-3 algorithm, and can be implemented using a small number of gates in computer hardware, but at the expense of high latency.[3]


Suppose the original number to be converted is stored in a register that is n bits wide. Reserve a scratch space wide enough to hold both the original number and its BCD representation; n + 4*ceil(n/3) bits will be enough, since it takes a maximum of 4 bits to store each decimal digit. For example, an 8-bit number needs 8 + 4*ceil(8/3) = 8 + 12 = 20 bits of scratch space.


The algorithm then iterates n times. On each iteration, any BCD digit which is at least 5 (0101 in binary) is incremented by 3 (0011); then the entire scratch space is left-shifted one bit. The increment ensures that a value of 5, incremented and left-shifted, becomes 16 (10000), thus correctly "carrying" into the next BCD digit.
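
As a short worked trace (not from the article), converting the 4-bit value 1101 (decimal 13) with a scratch space laid out as tens | ones | binary:

  start              0000 0000 1101
  shift 1            0000 0001 1010
  shift 2            0000 0011 0100
  shift 3            0000 0110 1000
  adjust (6 >= 5)    0000 1001 1000
  shift 4            0001 0011 0000   -> tens = 1, ones = 3, i.e. 13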


The algorithm is fully reversible. By applying the reverse double dabble algorithm, a BCD number can be converted to binary. Reversing the algorithm is done by reversing its principal steps: the scratch space is shifted right (instead of left) once per iteration, and after each shift any BCD digit that is 8 or greater has 3 subtracted from it (instead of adding 3 to digits of 5 or greater before the shift).


In the 1960s, the term double dabble was also used for a different mental algorithm, used by programmers to convert a binary number to decimal. It is performed by reading the binary number from left to right, doubling if the next bit is zero, and doubling and adding one if the next bit is one.[5] For 11110011, the thought process would be: "one, three, seven, fifteen, thirty, sixty, one hundred twenty-one, two hundred forty-three", giving 243.

