In VHDL, if you want to increment by one a std_logic_vector that represents a binary number, I have come across a few options.
1) Use the type casts and datatype conversion functions from numeric_std: cast the std_logic_vector to signed or unsigned, convert that to an integer, add one to the integer, and then convert it back to a std_logic_vector the opposite way. A type-conversion chart is handy when trying to do this. (See the first sketch after this list.)
2) Work directly on the bits: check the value of the LSB. If it is a '0', make it a '1' and you are done. If it is a '1', clear it to '0' and carry the one into the next bit, repeating up the vector until a '0' is flipped to '1'. Ex: (for a 16-bit vector) clearing the LSB is vector(15 downto 1) & '0'; (see the second sketch below).
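Here is a minimal sketch of option 1, assuming the vector carries an unsigned value (the entity and signal names are mine, not part of the question):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity inc_conv is
      port (
        vec : in  std_logic_vector(15 downto 0);
        inc : out std_logic_vector(15 downto 0)
      );
    end entity;

    architecture rtl of inc_conv is
    begin
      -- cast to unsigned, convert to integer, add one,
      -- then convert back to std_logic_vector the opposite way
      inc <= std_logic_vector(to_unsigned(to_integer(unsigned(vec)) + 1, vec'length));
    end architecture;

Note that the integer detour can be skipped entirely: std_logic_vector(unsigned(vec) + 1) gives the same result, and also works for vectors wider than a 32-bit integer can hold.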
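And a sketch of option 2 with the carry propagation written out explicitly (again, the names are mine):

    library ieee;
    use ieee.std_logic_1164.all;

    entity inc_ripple is
      port (
        vec : in  std_logic_vector(15 downto 0);
        inc : out std_logic_vector(15 downto 0)
      );
    end entity;

    architecture rtl of inc_ripple is
    begin
      process(vec)
        variable carry  : std_logic;
        variable result : std_logic_vector(15 downto 0);
      begin
        carry := '1';  -- the "+1" enters at the LSB
        for i in 0 to 15 loop
          result(i) := vec(i) xor carry;  -- flip this bit while a carry is pending
          carry     := vec(i) and carry;  -- the carry ripples only through '1' bits
        end loop;
        inc <= result;
      end process;
    end architecture;

Written out this way, it is clear that the bit-level version is just a ripple-carry incrementer.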
In an FPGA, as compared to a microprocessor, physical hardware resources tend to be the limiting factor rather than processing time. There is always the risk that you could run out of logic.
So my real question is this: which of these implementations is "more expensive" in an FPGA, and why? And are the synthesis tools robust enough to reduce both to the same physical implementation?
"01001"
, for example? – Landwaiter