FPGA programming using Verilog - case

Case statement in Verilog. I don't understand how this code works:
if (cpld_cs & cpld_we)
begin
    case (ifc_a27_31)
        `RSTCON1: begin
            sw_rst_r   <= ifc_ad0_7[0];
            ddr_rst_r  <= ifc_ad0_7[1];
            ec1_rst_r  <= ifc_ad0_7[2];
            ec2_rst_r  <= ifc_ad0_7[3];
            xgt1_rst_r <= ifc_ad0_7[6];
            xgt2_rst_r <= ifc_ad0_7[7];

Just look at the documentation. I am no Verilog expert, but from the documentation you can work out that
case(ifc_a27_31)
`RSTCON1: begin
is just a simple case item: if the value of ifc_a27_31 equals `RSTCON1, then the statements
sw_rst_r <= ifc_ad0_7[0];
ddr_rst_r <= ifc_ad0_7[1];
ec1_rst_r <= ifc_ad0_7[2];
ec2_rst_r <= ifc_ad0_7[3];
xgt1_rst_r <= ifc_ad0_7[6];
xgt2_rst_r <= ifc_ad0_7[7];
are executed.
And of course
sw_rst_r <= ifc_ad0_7[0];
is just a non-blocking assignment.
I took this information from the Case Statement documentation and from "What is the difference between = and <= in Verilog?".

The Verilog case syntax consists of a case expression, or selector expression (ifc_a27_31 here), and case items, each with a label expression (the macro `RSTCON1 in your case) and action statements. When ifc_a27_31 matches the value of the macro, the statements in the begin .. end block are executed sequentially.
The case statement may have multiple case items; the first one that matches the selector is taken and its block is executed.
There is also a default clause which is executed if no match is found.
Now in your case it looks like this is part of a latch or flip-flop definition, since non-blocking assignments are used. It is OK to omit some conditions and/or the default statement in such a case.
You might see other variants of the case statement, like casex or casez. The syntax for all of them is similar; the difference is in how the selector is compared to the labels.
In SystemVerilog there are more, like unique or priority case, or case inside.
So, you need to go through a tutorial to get more information about all this.
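To tie these points together, here is a minimal, self-contained sketch of a case statement inside a clocked block. The macro values, signal widths, and module ports are assumptions modeled on your snippet, not the actual design:

`define RSTCON1 5'h00   // hypothetical register address
`define RSTCON2 5'h01   // hypothetical second register address

module reset_regs (
    input  wire       clk,
    input  wire       cpld_cs,
    input  wire       cpld_we,
    input  wire [4:0] ifc_a27_31,   // selector expression
    input  wire [7:0] ifc_ad0_7,    // data bus
    output reg        sw_rst_r,
    output reg        ddr_rst_r
);
    always @(posedge clk) begin
        if (cpld_cs & cpld_we) begin
            case (ifc_a27_31)            // compare the selector against each label
                `RSTCON1: begin          // first matching item wins
                    sw_rst_r  <= ifc_ad0_7[0];   // non-blocking assignments
                    ddr_rst_r <= ifc_ad0_7[1];
                end
                `RSTCON2: ddr_rst_r <= 1'b0;
                default:  ;              // optional: unmatched values change nothing
            endcase
        end
    end
endmodule

Because the block is clocked and the assignments are non-blocking, any register that is not written in a given cycle simply keeps its previous value, which is why a missing default (or missing case items) is harmless here.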

Related

How to check for potential overflow in Ada when dealing with expressions?

I am relatively new to Ada and have been using Ada 2005. However, I feel like this question is pertinent to all languages.
I am currently using static analysis tools such as Codepeer to address potential vulnerabilities in my code.
One problem I'm debating is how to handle checks before assigning an expression that may cause overflow to a variable.
This can be explained better with an example. Let's say I have a variable CheckMeForOverflow of an unsigned 32-bit integer type, and I am assigning an expression to it:
CheckMeForOverflow := (Val1 + Val2) * Val3;
My dilemma is how to efficiently check for overflow in cases such as this - which would seem to appear quite often in code. Yes, I could do this:
if ((Val1 + Val2) * Val3) < Unsigned_Int'Size then
   CheckMeForOverflow := (Val1 + Val2) * Val3;
end if;
My issue is that it seems inefficient to evaluate the expression for the check and then immediately evaluate the same expression again for the assignment when there is no potential for overflow.
However, when I look online, this seems to be pretty common. Could anyone explain better alternatives or explain why this is a good choice? I don't want this scattered throughout my code.
I also realize I could make another variable of a bigger type to hold the expression, do the evaluation against the new variable, and then assign that variable's value to CheckMeForOverflow, but then again, that would mean making a new variable and using it just to perform a single check and then never using it again. This seems wasteful.
Could someone please provide some insight?
Thanks so much!
Personally, I would do something like this:
begin
   CheckMeForOverflow := (Val1 + Val2) * Val3;
exception
   when Constraint_Error =>
      null; -- or log that it overflowed
end;
But be aware that, in the overflow case, your variable will not hold a usable value.
It's clearer than an if construct and we don't perform the calculation twice.
This is exactly the problem SPARK can help solve. It allows you to prove you won't have runtime errors given certain assumptions about the inputs to your calculations.
If you start with a simple function like No_Overflow in this package:
with Interfaces; use Interfaces;

package Show_Runtime_Errors is
   type Unsigned_Int is range 0 .. 2**32 - 1;
   function No_Overflow (Val1, Val2, Val3 : Unsigned_Int) return Unsigned_Int;
end Show_Runtime_Errors;

package body Show_Runtime_Errors is
   function No_Overflow (Val1, Val2, Val3 : Unsigned_Int) return Unsigned_Int is
      Result : constant Unsigned_Int := (Val1 + Val2) * Val3;
   begin
      return Result;
   end No_Overflow;
end Show_Runtime_Errors;
Then when you run SPARK on it, you get the following:
Proving...
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
show_runtime_errors.adb:4:55: medium: range check might fail (e.g. when Result = 10)
show_runtime_errors.adb:4:55: medium: overflow check might fail (e.g. when
Result = 9223372039002259450 and Val1 = 4 and Val2 = 2147483646 and
Val3 = 4294967293)
gnatprove: unproved check messages considered as errors
exit status: 1
Now if you add a simple precondition to No_Overflow like this:
function No_Overflow (Val1, Val2, Val3 : Unsigned_Int) return Unsigned_Int with
   Pre => Val1 < 2**15 and Val2 < 2**15 and Val3 < 2**16;
Then SPARK produces the following:
Proving...
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
Success!
Your actual preconditions on the ranges of the inputs will obviously depend on your application.
The alternatives are the approach you describe, where you put explicit guards in your code before the expression is evaluated, or catching run-time errors with exception handlers. The advantage of SPARK over these approaches is that you do not need to build your software with run-time checks if you can prove ahead of time that there will be no run-time errors.
Note that preconditions are a feature of Ada 2012. You can also use pragma Assert throughout your code which SPARK can take advantage of for doing proofs.
For more on SPARK there is a tutorial here:
https://learn.adacore.com/courses/intro-to-spark/index.html
To try it yourself, you can paste the above code in the example here:
https://learn.adacore.com/courses/intro-to-spark/book/03_Proof_Of_Program_Integrity.html#runtime-errors
Incidentally, the code you suggested:
if ((Val1 + Val2) * Val3) < Unsigned_Int'Size then
   CheckMeForOverflow := (Val1 + Val2) * Val3;
end if;
won't work for two reasons:
Unsigned_Int'Size is the number of bits needed to represent Unsigned_Int. You likely wanted Unsigned_Int'Last instead.
((Val1 + Val2) * Val3) can overflow before the comparison to Unsigned_Int'Last is even done. Thus you will generate an exception at this point and either crash or handle it in an exception handler.

wire in always block/case statement - Verilog

The following is sample code that uses a case statement and an always @(*) block. I don't get how the always block is triggered, and why it works even though x is declared as a wire.
wire [2:0] x = 0;

always @(*)
begin
    case (1'b1)
        x[0]: $display("Bit 0 : %0d", x[0]);
        x[1]: $display("Bit 1 : %0d", x[1]);
        x[2]: $display("Bit 2 : %0d", x[2]);
        default: $display("In default case");
    endcase
end
Any help is appreciated.
Thanks.
As we know, a reg can take its value from a wire, so we can definitely use a wire on the right-hand side of an assignment, or in any expression, inside a procedural block.
Here, your code checks which bit of x is 1'b1 (giving priority to bit zero, of course). Let's say x changes to 3'b010; then "Bit 1" is displayed, and so on. Now, if x = 3'b011, then "Bit 0" is displayed, since bit zero is checked first.
As you can see, there is no assignment to x; the procedural block only reads its value. Moreover, the $display system task also only reads the value of x.
No signal value is changed by this block, hence the code works fine. If, by chance, we had something like x[0] = ~x[0] instead of $display, the code would produce a compilation error, since a wire cannot be assigned inside a procedural block.
More information can be found at this and this link.
Here, the always block does not assign a value to x; it only checks the value of x. So it is a legal use of a wire.
As for the part of your question about how always @(*) is triggered, the standard says:
"Nets and variables that appear on the right-hand side of assignments, in subroutine calls, in case and conditional expressions, as an index variable on the left-hand side of assignments, or as variables in case item expressions shall all be included in always @(*)."
Ref: IEEE Std 1800-2012 Sec 9.4.2.2
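In other words, because x is read inside the block (in the case item expressions and in the $display calls), x is part of the implicit sensitivity list, and the block re-runs whenever x changes. Here is a minimal sketch; the testbench wrapper and stimulus values are made up for illustration:

module tb;
    reg  [2:0] x_drv = 3'b000;
    wire [2:0] x = x_drv;      // the wire is driven by a continuous assignment

    always @(*) begin          // implicit sensitivity list includes x
        case (1'b1)
            x[0]: $display("Bit 0 : %0d", x[0]);
            x[1]: $display("Bit 1 : %0d", x[1]);
            x[2]: $display("Bit 2 : %0d", x[2]);
            default: $display("In default case");
        endcase
    end

    initial begin
        #1 x_drv = 3'b010;     // x changes, the block re-runs and prints "Bit 1 : 1"
        #1 x_drv = 3'b011;     // prints "Bit 0 : 1" (bit 0 has priority)
        #1 $finish;
    end
endmodule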
As an extension of @sharvil111's answer, if your code were something like this:
always @(*)
begin
    case (sel)
        x[0]: $display("Bit 0 : %0d", x[0]);
        x[1]: $display("Bit 1 : %0d", x[1]);
        x[2]: $display("Bit 2 : %0d", x[2]);
        default: $display("In default case");
    endcase
end
then the procedural block would be triggered whenever there is a change in the sel signal or in x, i.e. it would be equivalent to always @(sel or x).
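For comparison, this is what that explicit sensitivity list would look like (a sketch only, reusing the same signals as above):

always @(sel or x)   // equivalent to the always @(*) version above
begin
    case (sel)
        x[0]: $display("Bit 0 : %0d", x[0]);
        x[1]: $display("Bit 1 : %0d", x[1]);
        x[2]: $display("Bit 2 : %0d", x[2]);
        default: $display("In default case");
    endcase
end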

VHDL OR logic with 32 bit vector

zero <= result_i(31) OR result_i(30) OR result_i(29) OR result_i(28)
OR result_i(27) OR result_i(26) OR result_i(25) OR result_i(24)
OR result_i(23) OR result_i(22) OR result_i(21) OR result_i(20)
OR result_i(19) OR result_i(18) OR result_i(17) OR result_i(16)
OR result_i(15) OR result_i(14) OR result_i(13) OR result_i(12)
OR result_i(11) OR result_i(10) OR result_i(9) OR result_i(8)
OR result_i(7) OR result_i(6) OR result_i(5) OR result_i(4)
OR result_i(3) OR result_i(2) OR result_i(1) OR result_i(0);
How can I make this shorter?
I am assuming you are using std_logic/std_logic_vector types.
Then you can use or_reduce from ieee.std_logic_misc.
library ieee;
use ieee.std_logic_misc.or_reduce;
...
zero <= or_reduce(result_i);
Or write your own function:
function or_reduce(vector : std_logic_vector) return std_logic is
   variable result : std_logic := '0';
begin
   for i in vector'range loop
      result := result or vector(i);
   end loop;
   return result;
end function;
A general tip if you are just starting out with VHDL: do not forget about functions and procedures. Unlike Verilog (without SystemVerilog), VHDL has good support for writing clean, high-level code, even for synthesis, using functions and procedures. If you are doing something repetitive, that is a sure sign it should be wrapped in a function or procedure. In this case there already was a standard function ready to be used, though.
You might also want to consider pipelining the OR-reduction and inserting flip-flops between the stages. The 32-bit reduction in your example should still run at a reasonably high frequency in an FPGA device, but if you are going to use more bits, or target a really high frequency, you might want to use an OR-tree where no more than 6-8 bits are OR'ed in each pipeline stage. You can still reuse the or_reduce function for the intermediate operations.
You can achieve it with VHDL revision 2008.
VHDL-2008 defines unary operators, like these:
outp <= and "11011";
outp <= xor "11011";
So in your case it would be:
zero <= or result_i;

Associative array element accessing in comb vs sequential

I was trying to write testbench code that uses an associative array, and I saw that in one case accessing its values wasn't working in combinational logic, but when the access was moved inside a sequential block it worked fine.
Example code:
Here value was always getting assigned x, but once I moved the access inside the @(posedge) block, I saw it get the right value (1, once dummy got assigned).
Can someone explain why this is so?
logic dummy[logic[3:0]];
logic value;

always @(posedge clk)
begin
    if (reset == 1'b1) begin
        count <= 0;
    end else if (enable == 1'b1) begin
        count <= count + 1;
    end
    if (enable) begin
        if (!dummy.exists(count))
        begin
            dummy[count] = 1;
            $display(" Setting for count = %d ", count);
        end
    end
end

always_comb begin
    if (dummy.exists(count)) begin
        value = dummy[count];
        $display("Value = %d", value);
    end else begin // [Update : 1]
        value = 0;
    end
end
[UPDATE : 1 - code updated to have else block]
The question is a bit misleading; actually the if (dummy.exists(count)) check seems to be failing when used inside the combinational logic but passes when inside the sequential logic (and since value is never assigned in that case, it goes to x in my simulation, hence the edit adding an else block). This result was on the VCS simulator.
EDA Playground link: http://www.edaplayground.com/x/6eq
Here it seems to work as normally expected, i.e. if (dummy.exists(count)) passes irrespective of being inside always_comb or always @(posedge).
Result in VCS :
[when used as comb logic - value never gets printed]
Value = 0
Applying reset Value = 0
Came out of Reset
Setting for count = 0
Setting for count = 1
Setting for count = 2
Setting for count = 3
Setting for count = 4
Terminating simulation
Simulation Result : PASSED
And value gets printed as "1" when the if(dummy.exist(count)) and assignment is moved inside seq block.
Your first always block contains both blocking and non-blocking assignments, which VCS may be allowing because the plain always keyword can also be used to specify combinational logic in Verilog (via always @(*)). This shouldn't account for the error, but it is bad style.
Also, the first line of your program is strange: what are you trying to specify? value is a bit, but dummy is not, so if you try doing dummy[count] = 1'b1 you'll also pop out an error (turn linting on with +lint=all). If you're trying to make dummy an array of 4-bit values, your syntax is off, and then value has the wrong size as well.
Try switching the first always to an explicit always_ff; this should give you a warning/error in VCS. Also, you can always look at the waveform: compile with +define+VPD and use GTKWave (freeware). This should let you see exactly what's happening.
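To illustrate these suggestions, here is one way the blocks could be restructured. This is only a sketch using the signal names from the question (clk, reset, and enable are assumed to exist in the enclosing module), and it does not by itself guarantee that every simulator re-triggers always_comb on associative-array updates:

logic [3:0] count;
logic       dummy [logic [3:0]];
logic       value;

// Register updates: non-blocking assignments only, in an always_ff block.
always_ff @(posedge clk) begin
    if (reset)
        count <= '0;
    else if (enable)
        count <= count + 1;
end

// Testbench-style bookkeeping of the associative array, kept separate
// from the register updates; a blocking assignment is fine here.
always @(posedge clk) begin
    if (enable && !dummy.exists(count)) begin
        dummy[count] = 1'b1;
        $display("Setting for count = %0d", count);
    end
end

// Combinational read with a default assignment so value never stays x.
always_comb begin
    value = 1'b0;
    if (dummy.exists(count))
        value = dummy[count];
end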
Please check your VCS compilation messages and see if there is any warning related to the SystemVerilog always_comb construct. Some simulators might have issues with it, or not support that usage, when "dynamic types" such as associative arrays are inferred in the sensitivity list. I tried with Incisive (ncverilog) and it is also OK.

Why does this recursion MyFunc[n_] := MyFunc[n] = 2; end?

I don't understand why this recursion ends:
In[27]:= MyFunc[n_] := MyFunc[n] = 2;
MyFunc[3]
Out[28]= 2
Shouldn't it be endless
MyFunc[3]
MyFunc[3] = 2
(MyFunc[3] = 2) = 2
and so on?
Why does this
MyFunc[n_] := MyFunc[n];
MyFunc[3]
During evaluation of In[31]:= $IterationLimit::itlim: Iteration limit of 4096 exceeded. >>
Out[33]= Hold[MyFunc[3]]
cause "iteration" limit error, not recursion limit?
My other answer glossed over some important details. Here's a second and, I hope, better one:
SetDelayed has the attribute HoldAll while Set has the attribute HoldFirst. So, your definition
MyFunc[n_] := MyFunc[n] = 2;
is stored with no part evaluated. Only when you call it, e.g. MyFunc[3], is the rhs evaluated, in this case to an expression involving Set: MyFunc[3] = 2. Since Set has the attribute HoldFirst, this rule is stored with its first argument (i.e. the lhs) unevaluated. At this stage MyFunc[3], the lhs of the Set expression, is not re-evaluated. But if it were, Mathematica would find the rule MyFunc[3] = 2 and evaluate MyFunc[3] to 2 without using the rule whose lhs is MyFunc[n_].
Your second definition, i.e.
MyFunc[n_] := MyFunc[n];
is also stored unevaluated. However, when you call the function, e.g. MyFunc[3], the rhs is evaluated. The rhs evaluates to MyFunc[3] or, if you like, to another call to MyFunc. During the evaluation of MyFunc[3], Mathematica finds the stored rewrite rule MyFunc[n_] := MyFunc[n] and applies it. Repeatedly. Note that Mathematica regards this as iteration rather than recursion, which is why you hit the iteration limit rather than the recursion limit.
It's not entirely clear to me what evaluating the lhs of an expression might actually mean. Of course, a call such as MyFunc[3+4] will actually lead to MyFunc[7] being evaluated, as Mathematica greedily evaluates arguments to function calls.
In fact, when trying to understand what's going on here it might be easier to forget assignment and left- and right-hand sides and remember that everything is an expression and that, for example,
MyFunc[n_] := MyFunc[n] = 2;
is just a way of writing
SetDelayed[MyFunc[n_], MyFunc[n] = 2]
