What is the difference in initializing a vector in Verilog these ways?
reg [3:0] vector = 4'b0000;
or
reg [3:0] vector = 0;
The only advantage I see in the first initialization is that every bit is set explicitly. But in this example of setting the vector to zero, is there any difference at all? Is one way better than the other?
EDIT:
I expanded mcleod_ideafix's test program a bit to check some other things:
module tb;
  reg [15:0] v5, v6, v7, v8, v9, v10;
  initial begin
    v6 = 1'b0;
    v7 = 1'b1;
    v8 = 2'b1111;
    v9 = 'b1;
    v10 = 6'b11;
    $display ("v5(=/) = %b\nv6(=1'b0) = %b\nv7(=1'b1) = %b\nv8(=2'b1111) = %b\nv9(='b1) = %b\nv10(=6'b11) = %b", v5, v6, v7, v8, v9, v10);
    $finish;
  end
endmodule
Which results in:
v5(=/) = xxxxxxxxxxxxxxxx
v6(=1'b0) = 0000000000000000
v7(=1'b1) = 0000000000000001
v8(=2'b1111) = 0000000000000011
v9(='b1) = 0000000000000001
v10(=6'b11) = 0000000000000011
v5 shows that a non-initialized variable is x, i.e. undefined, as expected.
v6 is set to 1'b0, so only one bit should be set to 0, yet all bits end up 0.
On the other hand, v7 is set to 1'b1 and only the least significant bit is set to 1. From my perspective, all leading bits that are not given are set to zero whenever a variable is assigned a value.
v8 shows that the number before the ' limits the size of the value given after the b.
Setting v8 gives me some warnings:
test.v:7: warning: extra digits given for sized binary constant.
test.v:7: warning: Numeric constant truncated to 2 bits.
Given these warnings, I don't think this "feature" is intended, so I would always prefer something like:
reg [3:0] vector = 2'b10;
Omitting the length of the number (v9) is safer, I think, because it is then impossible to forget to update the length when changing the value. The only advantage of writing the length would be a cross-check: if the length is set smaller than the actual length of the value, I get a warning. On the other hand, setting the length larger than the value needs doesn't give a warning (v10), so the cross-check doesn't seem to be the intention either?
Back to my original question: there is no real difference between these ways of setting a vector, is there? So I would use the "long" form:
reg [3:0] vector = 'b1011;
only to set a reg to a specific value in binary or hex. Setting to a decimal or to zero:
reg [3:0] vector = 0;
is shorter, more intuitive and safer, because I cannot forget to change the length.
Only aesthetics could lead someone to write 1'b0.
The second one will treat 0 as a 32-bit integer, so it's equivalent to
reg [3:0] vector = 32'b0000_0000_0000_0000_0000_0000_0000_0000;
For simulation: as long as the vector size is no more than 32 bits, it's fine. You may get warnings about the operand being truncated to fit. But if vector is wider than 32 bits, bits from bit 32 upwards will be x in your simulations.
EDIT: it seems it doesn't work that way after all, and 0 can be used to initialize a vector of any size. See the example below.
There is also this other notation, for which the synthesizer/compiler will expand the operand to fit the L-value:
reg [3:0] vector = 'b0;
For FPGA synthesis, non-initialized vectors will be initialized to the default configured in your synthesizer/bitstream generation process, which is normally 0.
module tb;
  reg [3:0] v1, v2, v3;
  reg [127:0] v4;
  initial begin
    v1 = 0;
    v2 = 4'b0000;
    v3 = 'b0;
    v4 = 0;
    $display ("%b %b %b %b", v1, v2, v3, v4);
    $finish;
  end
endmodule
[2020-06-28 09:49:37 EDT] iverilog '-Wall' design.sv testbench.sv && unbuffer vvp a.out
0000 0000 0000 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
Done
A numeric literal with no base or size is treated as a signed, 32-bit value. Verilog allows assignments from any size to any other size, and will implicitly truncate or extend the value silently. This is known as being weakly typed.
When you write 2'b0000, you are specifying a 2-bit unsigned binary number, but you wrote out 4 digits. The number will be truncated to 2 bits. But since you are making an assignment to a 4-bit variable, the value gets extended back to 4 bits.
So the end result is the same for each type of initialization.
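To make the truncate-then-extend rule concrete, here is a tiny arithmetic model in plain Python (an illustration of the arithmetic only, not Verilog semantics verbatim; the sized_literal helper is made up for this sketch):

def sized_literal(value, width):
    return value & ((1 << width) - 1)   # truncate the literal to 'width' bits

lit = sized_literal(0b1111, 2)          # 2'b1111 is truncated to 2'b11, i.e. 3
print(f"{lit:016b}")                    # 0000000000000011, matching v8 in the question's test

The assignment then zero-extends the truncated value to the width of the target reg, which is why v8 ends up as 0000000000000011.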
EDIT:
Your edit changed the question topic slightly, but the answer is still the same for the code in your example—the result is the same. But that is not always the case when literals are used within an expression. Size matters.
The original Question posted this line of code:
reg [3:0] vector = 2'b0000;
You might get a compile warning in your first statement with some simulators because you used 2 as the size of the numerical literal value. You probably meant to use 4:
reg [3:0] vector = 4'b0000;
Aside from that, you should see that your 2 statements will simulate the same way.
Related
How can I convert a z3.String to a sequence of ASCII values?
For example, here is some code that I thought would check whether the ASCII values of all the characters in the string add up to 100:
import z3

def add_ascii_values(password):
    return sum(ord(character) for character in password)

password = z3.String("password")
solver = z3.Solver()

ascii_sum = add_ascii_values(password)
solver.add(ascii_sum == 100)

print(solver.check())
print(solver.model())
Unfortunately, I get this error:
TypeError: ord() expected string of length 1, but SeqRef found
It's apparent that ord doesn't work with z3.String. Is there something in Z3 that does?
The accepted answer dates back to 2018, and things have changed in the meantime, which makes the proposed solution no longer work with current z3. In particular:
Strings are now formalized by SMTLib. (See https://smtlib.cs.uiowa.edu/theories-UnicodeStrings.shtml)
Unlike the previous version (where strings were simply sequences of bit vectors), strings are now sequences of Unicode characters. So, the coding used in the previous answer no longer applies.
Based on this, here is how this problem would be coded, assuming a password of length 3:
from z3 import *

s = Solver()

# Ord of character at position i
def OrdAt(inp, i):
    return StrToCode(SubString(inp, i, 1))

# Adding ascii values for a string of a given length
def add_ascii_values(password, len):
    return Sum([OrdAt(password, i) for i in range(len)])

# We'll have to force a constant length
length = 3
password = String("password")
s.add(Length(password) == length)

ascii_sum = add_ascii_values(password, length)
s.add(ascii_sum == 100)

# Also require characters to be printable so we can view them:
for i in range(length):
    v = OrdAt(password, i)
    s.add(v >= 0x20)
    s.add(v <= 0x7E)

print(s.check())
print(s.model()[password])
Note: due to https://github.com/Z3Prover/z3/issues/5773, to run the above you need a version of z3 downloaded on Jan 12, 2022 or later! As of this date, none of the released versions of z3 contain the functions used in this answer.
When run, the above prints:
sat
" #!"
You can check that it satisfies the given constraint, i.e., the ord of characters add up to 100:
>>> sum(ord(c) for c in " #!")
100
Note that we no longer have to worry about modular arithmetic, since OrdAt returns an actual integer, not a bit-vector.
2022 Update
The answer below, written back in 2018, no longer applies, as strings in SMTLib received a major update and the code given is outdated. I'm keeping it here for archival purposes, and in case you happen to have a really old z3 that you cannot upgrade for some reason. See the other answer for a variant that works with the new Unicode strings in SMTLib: https://stackoverflow.com/a/70689580/936310
Old Answer from 2018
You're conflating Python strings and Z3 Strings, and unfortunately the two are quite different types.
In Z3py, a String is simply a sequence of 8-bit values. And what you can do with a Z3 String is actually quite limited; for instance, you cannot iterate over the characters like you did in your add_ascii_values function. See this page for the allowed functions: https://rise4fun.com/z3/tutorialcontent/sequences (This page lists the functions in SMTLib parlance, but the equivalent ones are available from the z3py interface.)
There are a few important restrictions/things that you need to keep in mind when working with Z3 sequences and strings:
You have to be very explicit about the lengths; In particular, you cannot sum over strings of arbitrary symbolic length. There are a few things you can do without specifying the length explicitly, but these are limited. (Like regex matches, substring extraction etc.)
You cannot extract a character out of a string. This is an oversight in my opinion, but SMTLib just has no way of doing so for the time being. Instead, you get a list of length 1. This causes a lot of headaches in programming, but there are workarounds. See below.
Anytime you loop over a string/sequence, you have to go up to a fixed bound. There are ways to program so you can cover "all strings upto length N" for some constant "N", but they do get hairy.
Keeping all this in mind, I'd go about coding your example like the following; restricting password to be precisely 10 characters long:
from z3 import *

s = Solver()

# Work around the fact that z3 has no way of giving us an element at an index. Sigh.
ordHelperCounter = 0
def OrdAt(inp, i):
    global ordHelperCounter
    v = BitVec("OrdAtHelper_%d_%d" % (i, ordHelperCounter), 8)
    ordHelperCounter += 1
    s.add(Unit(v) == SubString(inp, i, 1))
    return v

# Your original function, but note the addition of the len parameter and the use of Sum
def add_ascii_values(password, len):
    return Sum([OrdAt(password, i) for i in range(len)])

# We'll have to force a constant length
length = 10
password = String("password")
s.add(Length(password) == 10)

ascii_sum = add_ascii_values(password, length)
s.add(ascii_sum == 100)

# Also require characters to be printable so we can view them:
for i in range(length):
    v = OrdAt(password, i)
    s.add(v >= 0x20)
    s.add(v <= 0x7E)

print(s.check())
print(s.model()[password])
The OrdAt function works around the problem of not being able to extract characters. Also note how we use Sum instead of sum, and how all "loops" are of fixed iteration count. I also added constraints to make all the ascii codes printable for convenience.
When you run this, you get:
sat
":X|#`y}###"
Let's check it's indeed good:
>>> len(":X|#`y}###")
10
>>> sum(ord(character) for character in ":X|#`y}###")
868
So, we did get a length 10 string; but how come the ord's don't sum up to 100? Now, you have to remember sequences are composed of 8-bit values, and thus the arithmetic is done modulo 256. So, the sum actually is:
>>> sum(ord(character) for character in ":X|#`y}###") % 256
100
To avoid the overflows, you can either use larger bit-vectors, or more simply use Z3's unbounded Integer type Int. To do so, use the BV2Int function, by simply changing add_ascii_values to:
def add_ascii_values(password, len):
    return Sum([BV2Int(OrdAt(password, i)) for i in range(len)])
Now we'd get:
unsat
That's because each of our characters has at least value 0x20 and we wanted 10 characters; so there's no way to make them all sum up to 100. And z3 is precisely telling us that. If you increase your sum goal to something more reasonable, you'd start getting proper values.
Programming with z3py is different than regular programming with Python, and z3 String objects are quite different than those of Python itself. Note that the sequence/string logic isn't even standardized yet by the SMTLib folks, so things can change. (In particular, I'm hoping they'll add functionality for extracting elements at an index!).
Having said all this, going over the https://rise4fun.com/z3/tutorialcontent/sequences would be a good start to get familiar with them, and feel free to ask further questions.
I've been working on a hex calculator for a while, but I seem to be stuck on the subtraction portion, particularly when B>A. I'm trying to simply subtract two positive integers and display the result. It works fine for A>B and A=B. So far I'm able to use two 7-segment displays to show the integers to be subtracted, and I get the proper difference as long as A>=B.
When B>A I see a pattern that I'm not able to debug because of my limited knowledge of Verilog case/if-else statements. Forgive me if I'm not explaining this the best way, but what I'm observing is that once the first number, A, "reaches" 0 (after being subtracted from), it loops back to F. The remainder of B is then subtracted from F rather than from 0.
For example: If A=1, B=3
A - B =
1 - 1 = 0
0 - 1 = F
F - 1 = E
Another example could be 4-8=C
Below are the important snippets of code I've put together thus far.
First, my subtraction statement
always @*
begin
  begin
    Cout1 = 7'b1000000; //0
  end
  case (PrintDifference[3:0])
    4'b0000 : Cout0 = 7'b1000000; //0
    4'b0001 : Cout0 = 7'b1111001; //1
    ...
    4'b1110 : Cout0 = 7'b0000110; //E
    4'b1111 : Cout0 = 7'b0001110; //F
  endcase
end
My subtraction is pretty straightforward
output [4:0] Difference;
output [4:0] PrintDifference;
assign PrintDifference = A - B;
I was thinking I could just do something like:
if A >= B, Difference = A - B
else, Difference = B - A
Thank you everyone in advance!
This is the expected behaviour of two's complement addition/subtraction, which I would recommend reading up on since it is so essential.
The result can be changed back into an unsigned magnitude by inverting all the bits and adding one. Checking the most significant bit will tell you whether the number is negative or not.
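As a quick illustration, here is that behaviour modelled in plain Python with 4-bit values, matching the digits you see on the display (the width and helper names are just for this sketch, not your Verilog):

WIDTH = 4
MASK = (1 << WIDTH) - 1

def sub_wrapped(a, b):
    return (a - b) & MASK                     # what the 4-bit subtractor produces

for a, b in [(1, 3), (4, 8)]:
    d = sub_wrapped(a, b)
    negative = bool(d & (1 << (WIDTH - 1)))   # MSB set -> the result is negative
    magnitude = (~d + 1) & MASK               # invert all the bits and add one
    print(f"{a}-{b} -> {d:X}, negative={negative}, magnitude={magnitude}")
# prints: 1-3 -> E, negative=True, magnitude=2
#         4-8 -> C, negative=True, magnitude=4

So when the most significant bit is set, invert-and-add-one recovers the magnitude B-A that you actually want to show on the display.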
I have often seen the symbol 1L (or 2L, 3L, etc.) appear in R code. What's the difference between 1L and 1? 1 == 1L evaluates to TRUE. Why is 1L used in R code?
So, @James and @Brian explained what 3L means. But why would you use it?
Most of the time it makes no difference - but sometimes you can use it to get your code to run faster and consume less memory. A double ("numeric") vector uses 8 bytes per element. An integer vector uses only 4 bytes per element. For large vectors, that's less wasted memory and less to wade through for the CPU (so it's typically faster).
Mostly this applies when working with indices.
Here's an example where adding 1 to an integer vector turns it into a double vector:
x <- 1:100
typeof(x) # integer
y <- x+1
typeof(y) # double, twice the memory size
object.size(y) # 840 bytes (on win64)
z <- x+1L
typeof(z) # still integer
object.size(z) # 440 bytes (on win64)
...but also note that working excessively with integers can be dangerous:
1e9L * 2L # Works fine; fast lean and mean!
1e9L * 4L # Ooops, overflow!
...and as @Gavin pointed out, the range for integers is roughly -2e9 to 2e9.
A caveat, though, is that this applies to the current R version (2.13). R might change this at some point (64-bit integers would be sweet, which could enable vectors of length > 2e9). To be safe, you should use .Machine$integer.max whenever you need the maximum integer value (and negate that for the minimum).
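For what it's worth, the same kind of 32-bit overflow can be reproduced in Python with NumPy (this is only an analogy, not R: NumPy wraps silently on array overflow rather than returning NA with a warning as R does):

import numpy as np

x = np.array([10**9], dtype=np.int32)
print(x * 2)                       # [2000000000]  still fits in 32 bits
print(x * 4)                       # [-294967296]  overflowed: not the true product
print(np.iinfo(np.int32).max)      # 2147483647, the same roughly 2e9 limit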
From the Constants Section of the R Language Definition:
We can use the ‘L’ suffix to qualify any number with the intent of making it an explicit integer. So ‘0x10L’ creates the integer value 16 from the hexadecimal representation. The constant 1e3L gives 1000 as an integer rather than a numeric value and is equivalent to 1000L. (Note that the ‘L’ is treated as qualifying the term 1e3 and not the 3.) If we qualify a value with ‘L’ that is not an integer value, e.g. 1e-3L, we get a warning and the numeric value is created. A warning is also created if there is an unnecessary decimal point in the number, e.g. 1.L.
The L suffix specifies an integer type, rather than the double used by the standard numeric class.
> str(1)
num 1
> str(1L)
int 1
To explicitly create an integer value for a constant, you can call the function as.integer or, more simply, use the "L" suffix.
I have to do some bitwise operations to perform collision checking for my game, but I've stumbled into some hexadecimal notation I don't know.
Example from: http://www.yoyogames.com/tech_blog/7
Using the binary tricks above, we can do a simple AND with the Y coordinate, Y = Y & $fffffff0, and this will rid us of the lower bits, making the value a multiple of 16, placing it outside the collision and back to 64; since %1001000 (68) & $fffffff0 = %1000000 (64).
Another formula, from: http://gmc.yoyogames.com/index.php?showtopic=552034
$fffffff0 = 4294967280 = ~$F = ~15
$ffffffe0 = 4294967264 = ~$1F = ~31
What kind of hexadecimal notation is this? What does the '$' mean?
~ is the bitwise NOT operator. It inverts all bits: 0 becomes 1 and 1 becomes 0.
$ preceding the value tells the compiler it's a hexadecimal number. Without it, fffffff0 would be understood as a variable name.
So, while 15 means fifteen in decimal (f in hexadecimal), $15 is 15 in hexadecimal, which is 21 in decimal.
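For anyone more used to C-style or Python notation, here is the same thing written with the 0x prefix instead of $ (the 0xFFFFFFFF mask is only there because Python's integers are unbounded, so ~ has to be clipped to 32 bits explicitly):

print(0xfffffff0)                # 4294967280
print(~15 & 0xFFFFFFFF)          # 4294967280 -> ~15 is the same 32-bit value as $fffffff0
print(~31 & 0xFFFFFFFF)          # 4294967264 -> $ffffffe0
print(0x15)                      # 21 -> "$15" is 15 hexadecimal, i.e. 21 decimal
print(68 & 0xfffffff0)           # 64 -> the collision trick: clear the low 4 bits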
I created the following simple MATLAB functions to convert a number from an arbitrary base to decimal and back.
This is the first one:
function decNum = base2decimal(vec, base)
    decNum = vec(1);
    for d = 1:1:length(vec)-1
        decNum = decNum*base + vec(d+1);
    end
And here is the other one:
function baseNum = decimal2base(num, base, Vlen)
    ii = 1;
    if num == 0
        baseNum = 0;
    end
    while num ~= 0
        baseNum(ii) = mod(num, base);
        num = floor(num./base);
        ii = ii+1;
    end
    baseNum = fliplr(baseNum);
    if Vlen > length(baseNum)
        baseNum = [zeros(1, Vlen-length(baseNum)) baseNum];
    end
Because there are limits to how big a number can be, these functions can't successfully convert very big vectors, but while testing them I noticed the following bug.
Let's use the following test code:
num = 201;
pCount = 7
x=base2decimal(repmat(num-1, 1, pCount), num)
repmat(num-1, 1, pCount)
y=decimal2base(x, num, 1)
isequal(repmat(num-1, 1, pCount),y)
A vector with seven (7) digits in base 201 works fine, but the same vector in base 200 does not return the expected result, even though it is smaller and theoretically should convert successfully.
(One preliminary comment: calling base2decimal won't result in a decimal number but rather in a number :-D)
This is due to the limited precision of floating point (in our case, double). To test it, just type at the MATLAB Command Window:
>> 200^7 - 1 == 200^7
ans =
1
>> mod(200^7 - 1, 200)
ans =
0
which means that the value of your number in base 200 (which is precisely 200^7 - 1) is represented exactly as 200^7, and the "true" value of that representation is 200^7.
On the other hand:
>> 201^7 - 1 == 201^7
ans =
1
so still the two numbers are represented the same, but
>> mod(201^7 - 1, 201)
ans =
200
which means that the two values share the "true" representation of 201^7 - 1, which, by accident, is the value that you expected.
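The same cross-check can be done in Python, whose built-in integers are exact, so we can compare the true values with what a double (a Python float) actually stores (a small sketch, not MATLAB code):

exact_200 = 200**7 - 1
print(float(exact_200) == 200**7)         # True: 200^7 - 1 rounds up to 200^7
print(int(float(exact_200)) % 200)        # 0, matching MATLAB's mod result

exact_201 = 201**7 - 1
print(float(exact_201) == float(201**7))  # True: both round to the same double...
print(int(float(exact_201)) % 201)        # 200: ...and that double is exactly 201^7 - 1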
TL;DR
When stored in a double, 200^7 - 1 is inaccurately represented as 200^7, while 201^7 - 1 is accurately represented.
"Bigger numbers are less accurately represented than smaller numbers" is a misconception: if it were true, there would be no big numbers that could be exactly represented.
Judging from your own observations:
The code works fine in most cases
The code can give small errors for large numbers
The suspect is apparent:
Rounding issues seem to give you headaches here. This is also illustrated by @RTL in the comments.
The first question should now be:
Do you need perfect accuracy for such large numbers, or is it OK if the result is sometimes off by a relatively small amount?
If you do need perfect accuracy, I would recommend trying a different storage format.
The simple solution would be to use big integers:
uint64
The alternative would be to make your own storage format. This is required if you need even bigger numbers. I think you can cover a huge range with a cell array and some tricks, but of course it is going to be hard to combine those numbers afterwards without losing the accuracy that you worked so hard for.
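To illustrate why exact integer arithmetic fixes the round trip, here is the same pair of conversions sketched in Python, whose built-in integers are arbitrary precision (the helper names are made up for this sketch; this is not the MATLAB uint64 route, just a demonstration of the principle):

def digits_to_int(vec, base):
    n = 0
    for d in vec:                      # Horner's scheme, like base2decimal above
        n = n * base + d
    return n

def int_to_digits(num, base, width):
    out = []
    while num:
        out.append(num % base)
        num //= base
    out += [0] * (width - len(out))    # left-pad with zeros, like the Vlen padding above
    return out[::-1]

vec = [199] * 7                        # the failing base-200 test case
assert int_to_digits(digits_to_int(vec, 200), 200, 7) == vec
print("round trip OK")

With exact arithmetic, seven digits in base 200 survive the round trip with no error, which is exactly what the floating-point double could not guarantee.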