I am trying to create a few constants and assign hex numbers to them; however, I keep getting errors.
I want the constant FOO_CONST to be equal to 0x38
Like this...
constant FOO_CONST : integer := x"38";
The error:
Type integer does not match with a string literal
I've tried a few variants with no success.
You can specify a base for integers by using the format base#value#:
constant FOO_CONST : integer := 16#38#;
In general, you can use literals in expressions as follows:
Numeric literals may be expressed in any base from 2 to 16. They may also be broken up using underscore, for clarity.
FOO_CONST_HEX <= 16#FF#;
FOO_CONST_BIN <= 2#1010_1010#;
FOO_CONST_BROKEN := 1_000_000.0; -- breaking the number using _
To answer the question clearly, you can do as Erasmus Cedernaes suggested:
constant FOO_CONST: integer:= 16#38#;
OR
constant FOO_CONST : std_logic_vector := X"38"; -- if you intend to use it as a std_logic_vector later
Literals for arrays of characters, such as string, bit_vector and std_logic_vector are placed in double quotes:
constant FLAG :bit_vector(0 to 7) := "11111111";
constant MSG : string := "Hello";
Numeric literals with a decimal point are real; those without are integer:
constant FREEZE : integer := 32;
constant TEMP : real := 32.0;
Real numbers may be expressed in exponential form:
FACTOR := 2.2E-6;
Literals of type time (and other physical types) must have units. The units should be preceded by a space, although some tools may not require this:
constant DEL1 :time := 10 ns;
constant DEL2 :time := 2.27 us;
Literals of enumerated types may either be characters (as for bit and std_logic), or identifiers:
type MY_LOGIC is ('X','0','1','Z');
type T_STATE is (IDLE, READ, END_CYC);
signal CLK : MY_LOGIC := '0';
signal STATE : T_STATE := IDLE;
Bit vector literals may be expressed in binary (the default), octal or hex. They may also contain embedded underscores for clarity. (In VHDL-87 these forms could not be used as std_logic_vector literals; from VHDL-93 onwards they can be.)
BIT_8_BUS <= B"1111_1111";
BIT_9_BUS <= O"353";
BIT_16_BUS <= X"AA55";
Notice that literals are supported for synthesis, provided they are of a type acceptable to the logic synthesis tool. They are either synthesized as connections to logic '1' or '0', or are used to help minimize the number of gates required.
While writing bindings for interfacing with C code, I am running into problems translating the numerous instances of structs with flexible array members to Ada, such as:
struct example {
size_t length;
int body[];
};
I've been told that in Ada this kind of behavior can be replicated with discriminated types, but I cannot find a way to use the length field as the discriminant while maintaining the layout of the structure, so that the records can still be used to interface with the C code; something like:
type Example (Count : Integer := Length) is record
Length : Unsigned_64;
Body : Integer (1 .. Count);
end record;
Is there any way to create a type like that, with that array? For now I've been defaulting to grabbing the address of that location and declaring the array in place myself; is there a better way? Thanks in advance.
Here is an example in which Ada code calls C code, passing objects of this kind from Ada to C and from C to Ada. The C header is c_example.h:
typedef struct {
size_t length;
int body[];
} example_t;
extern void example_print (example_t *e /* in */);
// Print the contents of e.
extern example_t * example_get (void);
// Return a pointer to an example_t that remains owned
// by the C part (that is, the caller can use it, but should
// not attempt to deallocate it).
The C code is c_example.c:
#include <stdio.h>
#include "c_example.h"
void example_print (example_t *e /* in */)
{
printf ("C example: length = %zd\n", e->length);
for (int i = 0; i < e->length; i++)
{
printf ("body[ %d ] = %d\n", i, e->body[i]);
}
} // example_print
static example_t datum = {4,{6,7,8,9}};
example_t * example_get (void)
{
return &datum;
} // example_get
The C-to-Ada binding is defined in c_binding.ads:
with Interfaces.C;
package C_Binding
is
pragma Linker_Options ("c_example.o");
use Interfaces.C;
type Int_Array is array (size_t range <>) of int
with Convention => C;
type Example_t (Length : size_t) is record
Bod : Int_Array(1 .. Length);
end record
with Convention => C;
type Example_ptr is access all Example_t
with Convention => C;
procedure Print (e : in Example_t)
with Import, Convention => C, External_Name => "example_print";
function Get return Example_Ptr
with Import, Convention => C, External_Name => "example_get";
end C_Binding;
The test main program is flexarr.adb:
with Ada.Text_IO;
with C_Binding;
procedure FlexArr
is
for_c : constant C_Binding.Example_t :=
(Length => 5, Bod => (55, 66, 77, 88, 99));
from_c : C_Binding.Example_ptr;
begin
C_Binding.Print (for_c);
from_c := C_Binding.Get;
Ada.Text_IO.Put_Line (
"Ada example: length =" & from_c.Length'Image);
for I in 1 .. from_c.Length loop
Ada.Text_IO.Put_Line (
"body[" & I'Image & " ] =" & from_c.Bod(I)'Image);
end loop;
end FlexArr;
I build the program thusly:
gcc -c -Wall c_example.c
gnatmake -Wall flexarr.adb
And this is the output from ./flexarr:
C example: length = 5
body[ 0 ] = 55
body[ 1 ] = 66
body[ 2 ] = 77
body[ 3 ] = 88
body[ 4 ] = 99
Ada example: length = 4
body[ 1 ] = 6
body[ 2 ] = 7
body[ 3 ] = 8
body[ 4 ] = 9
So it seems to work. However, the Ada compiler (gnat) gives me some warnings from the C_Binding package:
c_binding.ads:14:12: warning: discriminated record has no direct equivalent in C
c_binding.ads:14:12: warning: use of convention for type "Example_t" is dubious
This means that while this interfacing method works with gnat, it might not work with other compilers, for example Ada compilers that allocate the Bod component array separately from the fixed-size part of the record (whether such a compiler would accept Convention => C for this record type is questionable).
To make the interface more portable, make the following changes. In C_Binding, change Length from a discriminant to an ordinary component and make Bod a fixed-size array, using some maximum size, here 1000 elements as an example:
type Int_Array is array (size_t range 1 .. 1_000) of int
with Convention => C;
type Example_t is record
Length : size_t;
Bod : Int_Array;
end record
with Convention => C;
In the test main program, change the declaration of for_c to pad the array with zeros:
for_c : constant C_Binding.Example_t :=
(Length => 5, Bod => (55, 66, 77, 88, 99, others => 0));
For speed, you could instead let the unused part of the array be uninitialized: "others => <>".
If you cannot find a reasonably small maximum size, it should be possible to define the C binding to be generic in the actual size. But that is getting rather messy.
Note that if all the record/struct objects are created on the C side, and the Ada side only reads and writes them, then the maximum size defined in the binding is used only for index bounds checking and can be very large without impact on the memory usage.
In this example I made the Ada side start indexing from 1, but you can change it to start from zero if you want to make it more similar to the C code.
Finally, in the non-discriminated case, I recommend making Example_t a "limited" type ("type Example_t is limited record ...") so that you cannot assign whole values of that type, nor compare them. The reason is that when the C side provides an Example_t object to the Ada side, the actual size of the object may be smaller than the maximum size defined on the Ada side, but an Ada assignment or comparison would try to use the maximum size, which could make the program read or write memory that should not be read or written.
The discriminant is itself a component of the record, and has to be stored somewhere.
This code,
type Integer_Array is array (Natural range <>) of Integer;
type Example (Count : Natural) is record
Bdy : Integer_Array (1 .. Count);
end record;
compiled with -gnatR to show representation information, says
for Integer_Array'Alignment use 4;
for Integer_Array'Component_Size use 32;
for Example'Object_Size use 68719476736;
for Example'Value_Size use ??;
for Example'Alignment use 4;
for Example use record
Count at 0 range 0 .. 31;
Bdy at 4 range 0 .. ??;
end record;
so we can see that GNAT has decided to put Count in the first 4 bytes, just like the C struct (this is a common C idiom, and C does indeed define that struct members are laid out in declaration order).
Since this is to be used for interfacing with C, we could say so,
type Example (Count : Natural) is record
Bdy : Integer_Array (1 .. Count);
end record
with Convention => C;
but as Niklas points out the compiler is doubtful about this (it’s warning you that the Standard doesn’t specify the meaning of the construct).
We could confirm at least that we want Count to be in the first 4 bytes, adding
for Example use record
Count at 0 range 0 .. 31;
end record;
but I don’t suppose that would stop a different compiler using a different scheme (e.g. two structures, the first containing Count and the address of the second, Bdy).
C array indices always start at 0.
If you want to duplicate the C structure remember that the discriminant is a member of the record.
type Integer_Array is array (Natural range <>) of Integer;

type Example (Length : Integer) is record
   Bdy : Integer_Array (0 .. Length - 1);  -- "body" is a reserved word in Ada
end record;
I have a problem subtracting a STD_LOGIC_VECTOR from an integer.
This is the code I have right now:
entity ROM is
Port ( hcount: in STD_LOGIC_VECTOR(9 downto 0);
vcount: in STD_LOGIC_VECTOR(9 downto 0);
hpos: in integer;
vpos: in integer;
clk25: in STD_LOGIC;
Pixeldata: out std_logic);
end ROM;
architecture Behavioral of ROM is
signal romtemp : std_logic_vector(9 downto 0);
shared variable yas : integer range 0 to 9 := 0;
shared variable xas : integer range 0 to 9 := 0;
Type RomType is array (9 downto 0) of std_logic_vector(9 downto 0);
Constant Rom: RomType :=
( "0001111000", "0111111110", "0111111110", "1111111111", "1111111111"
, "1111111111", "1111111111", "0111111110", "0111111110", "0001111000");
begin
process(clk25)
begin
if(hpos > hcount - 10) and (hpos <= hcount) and (vpos > vcount - 10) and (vpos <= vcount) then
xas := hpos - to_integer(unsigned(hcount));
end if;
end process;
end Behavioral;
The problem is the following line of code:
xas := hpos - to_integer(unsigned(hcount));
I am trying to put the subtraction in the integer named xas.
The following errors occur on that line:
Error: Multiple declarations of unsigned included via multiple use clauses; none are made directly visible
Error: Expecting type unsigned for < unsigned(hcount) >.
Error: Formal < arg > has no actual or default value.
Error: Type integer is not an array type and cannot be indexed
Error: found '0' definitions of operator "=", cannot determine exact overload matching definition for "-"
Can someone help me with these errors? (I am a beginner in VHDL.)
You haven't included your use clauses at the top of the file, but what this error is saying is that from the use clauses, it found two different definitions of unsigned. Because of this, the tool has ignored both definitions, generating an error and forcing you to deal with the problem.
The most likely explanation is that you have:
use ieee.numeric_std.all;
use ieee.std_logic_arith.all;
std_logic_arith is nonstandard, and you should implement your design using the types and functions available in numeric_std only. Remove the std_logic_arith line.
In general, if something is a number, use a numeric type to represent it. For example, your hcount and vcount inputs are clearly counters, and could use type unsigned. If you use more appropriate types in the first place, you avoid the need for awkward looking type conversions, for example:
xas := hpos - to_integer(unsigned(hcount));
would become
xas := hpos - hcount;
Additional problems in your code:
Your process sensitivity list contains only clk25, but the process is not actually a synchronous process, so all the input signals used should be in the list (or, in VHDL-2008, you can use the reserved word all to generate an automatic list, i.e. process(all)).
Unless this is some special case, you are better off getting into the habit of writing synchronous processes. These look like this:
process(clk)
begin
if (rising_edge(clk)) then
-- Do things
end if;
end process;
xas is a shared variable, which implies that you might be assigning it in other processes as well. This will probably not work how you expect it to. You should avoid shared variables altogether until you have a good understanding of exactly how they work, and when it might be appropriate to use them.
Are there any branch-less or similar hacks for clamping an integer to the interval of 0 to 255, or a double to the interval of 0.0 to 1.0? (Both ranges are meant to be closed, i.e. endpoints are inclusive.)
I'm using the obvious minimum-maximum check:
value = (value < 0 ? 0 : value > 255 ? 255 : value);
but is there a way to get this faster -- similar to the "modulo" clamp value & 255? And is there a way to do similar things with floating points?
I'm looking for a portable solution, so preferably no CPU/GPU-specific stuff please.
This is a trick I use for clamping an int to a 0 to 255 range:
/**
 * Clamps the input to a 0 to 255 range.
 * @param v any int value
 * @return {@code v < 0 ? 0 : v > 255 ? 255 : v}
 */
public static int clampTo8Bit(int v) {
// if out of range
if ((v & ~0xFF) != 0) {
// invert sign bit, shift to fill, then mask (generates 0 or 255)
v = ((~v) >> 31) & 0xFF;
}
return v;
}
That still has one branch, but a handy thing about it is that you can test whether any of several ints are out of range in one go by ORing them together, which makes things faster in the common case that all of them are in range. For example:
/** Packs four 8-bit values into a 32-bit value, with clamping. */
public static int ARGBclamped(int a, int r, int g, int b) {
if (((a | r | g | b) & ~0xFF) != 0) {
a = clampTo8Bit(a);
r = clampTo8Bit(r);
g = clampTo8Bit(g);
b = clampTo8Bit(b);
}
return (a << 24) + (r << 16) + (g << 8) + (b << 0);
}
Note that your compiler may already give you what you want if you code value = min (value, 255). This may be translated into a MIN instruction if it exists, or into a comparison followed by conditional move, such as the CMOVcc instruction on x86.
The following code assumes two's complement representation of integers, which is usually a given today. The conversion from Boolean to integer should not involve branching under the hood, as modern architectures either provide instructions that can directly be used to form the mask (e.g. SETcc on x86 and ISETcc on NVIDIA GPUs), or can apply predication or conditional moves. If all of those are lacking, the compiler may emit a branchless instruction sequence based on arithmetic right shift to construct a mask, along the lines of Boann's answer. However, there is some residual risk that the compiler could do the wrong thing, so when in doubt, it would be best to disassemble the generated binary to check.
int value, mask;
mask = 0 - (value > 255); // mask = all 1s if value > 255, all 0s otherwise
value = (255 & mask) | (value & ~mask);
On many architectures, use of the ternary operator ?: can also result in a branchless instruction sequence. The hardware may support select-type instructions which are essentially the hardware equivalent of the ternary operator, such as ICMP on NVIDIA GPUs. Or the hardware may provide CMOV (conditional move), as on x86, or predication, as on ARM, both of which can be used to implement branchless code for ternary operators. As in the previous case, one would want to examine the disassembled binary code to be absolutely sure the resulting code is without branches.
int value;
value = (value > 255) ? 255 : value;
In case of floating-point operands, modern floating-point units typically provide FMIN and FMAX instructions which map straight to the C/C++ standard math functions fmin() and fmax(). Alternatively fmin() and fmax() may be translated into a comparison followed by a conditional move. Again, it would be prudent to examine the generated code to make sure it is branchless.
double value;
value = fmax (fmin (value, 1.0), 0.0);
I use this thing, 100% branchless.
int clampU8(int val)
{
val &= (val<0)-1; // clamp < 0
val |= -(val>255); // clamp > 255
return val & 0xFF; // mask out
}
For those using C#, Kotlin or Java, this is the best I could do; it's nice and succinct, if somewhat cryptic:
(x & ~(x >> 31) | 255 - x >> 31) & 255
It only works on signed integers so that might be a blocker for some.
For clamping doubles, I'm afraid there's no language/platform agnostic solution.
The problem with floating point is that compilers offer options ranging from fastest (MSVC /fp:fast, gcc -funsafe-math-optimizations) to fully precise and safe (MSVC /fp:strict, gcc -frounding-math -fsignaling-nans). In fully precise mode the compiler does not try to use any bit hacks, even when it could.
A solution that manipulates the bits of a double cannot be portable: endianness may differ, there may be no (efficient) way to get at the bits, and double is not necessarily IEEE 754 binary64 in the first place. In addition, direct bit manipulation will not raise signals for signaling NaNs when they are expected.
For integers most likely the compiler will do it right anyway, otherwise there are already good answers given.
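As a concrete takeaway (my own sketch, not part of the answers above), writing the clamps with plain comparisons and the standard fmin()/fmax(), and leaving instruction selection to the compiler, is usually the most portable option; on common targets, optimizing compilers lower these to min/max or conditional-move instructions rather than branches, but it is still worth inspecting the generated code:

#include <math.h>

/* Clamp an int to [0, 255] with plain comparisons; optimizing compilers
 * commonly turn this into conditional moves or min/max instructions. */
static inline int clamp_u8(int v)
{
    v = v > 255 ? 255 : v;
    v = v < 0 ? 0 : v;
    return v;
}

/* Clamp a double to [0.0, 1.0] using the standard C99 fmin()/fmax(),
 * which typically map to hardware FMIN/FMAX instructions. */
static inline double clamp01(double v)
{
    return fmax(fmin(v, 1.0), 0.0);
}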
I am new to VHDL and I wanted to ask: what generic expression could I use to allow an input vector of any size?
GENERIC (n1 : integer);
x:IN BIT_VECTOR(n1-1 downto 0);
Is that a correct example?
Your generic has no default value visible.
Your declaration of x is incomplete: it appears to be an entity declarative item with a mode, but there is no port declaration.
This VHDL code is syntactically and semantically valid:
entity foo is
generic ( n1: integer);
port (
x: in bit_vector(n1-1 downto 0)
);
end entity;
architecture fum of foo is
begin
end architecture;
It will analyze. It can't be elaborated without the value of n1 being known:
entity foo_tb is
constant N_1: integer := 4;
end entity;
architecture fum of foo_tb is
signal x: bit_vector (N_1-1 downto 0);
begin
DUT:
entity work.foo
generic map (n1 => N_1)
port map ( x => x);
end architecture;
Entity foo by itself can't be the top level of an elaborated model because n1 isn't defined for elaboration.
Entity foo_tb can be elaborated, it uses the constant N_1 to supply a value to n1.
foo_tb can even be simulated, but it will exit immediately because there are no pending signal assignments after initialization.
Neither foo nor foo_tb can be synthesized: foo_tb because it has no ports, so any logic in its design hierarchy would be optimized away as unused; and foo because its only port is an input and its architecture is empty, so there is nothing to synthesize.
If foo had multiple ports, with outputs depending on inputs, it would be eligible for synthesis or simulation as long as the generic was defined for elaboration.
(And the moral here is to provide a Minimal, Complete, and Verifiable example so someone doesn't have to wave their hands around its shortcomings.)
You can use any expression, as long as its result does not exceed the BIT_VECTOR's index range.
BIT_VECTOR definition: type BIT_VECTOR is array (NATURAL range <>) of BIT;
So your expression can yield values from 0 to NATURAL'HIGH (typically 2**31 - 1).
Expression examples:
4*n1 - 1 downto 0
n1/4 + 8 downto 0
log2ceilnz(n1) - 1 downto 0
2**n1 - 1 downto 0
Following Paebbels' comment, I have edited this answer:
Whenever you want to synthesize your code, the synthesis tool has to know the sizes of the parameters you used; otherwise, what exactly would it synthesize? (What hardware?)
If you want to synthesize a top-level module whose entity has a generic parameter, you can give that generic a default value, as in the following code:
ENTITY ... IS
GENERIC(n1 : INTEGER := 8);
PORT(
-- use generic parameter
);
END ENTITY;
You can also use the generic parameter inside the architecture (sizes of signals, loop ranges, and so on).
I have a microcontroller and I am sampling the values of an LM335 temperature sensor.
The LCD library that I have allows me to display the hexadecimal value sampled by the 10-bit ADC.
The 10-bit ADC gives me values from 0x0000 to 0x03FF.
What I am having trouble with is converting that hexadecimal value into a format that can be understood by regular humans.
Any leads would be greatly appreciated, since I am completely lost on the issue.
You could create a "string" into which you construct the decimal number, like this (the constants depend on the actual range of the value, which I presume is 0-255, on whether you want it null-terminated, etc.):
char result[4];
char i = 3;
do {
result[i] = '0' + value % 10;
value /= 10;
i--;
}
while (value > 0);
Basically, your problem is how to split a number into decimal digits so you can use your LCD library and send one digit to each cell.
If your LCD is based on 7-segment cells, then you need to output a value from 0 to 9 for each digit, not an ASCII code. The solution by Roman Hocke is fine for this, provided that you don't add '0' to value % 10.
Another way to split a number into digits is to convert it to BCD. For that, there is an algorithm named "double dabble", which converts the number to BCD without using division or modulo operations; that can be nice if your microcontroller has no division instruction, or if division is slower than you need.
The "double dabble" algorithm sounds perfect for microcontrollers with no division instruction. However, a quick look at the algorithm on Wikipedia shows an implementation that uses dynamic memory, which seems worse than a division routine. Of course, there must be implementations out there that do not call malloc() and friends — see the sketch further below.
Just to point out that Roman Hocke's snippet has a little mistake. This version works for decimals in the range 0-255; it can easily be expanded to any range:
#include <stdint.h>

/* res must point to at least 4 characters: digits are written right-aligned
   into res[0..2] and res[3] gets the terminator. For values below 100 the
   unused leading positions are left untouched, so pre-fill them if needed. */
void dec2str(uint8_t val, char *res)
{
    uint8_t i = 2;
    do {
        res[i] = '0' + val % 10;
        val /= 10;
        i--;
    } while (val > 0);
    res[3] = 0;
}
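Coming back to the double-dabble suggestion above: the dec2str approach handles 0-255, but the ADC in the question produces 10-bit values (0-1023). Here is a minimal double-dabble sketch (my own illustration, not taken from any of the answers above) that covers the full 10-bit range using only shifts, additions and comparisons — no division, no modulo, and no dynamic memory:

#include <stdint.h>

/* Double dabble (shift-and-add-3) for a 10-bit value (0..1023), such as the
 * raw ADC reading. digits[0] receives the most significant decimal digit. */
void double_dabble_10bit(uint16_t value, uint8_t digits[4])
{
    /* The binary value sits in bits 0..9; the four BCD digits build up in bits 10..25. */
    uint32_t scratch = value & 0x3FF;

    for (int i = 0; i < 10; i++) {
        /* Before each shift, add 3 to every BCD nibble that is 5 or more. */
        for (int d = 0; d < 4; d++) {
            if (((scratch >> (10 + 4 * d)) & 0xF) >= 5)
                scratch += 3u << (10 + 4 * d);
        }
        scratch <<= 1;
    }

    for (int d = 0; d < 4; d++)
        digits[d] = (scratch >> (22 - 4 * d)) & 0xF;  /* digits[0] = bits 22..25 (MSD) */
}

Each element of digits is a plain 0-9 value, so it can be sent straight to a 7-segment cell, or turned into an ASCII character by adding '0' for a character LCD.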