I have a question regarding initialization in VHDL. If an entity output port is initialized to one value, but is assigned a signal that is initialized to a different value, what initial value will the output assume? I mean a situation like the following:
entity TEST_ENTITY is
    Port (port0 : out STD_LOGIC := '0');
end TEST_ENTITY;

architecture Behavioral of TEST_ENTITY is
    signal signal0 : STD_LOGIC := '1';
begin
    port0 <= signal0;
end Behavioral;
I would assume that the initialization value of the signal will take precedence. Is this correct?
There is no precedence here. Signal assignments take at least one delta cycle to propagate. So at time 0, port0 will be '0' and signal0 will be '1'; port0 will become '1' after one delta cycle has elapsed.
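To see this in simulation, here is a minimal testbench sketch (the testbench itself, its names, and the report calls are mine, not from the question); it assumes TEST_ENTITY above is compiled into the work library:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity TEST_TB is
end TEST_TB;

architecture Sim of TEST_TB is
    signal port0 : STD_LOGIC;
begin
    DUT : entity work.TEST_ENTITY port map (port0 => port0);

    process
    begin
        -- At time 0, before the first delta cycle, the port still shows the
        -- initial value from its declaration.
        report "time 0: port0 = " & STD_LOGIC'image(port0);           -- '0'
        wait for 0 ns;  -- lets one delta cycle elapse
        report "after one delta: port0 = " & STD_LOGIC'image(port0);  -- '1'
        wait;
    end process;
end Sim;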
The documentation in the "Tasks in Toit" section indicates that the language has facilities for asynchronous data exchange between tasks. If I understand correctly, two classes from the monitor package, Channel and Mailbox, provide this capability. Unfortunately, I didn't find examples of using these classes, so I ask you to give at least the simplest example of the implementation of two tasks:
1. One of the tasks is a message generator; for example, it sends integers or strings to the second task, and the second task receives these numbers or strings. Perhaps in this case the Channel class should be used.

2. Each of the two tasks is both a generator and a receiver of messages. That is, the first task sends a message to the second task and in turn asynchronously receives the messages generated by the second task. Judging by the description, the Mailbox class should be used in this case.
Thanks in advance,
MK
Here's an example of the first part, using Channel. This class is useful if you have a stream of messages for another task.
import monitor

main:
  // A channel with a backlog of 5 items. Once the reader is 5 items behind, the
  // writer will block when trying to send. This helps avoid unbounded memory
  // use by the in-flight messages if messages are being generated faster than
  // they are being consumed. Decreasing this will tend to reduce throughput,
  // increasing it will increase memory use.
  channel := monitor.Channel 5
  task:: message_generating_task channel
  task:: message_receiving_task channel

/// Normally this could be looking at IO ports, GPIO pins, or perhaps a socket
/// connection. It could block for some unknown time while doing this. In this
/// case we just sleep a little to illustrate that the data arrives at random
/// times.
generate_message:
  milliseconds := random 1000
  sleep --ms=milliseconds
  // The message is just a string, but could be any object.
  return "Message creation took $(milliseconds)ms"

message_generating_task channel/monitor.Channel:
  10.repeat:
    message := generate_message
    channel.send message
  channel.send null  // We are done.

/// Normally this could be looking at IO ports, GPIO pins, or perhaps a socket
/// connection. It could block for some unknown time while doing this. In this
/// case we just sleep a little to illustrate that the data takes a random
/// amount of time to process.
process_message message:
  milliseconds := random 1000
  sleep --ms=milliseconds
  print message

message_receiving_task channel/monitor.Channel:
  while message := channel.receive:
    process_message message
Here is an example of using Mailbox. This class is useful if you have a task processing requests and giving responses to other tasks.
import monitor

main:
  mailbox := monitor.Mailbox
  task:: client_task 1 mailbox
  task:: client_task 2 mailbox
  task --background:: factorization_task mailbox

/// Normally this could be looking at IO ports, GPIO pins, or perhaps a socket
/// connection. It could block for some unknown time while doing this. For
/// this example we just sleep a little to illustrate that the data arrives at
/// random times.
generate_huge_number:
  milliseconds := random 1000
  sleep --ms=milliseconds
  return (random 100) + 1  // Not actually so huge.

client_task task_number mailbox/monitor.Mailbox:
  10.repeat:
    huge := generate_huge_number
    factor := mailbox.send huge  // Send number, wait for result.
    other_factor := huge / factor
    print "task $task_number: $factor * $other_factor == $huge"

// Factorize a number using the quantum computing port.
factorize_number number:
  // TODO: Use actual quantum computing instead of brute-force search.
  for i := number.sqrt.round; i > 1; i--:
    factor := number / i
    if factor * i == number:
      return factor
    // This will yield so the other tasks can run. In a real application it
    // would be waiting on an IO pin connected to the quantum computing unit.
    sleep --ms=1
  return 1  // 1 is sort-of a factor of all integers.

factorization_task mailbox/monitor.Mailbox:
  // Because this task was started as a background task (see 'main' function),
  // the program does not wait for it to exit so this loop does not need a real
  // exit condition.
  while number := mailbox.receive:
    result := factorize_number number
    mailbox.reply result
I'm pretty sure the Mailbox example worked great at the end of March. I decided to check it now and got an error.

In the Toit console:
./web.toit:8:3: error: Argument mismatch: 'task'
task --background:: factorization_task mailbox
^~~~
Compilation failed.
In the terminal:
micrcx@micrcx-desktop:~/toit_apps/Hsm/communication$ toit execute mailbox_sample.toit
mailbox_sample.toit:8:3: error: Argument mismatch: 'task'
task --background:: factorization_task mailbox
^~~~
Compilation failed.
Perhaps this is due to the latest SDK update. Just in case, here is my Toit CLI version: v1.0.0 (2021-03-29).
What is the correct syntax for initializing a dynamically allocated array in Ada? I have tried this:
type Short_Array is array (Natural range <>) of Short;
Items : access Short_Array;
Items := new Short_Array(1..UpperBound)'(others => 0);
which results in a compiler error - "binary operator expected". And this:
type Short_Array is array (Natural range <>) of Short;
Items : access Short_Array;
Items := new Short_Array(1..UpperBound);
Items.all := (others => 0);
which, surprisingly, seems to raise a SEGFAULT. I'm not sure what's going on there, but I wanted to get the syntax right before I start chasing my tail.
If you are using Ada 2012, you can do the following:
type Short_Array is array (Natural range <>) of Short with
  Default_Component_Value => 0;

Items : access Short_Array := new Short_Array(1 .. UpperBound);
The use of default initial values for arrays is explained in section 2.6 of the Ada 2012 Rationale http://www.ada-auth.org/standards/12rat/html/Rat12-2-6.html
Another approach in Ada is to define the record as a discriminated record, with the discriminant determining the size of the array field.
type Items_Record (Size : Natural) is record
   -- Some non-array fields of your record
   Items : Short_Array(1 .. Size);
end record;
An instance of the record can then be allocated in an inner block
Get(Items_Size);
declare
   My_Record : Items_Record(Size => Items_Size);
begin
   -- Process the instance of Items_Record
end;
The record is dynamically allocated on the stack. If the record size is very large you will encounter a stack overflow issue. If not, this works very well. One advantage of this approach is the instance is automatically de-allocated when the end of the block is reached.
The SEGFAULT in your second example almost certainly comes from the initialization.
Compare
type Short_Array is array (Natural range <>) of Short;
Items : access Short_Array;
Items := new Short_Array(1..UpperBound);
Items.all := (others => 0);
And this:
type Short_Array is array (Natural range <>) of Short;
Items : access Short_Array;
Items := new Short_Array(1..UpperBound);
for I in 1..UpperBound loop
   Items(I) := 0;
end loop;
And play with different values of ulimit -Ss, which sets the allowed stack size.
The point is that

Items.all := (others => 0);

builds the aggregate as a temporary array on the stack and copies it into your heap-allocated array. So although you think you are working on the heap, you still need a lot of stack. If your array is too big for your ulimit -Ss (or, considering both soft and hard limits, ulimit -s), you will segfault (or get a STORAGE_ERROR) even though you think everything is on the heap.
To mitigate this problem (though it might not work in every situation, e.g. if UpperBound is dynamic…), you can compile your code with:

-fstack-usage (to get stack usage information for each unit)

-Wstack-usage=2000 (or any limit more appropriate to your case; see GNAT's documentation for more information) to get warnings for functions using too much stack (or having unbounded stack usage).

The second option could have issued a warning and pointed you to your stack overflow.
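For example, with GNAT both switches can be passed through gnatmake (a sketch; adjust the limit and the file name to your project):

gnatmake main.adb -cargs -fstack-usage -Wstack-usage=2000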
I'm playing around with Ada, trying to get to grips with it. I still have a hard time figuring out the discriminant part, though. I have a task with one discriminant, and I'm trying to pass a duration to it. However, it tells me:
package Procedures is
   task type WhatchDog(dur : Duration := 1.0) is
      entry Reset(start : in Time);
      entry Sync(timedOut : out Boolean);
   end WhatchDog;
end Procedures;

with Procedures;
procedure Main is
   watch : Procedures.WhatchDog(dur => 0.5);
begin
   null;
end Main;
Discriminants must have a discrete or access type.
When I change my discriminant type to an access type,
task type WhatchDog(dur : access Duration := 1.0) is
it gives me the following warning:
Expected an access type with designated type "Standard Duration"
Found type universal real
I know there are other ways to realize a constructor, such as creating an entry point. But I would like to know what I'm doing wrong here, and understand the error I'm making.
The googling I've done so far doesn't really shed any light on this, and only makes use of real types, which seem to work fine. Here for example:
http://www.adaic.org/resources/add_content/docs/95style/html/sec_6/6-1-3.html
In your attempted workaround you're trying to assign a Duration value to an access discriminant. The proper assignment would be, if going that way:
task type WatchDog (Dur : access Duration := new Duration'(1.0)) is
at the price of an allocation that is never deallocated, that is, a minor memory leak. That could be a problem if you create/destroy many instances of the task type during a long-lived program, but in that case you also have to take care of reaping the tasks (at least in GNAT).

In this case, I would either have a first entry to pass the Duration value to the task, or pass a value in milliseconds (or whatever unit is appropriate) using a Natural discriminant and convert it inside the task. Certainly it is an itch in the language. A sketch of the first-entry approach is below.
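A minimal sketch of the first-entry approach (the names, and the assumption that Time comes from Ada.Calendar, are mine):

with Ada.Calendar; use Ada.Calendar;  --  assumed source of Time

package Procedures is
   task type WatchDog is
      entry Configure (Dur : in Duration);
      entry Reset (Start : in Time);
      entry Sync (Timed_Out : out Boolean);
   end WatchDog;
end Procedures;

package body Procedures is
   task body WatchDog is
      My_Dur : Duration := 1.0;  --  default until Configure is called
   begin
      accept Configure (Dur : in Duration) do
         My_Dur := Dur;
      end Configure;
      --  ... main loop using My_Dur, accepting Reset and Sync ...
   end WatchDog;
end Procedures;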
I am not sure this is the intended use of discriminants. Here, the duration is really a configuration of your instances, but it doesn't impact the layout of the type in memory. So I think it would be better (and certainly more usual) either to have a Configure entry that can be called to do the initial setup or, if this really needs to be given when the instance is created, to try creating a generic with a formal Duration parameter, as sketched below.
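A minimal sketch of the generic alternative (all names are hypothetical; the package body, which must contain the task body, is elided):

with Ada.Calendar; use Ada.Calendar;  --  assumed source of Time

generic
   Dur : Duration := 1.0;
package Generic_Watchdogs is
   task type WatchDog is
      entry Reset (Start : in Time);
      entry Sync (Timed_Out : out Boolean);
   end WatchDog;
end Generic_Watchdogs;

The duration is then fixed per instantiation rather than per object:

package Half_Second_Watchdogs is new Generic_Watchdogs (Dur => 0.5);
Watch : Half_Second_Watchdogs.WatchDog;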
The reason that you cannot assign 1.0 to dur (the access discriminant) is that dur is of a pointer type (anonymous, access to Duration) while 1.0 is a numeric literal. In short, you cannot assign a real value to a pointer variable, only a pointer value to a pointer variable. See Alex's answer for one way to get one.
The type Duration is not a discrete type, i.e. neither integral nor enumerated. But discrete is what a discriminant must be if not of an access type, given discriminants' original intent as outlined by manuBriot.
So, to initialize some aspect of task objects by passing a value for dur, you need pointers. The following small rewrite does not use an allocator (new); instead it gives names to the duration values and takes their pointers using 'Access.
with Ada.Calendar; use Ada.Calendar;  --  assumed source of Time

package Procedures is
   Duration_By_Default : aliased constant Duration := 1.0;

   task type Whatchdog
     (dur : access constant Duration := Duration_By_Default'Access)
   is
      entry Reset(start : in Time);
      entry Sync(timedOut : out Boolean);
   end Whatchdog;
end Procedures;

with Procedures;
procedure Main is
   Desired_Duration : aliased constant Duration := 0.5;
   watch : Procedures.Whatchdog(dur => Desired_Duration'Access);
begin
   null;
end Main;
Note the implications of having an access value point to a variable duration, instead of a constant duration as shown here.
Alternatively, perform a computation that produces a Duration value from some integer discriminant, named dur_in_msec, say:
task body Whatchdog is
   dur : constant Duration := Duration(Float(dur_in_msec) / 1_000.0);
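For completeness, the matching declaration for this alternative could look like this (a sketch reusing the question's names):

task type Whatchdog (dur_in_msec : Natural := 1_000) is
   entry Reset(start : in Time);
   entry Sync(timedOut : out Boolean);
end Whatchdog;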
I'm new to creating an FPGA system to drive an I2C bus (although I imagine this problem applies to any FPGA system) using a variety of different modules, all of which use a synchronous reset.
The modules are clocked using a clock divider module that takes the system clock and outputs a lower frequency to the rest of the system.
The problem I'm having is that when the reset signal goes low, the clock divider resets, and therefore the clock that the other modules depend on stops; thus the other modules do not register the reset.

An obvious solution would be to have an asynchronous reset; however, Xilinx ISE doesn't appear to like them and throws a warning saying that this is incompatible with the Spartan-6 FPGA (especially when the code after the asynchronous reset IS synchronous, which it is, because an I2C bus uses the bus clock to put bits onto the bus).

Another solution would be for the clock divider simply not to be resettable; the clock would then never stop and all modules would reset correctly. However, this means that the clock divider registers cannot be initialised/reinitialised to a known state, which I've been told would be a big problem, although I know you can use := '0'/'1' initial values in simulation, but this does not work once programmed onto the actual FPGA(?).

What is the convention for synchronous resets? Are clock generators generally just not reset? Or do they only reset on the instantaneous edge of the reset signal? Or are none of my suggestions a real solution?
I've put in a timing diagram as well as my code to illustrate both what I mean, and to show the code I've been using.
Thanks very much!
David
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

library UNISIM;
use UNISIM.VComponents.all;

ENTITY CLK_DIVIDER IS
    GENERIC(INPUT_FREQ : INTEGER;
            OUT1_FREQ  : INTEGER;
            OUT2_FREQ  : INTEGER);
    PORT(SYSCLK  : IN  STD_LOGIC;
         RESET_N : IN  STD_LOGIC;
         OUT1    : OUT STD_LOGIC;
         OUT2    : OUT STD_LOGIC);
END CLK_DIVIDER;

architecture Behavioral of Clk_Divider is

    constant divider1 : integer := INPUT_FREQ / OUT1_FREQ / 2;
    constant divider2 : integer := INPUT_FREQ / OUT2_FREQ / 2;

    signal counter1 : integer := 0;
    signal counter2 : integer := 0;

    signal output1 : std_logic := '0';
    signal output2 : std_logic := '0';

begin

    output1_proc : process(SYSCLK)
    begin
        if rising_edge(SYSCLK) then
            if RESET_N = '0' then
                counter1 <= 0;
                output1  <= '1';
            else
                if counter1 >= divider1 - 1 then
                    output1  <= not output1;
                    counter1 <= 0;
                else
                    counter1 <= counter1 + 1;
                end if;
            end if;
        end if;
    end process;

    output2_proc : process(SYSCLK)
    begin
        if rising_edge(SYSCLK) then
            if RESET_N = '0' then
                counter2 <= 0;
                output2  <= '1';
            else
                if counter2 >= divider2 - 1 then
                    output2  <= not output2;
                    counter2 <= 0;
                else
                    counter2 <= counter2 + 1;
                end if;
            end if;
        end if;
    end process;

    OUT1 <= output1;
    OUT2 <= output2;

end Behavioral;
Don't generate internal clocks with user logic; use a device-specific PLL/DCM if multiple clocks are really needed. All the user logic running on the derived clocks should then be held in reset until the clocks are stable, and the reset for the user logic can then be released as required by the design. Either synchronous or asynchronous reset can be used.

But in this case, probably generate a clock enable signal instead, and assert this enable signal for a single cycle each time an update of the signals is required in order to generate whatever protocol is needed, e.g. the I2C protocol with appropriate timing. A sketch of this is shown below.

Using fewer clocks, combined with synchronous clock enable signals, makes setup for Static Timing Analysis (STA) easier, and also avoids issues with reset synchronization and Clock Domain Crossing (CDC).
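As an illustration, here is a minimal sketch of such a clock enable generator (the entity, generic, and signal names are mine, not from the question). All logic stays on SYSCLK; downstream processes simply wrap their work in "if ENABLE = '1' then ... end if;" inside their clocked process:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity CLK_ENABLE_GEN is
    generic(DIVIDER : integer := 100);
    port(SYSCLK  : in  std_logic;
         RESET_N : in  std_logic;
         ENABLE  : out std_logic);
end CLK_ENABLE_GEN;

architecture Behavioral of CLK_ENABLE_GEN is
    signal counter : integer range 0 to DIVIDER - 1 := 0;
begin
    process(SYSCLK)
    begin
        if rising_edge(SYSCLK) then
            if RESET_N = '0' then
                counter <= 0;
                ENABLE  <= '0';
            elsif counter = DIVIDER - 1 then
                counter <= 0;
                ENABLE  <= '1';  -- a single SYSCLK-wide pulse
            else
                counter <= counter + 1;
                ENABLE  <= '0';
            end if;
        end if;
    end process;
end Behavioral;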
A robust way of handling the resets in a system like this is as follows:
Use a DCM/PLL/MMCM in the Xilinx FPGA to process the input system clock and generate all the output clock frequencies you need, bearing in mind that for really low frequencies you should use a clock within the specifications of the clock manager and generate a clock enable signal to use in conjunction with it. The clock manager can be reset from the host system at start-up, or if at any point the input clock is removed and then re-applied.

Invert the LOCKED signal from the clock manager to generate an active-high reset while it is in reset or in the process of locking to the input. This should be passed through an SRL16 or SRL32 to delay it. The SRL should be clocked with the output of the PLL after it has been put onto the global clock routing with a BUFG. Use an extra flip-flop after the SRL for improved timing. This signal can then be used as an active-high synchronous reset for the rest of the logic in the device where it is needed. A sketch of this reset bridge is shown below.

If you get timing errors on the clock enable signal because it is a high-fanout net, it could also be put through a BUFG to access the fast global clock network and improve timing.
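Here is a minimal sketch of such a reset bridge (the entity and signal names are mine). The shift register has no reset and relies on its initial value, which allows the synthesis tools to map it to an SRL16; the final register is the extra flip-flop mentioned above:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity RESET_BRIDGE is
    port(CLK    : in  std_logic;   -- PLL/MMCM output, after a BUFG
         LOCKED : in  std_logic;   -- LOCKED output of the clock manager
         RESET  : out std_logic);  -- active-high synchronous reset
end RESET_BRIDGE;

architecture Behavioral of RESET_BRIDGE is
    signal shift_reg : std_logic_vector(15 downto 0) := (others => '1');
begin
    process(CLK)
    begin
        if rising_edge(CLK) then
            shift_reg <= shift_reg(14 downto 0) & (not LOCKED);
            RESET     <= shift_reg(15);  -- extra flip-flop for timing
        end if;
    end process;
end Behavioral;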
@Stuart Vivian
(this should be posted as comment but I don't have enough reputation points to do so, sorry about that)
Consider using a counter instead of a shift register for delaying resets, because if the LUT contents are not cleared after loading the bitstream (some FPGA families have this behaviour), the reset signal may bounce, leading to unpredictable results. A counter-based sketch is shown below.
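A counter-based version of the same delay might look like this (a sketch with my own names); the reset stays asserted until the counter saturates, and the counter restarts whenever LOCKED drops:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity RESET_DELAY is
    port(CLK    : in  std_logic;
         LOCKED : in  std_logic;
         RESET  : out std_logic);
end RESET_DELAY;

architecture Behavioral of RESET_DELAY is
    signal count : unsigned(4 downto 0) := (others => '0');
begin
    process(CLK)
    begin
        if rising_edge(CLK) then
            if LOCKED = '0' then
                count <= (others => '0');
                RESET <= '1';
            elsif count /= 31 then
                count <= count + 1;
                RESET <= '1';
            else
                RESET <= '0';
            end if;
        end if;
    end process;
end Behavioral;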
Hello, I am trying to write proof annotations for this function. This is written using the SPARK programming language.
function Read_Sensor_Majority return Sensor_Type is
   count1 : Integer := 0;
   count2 : Integer := 0;
   count3 : Integer := 0;
   overall : Sensor_Type;
begin
   for index in Integer range 1..3 loop
      if State(index) = Proceed then
         count1 := count1 + 1;
      elsif State(index) = Caution then
         count2 := count2 + 1;
      elsif State(index) = Danger then
         count3 := count3 + 1;
      end if;
   end loop;
   if count1 >= 2 then
      overall := Proceed;
   elsif count2 >= 2 then
      overall := Caution;
   elsif count3 >= 2 then
      overall := Danger;
   else
      overall := Undef;
   end if;
   return overall;
end Read_Sensor_Majority;

begin -- initialization
   State := Sensordata'(Sensor_Index_Type => Undef);
end Sensors;
This is the .ads file:
package Sensors
--# own State;
--# initializes State;
is
   type Sensor_Type is (Proceed, Caution, Danger, Undef);
   subtype Sensor_Index_Type is Integer range 1..3;

   procedure Write_Sensors(Value_1, Value_2, Value_3 : in Sensor_Type);
   --# global in out State;
   --# derives State from State, Value_1, Value_2, Value_3;

   function Read_Sensor(Sensor_Index : in Sensor_Index_Type) return Sensor_Type;
   --# global in State;

   function Read_Sensor_Majority return Sensor_Type;
   --# global in State;
   --# return overall => (count1 >= 2 -> overall = Proceed) and
   --#                   (count2 >= 2 -> overall = Caution) and
   --#                   (count3 >= 2 -> overall = Danger);
end Sensors;
These are the errors I am getting after examining the file with the SPARK Examiner:
Sensors.ads:34:27
Semantic Error 1 - The identifier count1 is either undeclared or not visible at this point.
You have to declare identifiers before you can reference them (with some exceptions).
Most important of all, it is a basic principle in both SPARK and Ada that specifications can be processed without any knowledge whatsoever of possible matching implementations.
As neither overall nor count1, count2, or count3 are declared in the specification, you can't reference them there either.
Two small side notes:
Please use the same identifier style as in the language reference manual. (Leading upper-case, underscores separating words.)
Why is Sensor_Index_Type a subtype of Integer? (See the sketch below.)
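For instance, a standalone integer type (a sketch) avoids accidental mixing with unrelated Integer values:

type Sensor_Index_Type is range 1 .. 3;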
Just playing with SPARK myself so this is not a complete answer.
(It's worth mentioning which SPARK, as there are different versions, and SPARK 2014 seems to be quite a bit different from the others.) I currently only have the 2006 edition of Barnes, which doesn't cover the latest versions.

The basic problem is quite obvious: having provided an abstract view of the package's state (--# own State;) in the spec, you cannot then reach in and observe the specifics.
What to do about it is not completely clear to me, but has the outline form:
provide a more abstract form of the postconditions for Read_Sensor_Majority in the specification, and move the concrete form to the body.
One approach adopted in Ada 2012 (I don't know how applicable it is to SPARK 2005) is to provide an additional function in the spec, function satisfies_conditions (...) return boolean;, which can be called in the annotations and whose body contains the concrete implementation.
Barnes (ed. above) p.278 does show "proof functions" which can be declared in the annotations for this purpose. Their bodies then have access to the internal state to perform the concrete checks.
Given that you are showing the .ads and .adb files, I note that the proof in the .ads file is referencing items in the body. Could it be that the prover cannot reach into the body and pull these variables? (i.e. an issue of visibility.)
The message:
The identifier count1 is either undeclared or not visible at this point.
seems to indicate that this is the case.
I don't know SPARK, so that's my best guess.