How to read the input without pausing the snake game? - ada

This is a school assignment: to create a snake game.
I have created two packages, one for graphics and one for the snake.
The snake moves smoothly and everything works, but I need to control the snake with the keyboard. This is the main procedure:
with Graphics;       use Graphics;
with Graphics.Snake; use Graphics.Snake;

procedure Run_Snake is
   B : Buffer (1 .. 24, 1 .. 80);
   S : Snake_Type (1 .. 5) := ((10, 10),
                               (10, 11),
                               (10, 12),
                               (11, 12),
                               (12, 12));
   D : Duration := 0.07;
begin
   loop
      Empty (B);
      Draw_Rect (B, (1, 1), Width => 80, Height => 24);
      Draw (B, S);
      Update (B);
      Move (S, 0, -1);
      delay D;
   end loop;
end Run_Snake;
This line of code controls the direction of the snake's head:
Move (S, x, y);
where x can be -1 for left or 1 for right, and y can be -1 for down or 1 for up.
Anyway, how can I read keyboard input without pausing the snake's movement?
Thanks

You might want to use Ada's tasking facilities to overcome your problem.
procedure Snake_Game is

   --  S is declared before the task body so the task can see it;
   --  it is only touched from inside Run_Snake.
   S : Snake_Type (1 .. 5) := ((10, 10),
                               (10, 11),
                               (10, 12),
                               (11, 12),
                               (12, 12));
   x, y : Integer;

   task Run_Snake is
      entry Input_Received (x : Integer; y : Integer);
   end Run_Snake;

   task body Run_Snake is
      D : constant Duration := 0.07;
      B : Buffer (1 .. 24, 1 .. 80);
   begin
      loop
         select
            accept Input_Received (x : Integer; y : Integer) do
               Move (S, x, y);
            end Input_Received;
         or
            delay D;
            Empty (B);
            Draw_Rect (B, (1, 1), Width => 80, Height => 24);
            Draw (B, S);
            Update (B);
         end select;
      end loop;
   end Run_Snake;

begin
   loop
      Do_Whatever_You_Want_To_Get (x, y);  --  placeholder for your input routine
      Run_Snake.Input_Received (x, y);
   end loop;
end Snake_Game;
In such a system, the drawing runs in a separate task. At the select statement, the task waits for a call to Input_Received. If this entry is not called within duration D, the task executes all the drawing code and the loop begins anew.
Hope it can help.
Cheers.

There are a couple of basic approaches to solving this sort of problem.
The first is to use some kind of non-blocking call that checks for input, but returns immediately whether it finds any input or not. As Simon Wright mentioned in the comments, Ada provides such a call: Get_Immediate. The little nit with this routine is that (when last I checked) most compilers implement it in a way that still requires the user to hit the enter key before their input is available to that routine. Most OS's will have a system call for such an activity too (without that annoying enter key drawback), but blocking is usually their preferred behavior, so setting up non-blocking input is often difficult or arcane.
I don't know what's in that Graphics package of yours, but you may want to look through it to see if it provides such a call. I'm guessing this is a school programming assignment and that the package was built for it. If so, there's probably some facility in there for this, or else your instructor did not picture your game working that way.
The other approach is to use a different thread of control for reading user input. That way you can use the blocking calls that OSes love in one thread, and let the rest of your game merrily chug away as it should in the other. Ada implements this with tasks (as shown in tvuillemin's answer). The issue here is that any interaction between those two tasks (e.g. passing the input to the game task) has to be properly synchronized. Also, any package or facility used by both tasks must be task-safe. For standard Ada packages you are pretty safe, as long as you don't try to share file objects or something. But for third-party packages (like Graphics?) you are generally best just picking one task to "own" that package.
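If you go the tasking route, a protected object is the usual Ada way to pass the latest direction between tasks safely. A minimal sketch, assuming it is declared in the main procedure's declarative part (the names are mine):
protected Direction is
   procedure Set (X, Y : Integer);
   procedure Get (X, Y : out Integer);
private
   Cur_X : Integer := 0;
   Cur_Y : Integer := -1;  --  start moving down, as in the question
end Direction;

protected body Direction is
   procedure Set (X, Y : Integer) is
   begin
      Cur_X := X;
      Cur_Y := Y;
   end Set;

   procedure Get (X, Y : out Integer) is
   begin
      X := Cur_X;
      Y := Cur_Y;
   end Get;
end Direction;
The input task calls Direction.Set whenever a key arrives, and the game loop calls Direction.Get once per frame; the protected object serializes the accesses.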
Most windowing systems solve this problem in an incredibly complex way. They implement their own "main loop" that is supposed to take over your thread of control, and it takes care of mundane duties like refreshing the contents of windows and polling for input. If you want to do something custom (e.g. update game state periodically), you have to put it in a routine and register it as a "callback".

Related

How to use recursion for a subset problem in C

I am a beginner trying to teach myself C, and I had a problem the other day that I thought would be cool to solve with a short program. It turned out a bit more difficult than I initially thought. Basically the problem goes like this:
I want to pass a single int value between 0..255 (never outside this range) into a function. Inside the function there is an array of 8 values (1, 2, 4, 8, 16, 32, 64, 128), which can be added together to reach the single int value, and the function should return the possible combinations. I.e.
Target 192
Returns
64, 128
From what I have read this is a subset-sum problem and can be solved with recursion, but I am really struggling to put the theory and examples I've found into practice. Could someone help me out, or at least point me in the right direction?
Hint: try the "bitwise and" operator (&)
First of all, it's a good idea to keep I/O and algorithms separated. So you generally shouldn't design functions which take user input and perform some algorithm at the same time.
Next up, "can be solved with recursion" is not a goal of its own. Recursion is dangerous, inefficient and hard to read. There exist very few cases where it should be used in C programming, and no cases where beginners should use it at all. Most of the time, recursion in C simply boils down to: "I could paint this barn while standing on my hands at the same time"... well, maybe you could, maybe you could do it without risking breaking your neck, maybe you can even do it as quickly as if you were standing upright (not likely), but why would you do it?
Program design aside, the algorithm you are looking for is closely related to binary numbers. Any number in any base can be formed by:
digit_n * base^n + digit_(n-1) * base^(n-1) + ... + digit_0 * base^0
In the case of binary (base 2) numbers, for example, 111 can be manually decoded to decimal as:
1 * 2^2 + 1 * 2^1 + 1 * 2^0 = 4 + 2 + 1 = 7 decimal.
Now if we compare this with your algorithm, the multipliers above for base 2 correspond to 1, 2, 4, 8...
Conveniently, all numbers in C are actually raw binary. They only get translated to other bases during user input/output. So what you need for your algorithm is simply a way to check whether the individual digits of a binary number are set or not.
This can be done with the & "bitwise AND" and << "bitwise left shift" operators. The left shift moves the value 1 to produce the various multipliers: 1<<0 = 1 (1 binary), 1<<1 = 2 (10 binary), 1<<2 = 4 (100 binary), and so on. Bitwise AND then masks out an individual bit from the rest, to see if it is set or not. If it isn't set, then by the above formula we get 0 * base^n for that digit, so it contributes nothing and can be ignored.
Writing the actual C code for that is quite easy:
#include <stdio.h>

int main (void)
{
    unsigned int number = 192;  /* the target value, 0..255 */

    for (int i = 0; i < 8; i++)
    {
        unsigned int mask = 1u << i;  /* 1, 2, 4, ... 128 */
        if (mask & number)            /* is bit i set in the target? */
        {
            printf("%u\n", mask);
        }
    }
    return 0;
}
(This is using unsigned numbers to avoid various common bugs, but that's a topic of its own.)

MPI - Deal with special case of zero sized subarray for MPI_TYPE_CREATE_SUBARRAY

I use MPI_TYPE_CREATE_SUBARRAY to create a type used to communicate portions of 3D arrays between neighboring processes in a Cartesian topology. Specifically, each process communicates with the two processes on the two sides along each of the three directions.
Referring for simplicity to a one-dimensional grid, there are two parameters nL and nR that define how many values each process has to receive from the left and send to the right, and how many each has to receive from the right and send to the left.
Unaware (or maybe just forgetful) of the fact that all elements of the array_of_subsizes parameter of MPI_TYPE_CREATE_SUBARRAY must be positive, I wrote my code so that it can't deal with the case nR = 0 (or nL = 0; either can be zero).
(By the way, I see that MPI_TYPE_VECTOR does accept zero count and blocklength arguments and it's sad MPI_TYPE_CREATE_SUBARRAY can't.)
How would you suggest facing this problem? Do I really have to convert each call to MPI_TYPE_CREATE_SUBARRAY into multiple chained MPI_TYPE_VECTOR calls?
The following code is minimal but not self-contained (it works in the larger program, but I haven't had time to extract the minimal set of declarations and prints); still, it should give a better idea of what I'm talking about.
INTEGER, PARAMETER :: ndims = 3
INTEGER :: DBS, ierr, temp, sub3D
INTEGER, DIMENSION(ndims) :: aos, aoss  ! array sizes and subsizes, set elsewhere

CALL MPI_TYPE_SIZE(MPI_DOUBLE_PRECISION, DBS, ierr)

! doesn't work if ANY(aoss == 0)
CALL MPI_TYPE_CREATE_SUBARRAY(ndims, aos, aoss, [0,0,0], MPI_ORDER_FORTRAN, &
     MPI_DOUBLE_PRECISION, sub3D, ierr)

! does work even if ANY(aoss == 0)
CALL MPI_TYPE_HVECTOR(aoss(2), aoss(1), DBS*aos(1), MPI_DOUBLE_PRECISION, temp, ierr)
CALL MPI_TYPE_HVECTOR(aoss(3), 1, DBS*PRODUCT(aos(1:2)), temp, sub3D, ierr)
In the end it wasn't hard to replace MPI_TYPE_CREATE_SUBARRAY with two MPI_TYPE_HVECTORs. Maybe this is the best solution after all.
In this sense, a side question comes naturally to me: why is MPI_TYPE_CREATE_SUBARRAY so limited? There are a lot of cases in the MPI standard where things correctly fall back on "do nothing" (when a sender or receiver is MPI_PROC_NULL) or "there's nothing in this" (when aoss has a zero entry, as in my example). Should I post a feature request somewhere?
The MPI 3.1 standard (chapter 4.1, page 95) makes it crystal clear:
For any dimension i, it is erroneous to specify array_of_subsizes[i] < 1 [...].
You are free to send your comment to the appropriate Mailing List.
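If you would rather keep a single call site than chain MPI_TYPE_HVECTORs, one option is a small wrapper that falls back on a zero-length type. This is only a hedged sketch (the subroutine name and interface are mine): a transfer using the zero-length type simply moves no data.
SUBROUTINE create_subarray_or_empty(ndims, aos, aoss, starts, newtype, ierr)
   USE mpi
   IMPLICIT NONE
   INTEGER, INTENT(IN)  :: ndims
   INTEGER, INTENT(IN)  :: aos(ndims), aoss(ndims), starts(ndims)
   INTEGER, INTENT(OUT) :: newtype, ierr

   IF (ANY(aoss < 1)) THEN
      ! zero-element type: a zero count is legal for MPI_TYPE_CONTIGUOUS
      CALL MPI_TYPE_CONTIGUOUS(0, MPI_DOUBLE_PRECISION, newtype, ierr)
   ELSE
      CALL MPI_TYPE_CREATE_SUBARRAY(ndims, aos, aoss, starts, &
           MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, newtype, ierr)
   END IF
   CALL MPI_TYPE_COMMIT(newtype, ierr)
END SUBROUTINE create_subarray_or_empty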

MPI neighbor reduce operation

This is the moment I feel I need something like MPI_Neighbor_allreduce, but I know it doesn't exist.
Foreword
Given a 3D MPI cartesian topology describing how a 3D physical domain is distributed among processes, I wrote a function probe that asks for a scalar value (which is supposed to be put in a simple REAL :: val) given the 3 coordinates of a point inside the domain.
There can only be 1, 2, 4, or 8 process(es) that are actually involved in the computation of val.
1 if the point is internal to a process subdomain (and it has no neighbors involved),
2 if the point is on a face between 2 processes' subdomains (and each of them has 1 neighbor involved),
4 if the point is on a side between 4 processes' subdomains (and each of them has 2 neighbors involved),
8 if the point is a vertex between 8 processes' subdomains (and each of them has 3 neighbors involved).
After the call to probe as it is now, each process holds val, which is some value for involved processes and 0 or NaN (I decide by (de)commenting the proper lines) for not-involved processes. Each process knows whether it is involved (through a LOGICAL :: found variable), but does not know whether it is the only one involved, nor who the involved neighbors are if it is not.
In the case of 1 involved process, that only value of that only process is enough, and the process can write it, use it, or whatever is needed.
In the latter three cases, the sum of the different scalar values of the processes involved must be computed (and divided by the number of neighbors +1, i.e. self included).
The question
What is the best strategy to accomplish this communication and computation?
What solutions I'm thinking about
I'm thinking about the following possibilities.
Every process executes val = 0 before the call to probe; then MPI_(ALL)REDUCE can be used (the involved processes participating with val /= 0 in general, all the others with val == 0). But this would mean that if values are requested at several points, those points are treated serially, even when the sets of involved processes do not overlap.
Every process calls MPI_Neighbor_allgather to share found among neighboring processes, so that each involved process knows which of its 6 neighbors participate in the sum, and then performs individual MPI_SENDs and MPI_RECVs to communicate val. But this would still involve every process (even though each communicates only with its 6 neighbors).
Maybe the best choice is for each process to define a communicator made up of itself plus its 6 neighbors, and then use a collective reduction on it.
EDIT
Regarding the risk of deadlock mentioned by @JorgeBellón: I initially solved it by calling MPI_SEND before MPI_RECV for communications in the positive direction (those corresponding to even indices in who_is_involved) and vice versa in the negative direction. As a special case, this could not deal with a periodic direction having only two processes along it: each of the two sees the other as a neighbor in both the positive and negative directions, so both processes call MPI_SEND and MPI_RECV in the same order, causing a deadlock. The solution to this special case was the following ad-hoc edit to who_is_involved (which I called found_neigh in my code):
DO id = 1, ndims
   IF (ALL(found_neigh(2*id - 1:2*id))) found_neigh(2*id - 1 + mycoords(id)) = .FALSE.
END DO
As a reference for the readers, the solution that I implemented so far (a solution I'm not so satisfied with) is the following.
found = ...  ! .TRUE. or .FALSE. depending on whether the process is involved in computing val
IF (found)       val = ...  ! compute own contribution
IF (.NOT. found) val = NaN

! share found among neighbors
found_neigh(:) = .FALSE.
CALL MPI_NEIGHBOR_ALLGATHER(found, 1, MPI_LOGICAL, found_neigh, 1, MPI_LOGICAL, procs_grid, ierr)
found_neigh = found_neigh .AND. found

! modify found_neigh to deal with the special case of TWO processes along a PERIODIC direction
DO id = 1, ndims
   IF (ALL(found_neigh(2*id - 1:2*id))) found_neigh(2*id - 1 + mycoords(id)) = .FALSE.
END DO

! exchange contributions with neighbors
val_neigh(:) = NaN
IF (found) THEN
   DO id = 1, ndims
      IF (found_neigh(2*id)) THEN
         CALL MPI_SEND(val, 1, MPI_DOUBLE_PRECISION, idp(id), 999, MPI_COMM_WORLD, ierr)
         CALL MPI_RECV(val_neigh(2*id), 1, MPI_DOUBLE_PRECISION, idp(id), 666, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
      END IF
      IF (found_neigh(2*id - 1)) THEN
         CALL MPI_RECV(val_neigh(2*id - 1), 1, MPI_DOUBLE_PRECISION, idm(id), 999, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
         CALL MPI_SEND(val, 1, MPI_DOUBLE_PRECISION, idm(id), 666, MPI_COMM_WORLD, ierr)
      END IF
   END DO
END IF

! combine own contribution with the others'
val = somefunc(val, val_neigh)
As you said, MPI_Neighbor_allreduce does not exist.
You can create derived communicators that only include your adjacent processes and then perform a regular MPI_Allreduce on them. In a 3D grid, each process can be part of up to 7 such communicators:
the communicator in which that process is placed at the center of the stencil, and
the respective communicator for each of its adjacent processes.
Creating these can be quite an expensive process, but that does not mean it cannot be worthwhile (HPLinpack makes extensive use of derived communicators, for example).
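For what it's worth, here is a hedged sketch of that construction (the helper name and the periodic-grid assumption are mine, not part of MPI). In a Cartesian topology every member of a stencil can compute the same member list from the center's coordinates, so the group can be built locally; each process then calls the helper once per stencil it belongs to, in a consistent global order (say, by increasing center rank) so the blocking MPI_Comm_create_group calls cannot deadlock.
#include <string.h>
#include <mpi.h>

/* Build the communicator whose "center" is the process at Cartesian
 * coordinates ccoords; all 7 stencil members must make this same call.
 * Assumes periodic boundaries and at least 3 processes per direction,
 * so every neighbor exists and the 7 ranks are distinct. */
static MPI_Comm stencil_comm(MPI_Comm cart, const int ccoords[3])
{
    int ranks[7], coords[3], nranks = 0;

    memcpy(coords, ccoords, sizeof coords);
    MPI_Cart_rank(cart, coords, &ranks[nranks++]);      /* the center */
    for (int d = 0; d < 3; d++)
        for (int s = -1; s <= 1; s += 2) {
            memcpy(coords, ccoords, sizeof coords);
            coords[d] += s;                             /* one face neighbor */
            MPI_Cart_rank(cart, coords, &ranks[nranks++]);
        }

    MPI_Group world_grp, stencil_grp;
    MPI_Comm_group(cart, &world_grp);
    MPI_Group_incl(world_grp, nranks, ranks, &stencil_grp);

    /* collective over the group members only; the center's rank is a
     * convenient tag to keep concurrent creations distinct */
    MPI_Comm newcomm;
    MPI_Comm_create_group(cart, stencil_grp, ranks[0], &newcomm);
    MPI_Group_free(&stencil_grp);
    MPI_Group_free(&world_grp);
    return newcomm;
}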
If you already have a Cartesian topology, a good approach is to use MPI_Neighbor_allgather. This way you will not only know how many neighbors are involved but also who they are.
int found;                          // logical: either 1 or 0
int num_neighbors;                  // how many neighbors I have
int who_is_involved[num_neighbors]; // unknown, to be received

MPI_Neighbor_allgather( &found, ..., who_is_involved, ..., comm );

int actually_involved = 0;
int r = 0;
MPI_Request reqs[2*num_neighbors];
for( int i = 0; i < num_neighbors; i++ ) {
    if( who_is_involved[i] != 0 ) {
        actually_involved++;
        MPI_Isend( &val, ..., reqs[r++] );
        MPI_Irecv( &val, ..., reqs[r++] );
    }
}
MPI_Waitall( r, reqs, MPI_STATUSES_IGNORE );
Note that I'm using non-blocking point-to-point routines. This is important in most cases because MPI_Send may wait for the receiver to call MPI_Recv. Unconditionally calling MPI_Send and then MPI_Recv in all processes may cause a deadlock (see MPI 3.1 standard, section 3.4).
Another possibility is to send both the real value and found in a single communication, so that the number of transfers is reduced. Since all processes are involved in the MPI_Neighbor_allgather anyway, you can use it to get everything done (for a small increase in the amount of data transferred, it really pays off).
INTEGER :: neighbor, num_neighbors, found
REAL    :: val
REAL    :: sendbuf(2)
REAL    :: recvbuf(2, num_neighbors)

sendbuf(1) = found
sendbuf(2) = val

! note: the receive count is per neighbor, i.e. one MPI_2REAL pair from each
CALL MPI_Neighbor_allgather( sendbuf, 1, MPI_2REAL, recvbuf, 1, MPI_2REAL, ...)

DO neighbor = 1, num_neighbors
   IF (recvbuf(1, neighbor) == 1.0) THEN
      ! use the neighbor's val, placed in recvbuf(2, neighbor)
   END IF
END DO

Functional programming design: A case about Memo in scalaz

I've been using Memo from scalaz for a while; however, here is a situation where I feel I am not able to remain pure:
def compute(a: Int, b: Int): Int = a + b  // an expensive computation

val cache = Memo.immutableHashMapMemo[(Int, Int), Int] {
  case (a, b) => compute(a, b)
}
Now, I have s1 and s2, both of type Set[(Int, Int)]. For example, s1 = Set((1,1), (1,2)) and s2 = Set((1,2), (1,3)). Each set has to be processed in parallel:
def computePar(s: Set[(Int, Int)]): Set[Int] = // using compute() in parallel
So the issue is that each call only gives me the set of results for one input set, while my Memo should persist as a Map[(Int, Int), Int], because the compute(1,2) result from s1 can be reused for the first element of s2. Using a mutable map would solve the problem; I just wonder if there is an FP solution. I feel it might be related to Kleisli or similar.
Your question may be similar to this one: Pimping scalaz Memo
You will find a solution to your problem there, I think, but I have something more to say about TrieMap (the structure proposed in the linked question). It scales well with the number of threads (horizontal scaling is very good), but it has a rather large constant cost per operation.
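To make that concrete, here is a minimal sketch of a TrieMap-based memo usable from parallel code (the object and method names are mine; compute stands in for the expensive function from the question, and .par assumes Scala 2.12, or the scala-parallel-collections module on 2.13):
import scala.collection.concurrent.TrieMap

object TrieMapMemo {
  def compute(a: Int, b: Int): Int = a + b  // expensive in reality

  private val cache = TrieMap.empty[(Int, Int), Int]

  // Under contention compute may run more than once for the same key,
  // but only one result is stored; harmless for a pure function.
  def cached(a: Int, b: Int): Int =
    cache.getOrElseUpdate((a, b), compute(a, b))

  // Results cached while processing s1 are transparently reused for s2.
  def computePar(s: Set[(Int, Int)]): Set[Int] =
    s.par.map { case (a, b) => cached(a, b) }.seq
}
Unlike the immutable Memo, the TrieMap is shared mutable state under the hood, but cached remains referentially transparent as long as compute is pure.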

Using Fortran90 and MPI, new to both, trying to use MPI_Gather to collect from a loop 3 different variables in each process

I am new to both Fortran 90 and MPI. I have a loop that iterates differently based on each individual process. Inside of that, I have a nested loop, and it is here that I make the computations I desire from the elements of the respective loops. However, I want to send all of this data (the x, the y, and the value computed from x and y) to my root process, 0. From there, I want to write all of the data to the same file in the format 'x y computation'.
program fortranMPI
   use mpi

   ! GLOBAL VARIABLE DECLARATION
   real    :: step = 0.5, x, y, comput
   integer :: count = 0, finalCount = 5, outFile = 20, i

   ! MPI
   integer :: ierr, myrank, mysize, status(MPI_STATUS_SIZE)

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, mysize, ierr)

   if (myrank == 0) then
      ! I want to gather my data here?
   end if

   do i = 1, mysize, 1
      if (myrank == i) then
         x = -2. + (myrank - 1.)*step
         do while (x <= 2.)
            y = -2.
            do while (y <= 2.)
               ! Here is where I am trying to send my data!
               y = y + step
            end do
            x = x + (mysize - 1)*step
         end do
      end if
   end do

   call MPI_FINALIZE(ierr)
end program fortranMPI
I keep getting stuck trying to pass the data! If someone could help me out, that would be great! Sorry if this is simpler than I am making it, I am still trying to figure Fortran/MPI out. Thanks in advance!
First of all, the program does not seem to make much sense as written. If you can be more specific about what you want to do, I can help further.
Now, usually, this if (myrank == 0) statement before the calculations is where you would send your data to the rest of the processes. Since process 0 will be sending data, you have to add code right after that so that the other processes receive it. You may also need an MPI_BARRIER (call MPI_BARRIER) right before the start of the calculations, just to make sure the data has reached every process.
As for the calculation part, you also have to decide not only where you send data, but also where the data is received and whether you need any synchronization of the communication. This has to do with the design of your program, so you are the one who knows what exactly you want to do.
The most common commands for sending and receiving data are MPI_SEND and MPI_RECV.
These are blocking commands, which means that the communication is synchronized: one send must be matched with one receive before both processes can continue.
There are also non-blocking commands; you can find them all here:
http://www.mpich.org/static/docs/v3.1/www3/
As for the MPI_GATHER command, it is used to gather data from a group of processes and store it in an array fashion. It will only help you once you use more than 2 processes to further accelerate your program, and it is worth using only when you are going to receive lots of data, which is definitely not your case here.
Finally, about printing out the results: I'm not sure that what you are asking is possible. Trying to open the same file handle from 2 processes is probably going to lead to OS errors. Usually, for printing out the results, you have rank 0 do it after every other process has finished.
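To make the shape of that pattern concrete, here is a hedged sketch (the grid spacing, buffer size, and stand-in computation are illustrative, not taken from your program): every rank packs its (x, y, computation) triples into a local buffer, then rank 0 writes its own triples and receives and writes each other rank's in turn.
program gather_triples
   use mpi
   implicit none
   integer, parameter :: maxpts = 100
   real    :: buf(3*maxpts), x, y
   integer :: ierr, myrank, mysize, n, src, cnt
   integer :: status(MPI_STATUS_SIZE)

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, mysize, ierr)

   ! every rank (rank 0 included) fills its own buffer with triples
   n = 0
   x = -2. + real(myrank)*0.5
   do while (x <= 2. .and. n + 3 <= 3*maxpts)
      y = -2.
      do while (y <= 2. .and. n + 3 <= 3*maxpts)
         buf(n+1) = x
         buf(n+2) = y
         buf(n+3) = x + y        ! stand-in for the real computation
         n = n + 3
         y = y + 0.5
      end do
      x = x + real(mysize)*0.5
   end do

   if (myrank == 0) then
      open (unit=20, file='results.txt')
      write (20, '(3F10.4)') buf(1:n)   ! rank 0's own triples
      do src = 1, mysize - 1
         call MPI_RECV(buf, 3*maxpts, MPI_REAL, src, 0, MPI_COMM_WORLD, status, ierr)
         call MPI_GET_COUNT(status, MPI_REAL, cnt, ierr)
         write (20, '(3F10.4)') buf(1:cnt)
      end do
      close (20)
   else
      call MPI_SEND(buf, n, MPI_REAL, 0, 0, MPI_COMM_WORLD, ierr)
   end if

   call MPI_FINALIZE(ierr)
end program gather_triples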
