How can a string that does not belong to the input language send a Turing machine into an infinite loop? - infinite-loop

How is it possible for a string that doesn't belong to the input language to send a Turing machine into an infinite loop, even if the machine has a reject state?

Consider the TM that does the following:
State 1: read the current tape cell. If it is 0, halt and accept. If it is 1, write 1, move right, and enter state 2.
State 2: read the current tape cell. If it is 0, halt and reject. If it is 1, write 1, move left, and enter state 1.
This machine accepts every string that begins with 0 and rejects every string that begins with 10, so it does have a reject state. But on any string that begins with 11, the head just bounces between the first two cells forever: the machine never accepts such a string, yet it never halts on it either.
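A minimal sketch of this behaviour (not part of the original answer; it just simulates the two states above with a step limit so the loop becomes visible):

#include <stdio.h>

int main(void) {
    char tape[] = "11";                /* a string the machine neither accepts nor rejects */
    int head = 0, state = 1, steps = 0, limit = 20;

    while (steps++ < limit) {
        if (state == 1) {
            if (tape[head] == '0') { printf("accept after %d steps\n", steps); return 0; }
            head++; state = 2;         /* read 1: write 1 back, move right, enter state 2 */
        } else {
            if (tape[head] == '0') { printf("reject after %d steps\n", steps); return 0; }
            head--; state = 1;         /* read 1: write 1 back, move left, enter state 1 */
        }
    }
    printf("still running after %d steps: the head keeps bouncing between the two 1s\n", limit);
    return 0;
}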

Related

Can the `odd-even check` only verify data correctness within limits?

The DES secret key is 64 bits long; the effective key length is 56 bits, and the other 8 bits are used as an odd-even (parity) check.
I have a question: can the odd-even check only verify data correctness within limits?
For example (I originally used an image to explain my thought): in the first row, the odd-parity check value is 0. If I swap row 2 and row 3, the check value is still 0, so the odd-even check cannot detect this change.
So the odd-even check can only check the count of 1s and 0s, but cannot decide whether the data itself is correct, right?
That is correct. The odd-even check catches a change in one bit, but swapping a 1 and a 0 involves changing two bits. There are methods that can catch a change of two bits, but they require more than one check bit.
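A small sketch of why that is (illustrative only; it uses even parity over one byte, but the argument is identical for the odd parity DES uses):

#include <stdio.h>

/* returns 1 if the byte has an odd number of 1-bits, 0 otherwise */
static int parity(unsigned char byte) {
    int ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (byte >> i) & 1;
    return ones & 1;
}

int main(void) {
    unsigned char original = 0x56;             /* 0101 0110, four 1-bits */
    unsigned char one_flip = original ^ 0x04;  /* one bit changed */
    unsigned char swapped  = 0x55;             /* 0101 0101: the lowest 1 and 0 exchanged */

    printf("original: parity %d\n", parity(original));
    printf("one flip: parity %d  (differs, so it is detected)\n", parity(one_flip));
    printf("swapped : parity %d  (same, so it is NOT detected)\n", parity(swapped));
    return 0;
}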

Using MPI_Put with shared memory windows

I have created a shared memory window to communicate with intra-node neighbors:
MPI_Win_allocate_shared(2*size*sizeof(double), 1, MPI_INFO_NULL,
                        shmcomm, &mem, &win);
double *anew = mem;
double *aold = mem + size;
and I would like to use MPI RMA when a neighbor is located on another node, so ideally I would like to do something like
MPI_Put(&aold[index], 1, Xtype, left, offset, 1, Xtype, win);
However, this reports an error, since win was allocated on the shared-memory (intra-node) communicator and therefore knows nothing about neighbors outside its own node.
Is there a way to create a secondary window to be used with aold so I can do something like
MPI_Put(&aold[index], 1, Xtype, left, offset, 1, Xtype, OtherWin);
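One possible approach, as a sketch rather than a tested answer: it assumes the same buffer can also be registered in a second window created with MPI_Win_create over the parent communicator, so inter-node puts go through that window while the shared-memory window keeps serving the on-node neighbors. The ring neighbor, the fence synchronization, and the placeholder size are illustrative, not taken from the question.

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm comm = MPI_COMM_WORLD, shmcomm;
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    /* intra-node communicator, as used for the shared window in the question */
    MPI_Comm_split_type(comm, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &shmcomm);

    int size = 8;                       /* placeholder for the question's `size` */
    double *mem;
    MPI_Win win, otherwin;

    /* shared-memory window for on-node neighbors, exactly as in the question */
    MPI_Win_allocate_shared(2 * size * sizeof(double), 1, MPI_INFO_NULL,
                            shmcomm, &mem, &win);
    double *anew = mem;
    double *aold = mem + size;
    for (int i = 0; i < size; i++) { anew[i] = 0.0; aold[i] = rank; }

    /* second window over the SAME memory, created on the full communicator,
       so RMA calls can also target ranks on other nodes */
    MPI_Win_create(mem, 2 * size * sizeof(double), 1, MPI_INFO_NULL,
                   comm, &otherwin);

    int left = (rank - 1 + nprocs) % nprocs;   /* illustrative neighbor */
    MPI_Aint offset = 0;                       /* byte offset, since disp_unit is 1 */

    MPI_Win_fence(0, otherwin);
    MPI_Put(&aold[0], 1, MPI_DOUBLE, left, offset, 1, MPI_DOUBLE, otherwin);
    MPI_Win_fence(0, otherwin);

    MPI_Win_free(&otherwin);
    MPI_Win_free(&win);
    MPI_Comm_free(&shmcomm);
    MPI_Finalize();
    return 0;
}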

MPI - Deal with special case of zero sized subarray for MPI_TYPE_CREATE_SUBARRAY

I use MPI_TYPE_CREATE_SUBARRAY to create a type used to communicate portions of 3D arrays between neighboring processes in a Cartesian topology. Specifically, each process communicates with the two processes on the two sides along each of the three directions.
Referring for simplicity to a one-dimensional grid, there are two parameters nL and nR that define how many values each process has to receive from the left and send to the right, and how many each has to receive from the right and send to the left.
Unaware (or maybe just forgetful) of the fact that all elements of the array_of_subsizes parameter of MPI_TYPE_CREATE_SUBARRAY must be positive, I wrote my code in a way that can't deal with the case nR = 0 (or nL = 0; it can be either).
(By the way, I see that MPI_TYPE_VECTOR does accept zero count and blocklength arguments and it's sad MPI_TYPE_CREATE_SUBARRAY can't.)
How would you suggest dealing with this problem? Do I really have to convert each call to MPI_TYPE_CREATE_SUBARRAY into multiple chained calls to MPI_TYPE_VECTOR?
The following code is minimal but not self-contained (the calls work in the larger program, but I haven't had time to extract the minimum number of declarations and prints); still, it should give a better idea of what I'm talking about.
INTEGER, PARAMETER :: ndims = 3               ! must be a constant to dimension aos/aoss
INTEGER :: DBS, ierr, temp, sub3D
INTEGER, DIMENSION(ndims) :: aos, aoss        ! full sizes and subsizes (set in the real code)
CALL MPI_TYPE_SIZE(MPI_DOUBLE_PRECISION, DBS, ierr)   ! DBS = size of a double in bytes
! doesn't work if ANY(aoss == 0)
CALL MPI_TYPE_CREATE_SUBARRAY(ndims, aos, aoss, [0,0,0], MPI_ORDER_FORTRAN, &
                              MPI_DOUBLE_PRECISION, sub3D, ierr)
! does work if ANY(aoss == 0); strides are in bytes
CALL MPI_TYPE_HVECTOR(aoss(2), aoss(1), DBS*aos(1), MPI_DOUBLE_PRECISION, temp, ierr)
CALL MPI_TYPE_HVECTOR(aoss(3), 1, DBS*PRODUCT(aos(1:2)), temp, sub3D, ierr)
In the end it wasn't hard to replace MPI_TYPE_CREATE_SUBARRAY with two MPI_TYPE_HVECTORs. Maybe this is the best solution after all.
This raises a natural side question: why is MPI_TYPE_CREATE_SUBARRAY so limited? There are plenty of cases in the MPI standard that correctly fall back to "do nothing" (when a sender or receiver is MPI_PROC_NULL) or "there's nothing in this" (when aoss has a zero entry, as in my example). Should I post a feature request somewhere?
The MPI 3.1 standard (chapter 4.1, page 95) makes it crystal clear:
For any dimension i, it is erroneous to specify array_of_subsizes[i] < 1 [...].
You are free to send your comment to the appropriate Mailing List.

Implementing an enumerator using a Turing machine - redundant prints

In the following algorithm (transcribed below from the image), we implement an enumerator using a Turing machine, and the enumerator is supposed to output the language accepted by the Turing machine. The accepted words from Σ* are printed multiple times (in each iteration, previously printed words are printed again).
Why can't we just say: "for each word in Σ*, run M on it; if it accepts, print it, and if it rejects, move on to the next word"? Then we wouldn't print any word more than once.
Why the unnecessary prints?
The algorithm from the image is:
If a TM M recognizes a language A, we can construct the following enumerator for A. Assume s1, s2, s3, ... is a list of possible strings in Σ*.
E = “Ignore the input
1) Repeat the following for i = 1, 2, 3, ...
2) Run M for i steps on each input s1, s2, s3, ..., si.
3) If any computations accept, print out corresponding sj.”
If M accepts a particular string, it will appear on the list generated by E (in fact infinitely many times)
Thanks
As stated in the comments: the problem is that some computation might not terminate. So if you do them sequentially, the ones after the first non-terminating computation will never be executed.
The given algorithm uses the standard technique to work around this: dovetailing.
You can change step 3 to "If any computation accepts after exactly i steps, then print" - then there are no unnecessary prints. But then you have to count the steps during each simulation, which means some extra work. The author chooses an option that is simple to program, but not very efficient.
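To make the dovetailing concrete, here is a small sketch (not taken from the quoted algorithm: a toy recognizer stands in for M, and a short fixed list of strings stands in for s1, s2, s3, ...). It also shows how accepted words get printed again in every later round:

#include <stdio.h>

enum result { ACCEPTED, REJECTED, STILL_RUNNING };

/* toy stand-in for "run M for `steps` steps on `word`": this recognizer
   accepts as soon as it reads a '0' and never halts on a word of all 1s */
static enum result run_for_steps(const char *word, int steps) {
    for (int pos = 0; pos < steps; pos++) {
        if (word[pos] == '\0') return STILL_RUNNING;   /* keeps scanning blanks forever */
        if (word[pos] == '0')  return ACCEPTED;
    }
    return STILL_RUNNING;                              /* step budget exhausted */
}

int main(void) {
    /* a finite prefix of an enumeration of Sigma* (the real enumerator never stops) */
    const char *words[] = { "", "0", "1", "00", "01", "10", "11", "000" };
    int nwords = (int)(sizeof words / sizeof words[0]);

    for (int i = 1; i <= nwords; i++) {        /* round i of the dovetailing */
        for (int j = 0; j < i; j++) {          /* run s_1 .. s_i for i steps each */
            if (run_for_steps(words[j], i) == ACCEPTED)
                printf("round %d: %s\n", i, words[j]);   /* re-printed in every later round */
        }
    }
    return 0;
}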

Using Fortran90 and MPI, new to both, trying to use MPI_Gather to collect 3 different variables from a loop in each process

I am new to both Fortran90 and MPI. I have a loop that iterates differently in each individual process. Inside it, I have a nested loop, and it is here that I make the computations I want from the elements of the respective loops. However, I want to send all of this data, the x, the y, and the value computed from x and y, to my root process, 0. From there, I want to write all of the data to the same file in the format 'x y computation'.
program fortranMPI
   use mpi
   !GLOBAL VARIABLE DECLARATION
   real :: step = 0.5, x, y, comput
   integer :: count = 0, finalCount = 5, outFile = 20, i
   !MPI
   integer :: ierr, myrank, mysize, status(MPI_STATUS_SIZE)

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, mysize, ierr)

   if (myrank == 0) then
      !I want to gather my data here?
   end if

   do i = 1, mysize, 1
      if (myrank == i) then
         x = -2. + (myrank - 1.)*step
         do while (x <= 2.)
            y = -2.
            do while (y <= 2.)
               !Here is where I am trying to send my data!
               y = y + step
            end do
            x = x + (mysize - 1)*(step)
         end do
      end if
   end do

   call MPI_FINALIZE(ierr)
end program fortranMPI
I keep getting stuck trying to pass the data! If someone could help me out, that would be great! Sorry if this is simpler than I am making it, I am still trying to figure Fortran/MPI out. Thanks in advance!
First of all, the program doesn't quite make sense as written; if you can be more specific about what you want to do, I can help further.
Usually, the if (myrank == 0) block before the calculations is where you send your data to the rest of the processes. Since process 0 will be sending data, you have to add code right after that so the other processes receive the data. You may also want an MPI_BARRIER (call MPI_BARRIER) right before the start of the calculations, just to make sure the data has reached every process.
As for the calculation part, you also have to decide not only where you send data, but also where the data is received and whether the communication needs any synchronization. This depends on the design of your program, so you are the one who knows exactly what you want to do.
The most common commands for sending and receiving data are MPI_SEND and MPI_RECV.
These are blocking commands, which means the communication must be synchronized: each Send must be matched with a Receive before both processes can continue.
There are also non-blocking commands; you can find them all here:
http://www.mpich.org/static/docs/v3.1/www3/
As for MPI_GATHER, this is used to gather data from a group of processes. It only helps when you use more than 2 processes to further accelerate your program. Beyond that, MPI_GATHER is meant for gathering data and storing it in an array fashion, and it's really worth using only when you are going to receive a lot of data, which is definitely not your case here.
Finally, about printing the results: I'm not sure what you are asking for is possible. Trying to open the same file handle from 2 processes is probably going to lead to OS errors. Usually, for printing results, you have rank 0 do that after every other process has finished.
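A minimal sketch of that matched send/receive pattern, written in C for brevity (the Fortran calls take the same arguments plus a final ierr argument). Every worker sends one illustrative (x, y, computation) triple to rank 0, which posts the matching receives and does all the printing; the values and the tag are placeholders, not taken from the question:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank != 0) {
        /* each worker computes one result and sends it to rank 0 */
        double triple[3] = { rank * 0.5, -2.0, rank * 0.5 * -2.0 };  /* x, y, comput */
        MPI_Send(triple, 3, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    } else {
        /* rank 0 posts one matching receive per worker and writes the results */
        for (int src = 1; src < nprocs; src++) {
            double triple[3];
            MPI_Recv(triple, 3, MPI_DOUBLE, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%f %f %f\n", triple[0], triple[1], triple[2]);
        }
    }

    MPI_Finalize();
    return 0;
}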
