Critical section with semaphores - deadlock

Take this pseudocode:
Semaphore S <- 0      // S starts at 0: nothing ever signals before the first wait
non-critical section
wait(S)               // blocks forever: S is 0 and no thread can ever signal
critical section      // never reached
signal(S)             // never executed
Does this solution to the critical section problem support mutual exclusion only?
I know that there is no freedom from deadlock, since the critical section is never reached; however, would that also mean that it does not support mutual exclusion?

Mutual exclusion means that at most one thread (or process) can be in the critical section at a time. Since no thread ever gets past wait(S) here, by definition the code does not violate mutual exclusion.
The important thing to note is that all threads would be stuck busy-waiting and there is no progress at all; but strictly speaking there is no deadlock, because one of the four necessary conditions for deadlock is circular wait, and here the threads are not waiting for each other.
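To make this concrete, here is a minimal Java sketch of the same structure, using java.util.concurrent.Semaphore (the class name StuckAtZero is mine, purely for illustration):

import java.util.concurrent.Semaphore;

public class StuckAtZero {
    // Initialized to 0: there are no permits, and nothing ever signals first.
    static final Semaphore s = new Semaphore(0);

    public static void main(String[] args) {
        Runnable worker = () -> {
            // non-critical section
            try {
                s.acquire();      // wait(S): blocks forever, the count stays 0
                /* critical section -- never reached */
                s.release();      // signal(S): never executed
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(worker).start();
        new Thread(worker).start();
        // Both threads block in acquire(): mutual exclusion holds vacuously,
        // but no thread ever makes progress.
    }
}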

What if follower loses its leader in a two-node situation (only a leader and a candidate) in Raft?

I'm learning the Raft algorithm. My implementation runs into the following situation:
1. a 1-leader-1-follower situation is established;
2. the leader is shut down;
3. the follower gets no heartbeat, so it becomes a candidate;
4. the candidate keeps sending RequestVote to its peer (already shut down) and fails;
5. the election times out without any leader elected;
6. the candidate starts another candidate session, i.e. it repeats steps 4-6 ...
I don't see how to handle this situation in the Raft papers (maybe I missed something).
One idea is to check the granted votes in step 5 before starting a new election. Since a candidate votes for itself at the beginning of an election session, with this check the candidate would become the new leader.
But I worry that this solution would break Raft, especially the initial start-up phase when all nodes are candidates.
Another idea is to treat a network error on a RequestVote request as "vote granted" (I still worry this might break something).
I know this situation is caused by having only 2 nodes. However, even with 3 nodes (so a 1-leader-2-follower situation is established), if two leaders are shut down one after the other, the remaining follower may still behave like this.
What you are describing as a problem is actually a legitimate situation.
Raft cannot make progress when a majority of nodes is unavailable, and there is no way around this other than bringing a majority of the nodes back into operation. A lone candidate can never collect votes from a majority of the configured cluster, so the endless election loop you observe is exactly the intended behaviour.
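A one-line quorum check makes the arithmetic concrete (QuorumCheck.hasQuorum is a hypothetical helper, not taken from any particular Raft implementation):

class QuorumCheck {
    // A candidate needs a strict majority of the *configured* cluster,
    // not of the nodes currently reachable.
    static boolean hasQuorum(int votesGranted, int clusterSize) {
        return votesGranted > clusterSize / 2;
    }
}

// With clusterSize = 2, a lone candidate's self-vote gives votesGranted = 1,
// and 1 > 2/2 is false, so it can never win until its peer returns.
// With clusterSize = 3 and two nodes down, 1 > 3/2 is false as well.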

Deadlock and mutual exclusion

Two processes X and Y need to access a critical section. Consider the following synchronization construct used by both the processes.
http://d18khu5s3lkxd9.cloudfront.net//wp-content/uploads/2015/02/Q20.png
In the link given above,
varP and varQ are shared variables and both are initialized to false. Which one of the following statements is true?
1. The proposed solution prevents deadlock but fails to guarantee mutual exclusion
2. The proposed solution guarantees mutual exclusion but fails to prevent deadlock
3. The proposed solution guarantees mutual exclusion and prevents deadlock
4. The proposed solution fails to prevent deadlock and fails to guarantee mutual exclusion
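For reference, the construct in the linked image appears to be the following, reconstructed from the conditions discussed below; treat it as a hedged Java-style sketch, not a verbatim copy of the image:

class GateQ20 {  // hypothetical wrapper name, for illustration only
    // Shared flags, both initially false (as stated in the question).
    static volatile boolean varP = false, varQ = false;

    // Process X
    static void processX() {
        while (true) {
            varP = true;
            while (varQ == true) {
                /* critical section */
                varP = false;
            }
        }
    }

    // Process Y (symmetric, with the roles of varP and varQ swapped)
    static void processY() {
        while (true) {
            varQ = true;
            while (varP == true) {
                /* critical section */
                varQ = false;
            }
        }
    }
}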
According to the question setter, the 4th answer is the correct one.
I have figured out why it fails to guarantee mutual exclusion, but how does it fail to prevent deadlock?
Here is what I came up with after studying the algorithm carefully.
Say process Y has used the critical section. Before leaving it, it must have set varQ to false.
Now if process X tries to enter the critical section, it can never get in unless process Y also tries to enter: the condition while (varQ == true) stays false until process Y attempts entry and, in doing so, sets varQ back to true.
So if process Y never tries to enter the CS again, process X is blocked indefinitely while the critical section lies unused.
But the question still remains: how does this lack of starvation freedom amount to a lack of deadlock freedom? In a deadlock every process is blocked, yet here, if process Y did try to enter the CS again, process X could succeed in its attempt to enter the CS.

Modeling an HTTP transition system in Alloy

I want to model an HTTP interaction, i.e. a sequence of HTTPRequest/HTTPResponse messages, and I am trying to model this as a transition system.
I defined an ordering on the State signature by using:
open util/ordering[State]
where a State is simply a set of Messages:
sig State {
    msgSet: set Message
}
Each pair of (HTTPRequest->HTTPResponse) and (HTTPResponse->HTTPRequest) is represented as a rule in my transition system.
The rules are expressed in Alloy as predicates that let one move from one state to another.
E.g., this is a rule generating an HTTPResponse after a particular HTTPRequest is received:
pred rsp1 [s, s': State] {
    one msg: Request, msg': Response | (
        // Preconditions (previous Request)
        msg.method = get &&
        msg.address.url = sample_com &&
        // Postconditions (next Response)
        msg'.status = OK_200 &&
        // the previous Request has to be in the previous state
        msg in s.msgSet &&
        // the generated Response is added to the next state
        s'.msgSet = s.msgSet + msg'
    )
}
Unfortunately, the resulting model seems to be too complex: we have a dozen rules (more complex than the one above, but following the same pattern) and execution is very slow.
EDIT: In particular, the CNF generation is extremely slow, while the solving itself takes a reasonable amount of time.
Do you have any suggestion on how to model a similar transition system?
Thank you very much!
This is a model with an impressive level of detail; thank you for sharing it!
None of the various forms of honestAction by itself takes more than two or three minutes to find an instance (or in some cases to fail to find any instance), except for rsp8, which takes quite a while by itself (it ran for fifteen minutes or so before I stopped it).
So the long CNF preparation times you are observing are apparently caused by (a) predicate rsp8 alone, (b) the size of the disjunction in the honestAction predicate, or (c) both.
I suspect but have not proved that the time issue is caused by combinatorial explosion in the number of individuals required to populate a model and the number of constraints in the model.
My first instinct (it's not more than that) would be to cut back on the level of detail in the model, in particular the large number of singleton signatures which instantiate your abstract signatures. These seem (I could be wrong) to be present either for bookkeeping purposes (so you can identify which rule licenses the transition from one state to another), or because the modeler doesn't trust Alloy to generate concrete instances of signatures like UserName, Password, Code, etc.
As the model now is, it looks as if you're doing a lot of work to define all the individuals involved in a particular example, instead of defining constraints and letting Alloy do the work of finding examples. (Using Alloy to check the properties a particular concrete example can be useful, but there are other ways to do that.)
Since so many of the concrete signatures in the model are constrained to singleton cardinality, I don't actually know that defining them makes the task of finding models more complex; for all I know, it makes it simpler. But my instinct is to think that it would be more useful to know (as well as possibly easier for Alloy to establish) that state transitions have a particular property in general, no matter what hosts, users, and URIs are involved, than to know that property rsp1 applies in all the cases where the host is named examplecom and the address URI is example_url_https and whatnot.
I conjecture that reducing the number of individuals whose existence and properties are prescribed, and the constraints on which individuals can be involved in which state transitions, will reduce the CNF generation time.
If your long-term goal is to test long sequences of state transitions to test whether from a given starting point it's possible or impossible to arrive at a particular state (or kind of state), you may need to re-think the approach to enable shorter sequences of state transitions to do the job.
A second conjecture would involve less restructuring of the model. For reasons I don't think I understand fully, sometimes quantification with one seems to hurt rather than help performance, as in this example, where explicitly quantifying some variables with some instead of one turned out to make a problem tractable instead of intractable.
That question involves quantification in a predicate, not in the model overall, and the quantification with one wasn't intended in the first place, so it may not be relevant here. But we can test the effect of the one keyword on this model in a simple way: I commented out everything in honestAction except rsp8 and ran the predicate first != last in a scope of 8, once with most of the occurrences of one commented out and once with those keywords intact. With the one keywords commented out, the Analyzer ran the problem in 24 seconds or so; with the one keywords in place, it had run for 500 seconds before I decided the point was made and terminated it.
So I'd try removing the keyword one from all of the signatures with instance-specific individuals, leaving it only on get, post, OK_200, etc., and appData. I would also try doing without the various subtypes of Key, SessionID, URL, Host, UserName, and Password, or at least constraining their cardinality in the run command.

How do I determine whether a deadlock will occur in this system?

N processes share M resource units that can be reserved and released only one at a time. The maximum need of each process does not exceed M, and the sum of all maximum needs is less than M + N. Can a deadlock occur in this system?
I hope you got the answer. Answering this question for other visitors.
The answer is that a deadlock cannot occur in this system.
Sketch of the proof (the full version is in the solution manual linked below, page 31): suppose the system were deadlocked. A process blocks only when no unit is free, so in a deadlock all M units are allocated; and every blocked process still needs at least one more unit, i.e. need_i >= hold_i + 1 for every process i. Summing over all N processes gives sum(need_i) >= sum(hold_i) + N = M + N, contradicting the assumption that the sum of all maximum needs is less than M + N.
The original proof can be found at http://alumni.cs.ucr.edu/~choua/school/cs153/Solution%20Manual.pdf on page 31.
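As a quick illustration of the arithmetic, here is a hypothetical Java helper (the name and signature are mine, not from the manual) that tests whether a configuration satisfies the theorem's hypothesis and is therefore guaranteed deadlock-free:

class DeadlockTheorem {
    // True when the theorem's conditions hold: single-unit requests are
    // assumed, every maximum need is between 1 and M, and the sum of all
    // maximum needs is less than M + N.
    static boolean guaranteedDeadlockFree(int m, int[] maxNeeds) {
        long sum = 0;
        for (int need : maxNeeds) {
            if (need < 1 || need > m) return false;  // hypothesis violated
            sum += need;
        }
        return sum < m + maxNeeds.length;            // sum of needs < M + N
    }
}

// Example: M = 4 units and three processes with maximum needs {2, 2, 2}:
// 2 + 2 + 2 = 6 < 4 + 3 = 7, so no deadlock is possible.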
The system you are describing looks like semaphores.
About your last question: yes, you could always create a deadlock; if you don't see how, ask a young/shameful/motivated/deviant developer.
One good way to build one is to use strange rules for locking and releasing resources. For example, if a process needs M resources to perform a task, it could lock half of them right away and then wait for the other half to become available before doing anything.
I assume it never gives up until it has its M precious resources, and that it releases them all once the task is done.
A single such process wouldn't cause much of a problem, but several will: together they can hold all the units while each still needs more to get out of this frozen state.
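A minimal Java sketch of that scenario (the class name HoldAndWait and the choice M = 4 are mine, for illustration). Note that two such processes have a combined maximum need of 2M, which is not less than M + N = M + 2, so the theorem above makes no guarantee here:

import java.util.concurrent.Semaphore;

public class HoldAndWait {
    static final int M = 4;                    // total resource units
    static final Semaphore pool = new Semaphore(M);

    public static void main(String[] args) {
        Runnable greedy = () -> {
            try {
                pool.acquire(M / 2);           // lock half the units right away
                pool.acquire(M / 2);           // then wait for the other half
                /* perform the task */
                pool.release(M);               // release everything when done
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(greedy).start();
        new Thread(greedy).start();
        // With the typical interleaving, each thread holds M/2 units and
        // blocks waiting for M/2 more while zero units remain free:
        // a classic hold-and-wait deadlock.
    }
}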

UPPAAL: Invariants violated but none have been explicitly set - how to resolve deadlock?

I'd like to learn more about timed automata in order to verify real-time systems. Currently, I'm trying to familiarize myself with a tool called UPPAAL.
I drew some simple graphs and added different properties. The entire model is supposed to represent a production cell system in which different mechanical units pass a block to each other.
I've modelled the block (BlockCycle), 2 mechanical units (Feeder, FeederBelt) and 2 sensors which detect the arrival of the block.
Even though I thought my system would make sense, it gets deadlocked:
The target states of this transition violate the invariants. This is not a problem, as long as you intended your model to behave like this.
(No I didn't. ;P)
I can't seem to find the reason for the deadlock, though. The tool points me to the BlockCycle model, but I didn't specify any invariant there. In fact, all the transition requirements are fulfilled, so the transition (from Pos7 to Pos8) should definitely be taken.
Below you'll see how the system evolves and finally gets stuck (the error message pops up).
Labels:
green: property check (e.g. FB_Running means FB_Running == true)
dark blue: property updates/assignments
dark red: locations (e.g. Pos7 or Pos8)
The BlockCycle model with the respective transition in question:
My question: why does the system deadlock even though there are still transitions that could be taken?
Edit 1: When I remove the invariant BlockAtPos[7] on Sensor7's location Active, the deadlock is resolved. My guess: since there is no synchronization between Sensor7 and BlockCycle on the last transition before the deadlock, BlockCycle can set BlockAtPos[7] to false while the sensor has not yet had a chance to take its InActive transition, which violates the invariant.
Note: You can find my UPPAAL code/file here: pcs.xml.
