I set up a model in UPPAAL and used the verifier to check for a deadlock. The answer is "Property not satisfied", so a deadlock exists.
Is there a way in UPPAAL to report more detailed information about the deadlock, such as the state and the current values of all variables in that specific situation?
Yes, you can trace a deadlock in UPPAAL, i.e. find the state or path that causes it.
Go to Options → Diagnostic Trace and select one of Some/Shortest/Fastest (e.g. Fastest). Then go to the verifier and check the deadlock property again. When asked whether to store the new trace in the simulator, select "Yes", then switch to the simulator: it will replay the stored trace that makes the property unsatisfiable, showing you the states and variable values along the way.
Hope this helps.
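For reference, the standard way to phrase the deadlock check as a query in the UPPAAL verifier is:

```
A[] not deadlock
```

If this property is not satisfied, the stored diagnostic trace leads to a deadlocked state, and stepping through it in the simulator shows the variable values at each step.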
I have been charged with improving the performance of an ASPX website. I'm enabling Tracing so I can look at performance times, but I don't understand what is meant by First(s) and Last(s). What do these mean?
Also, it appears I can add custom events, but I don't know what the times mean on those either.
When tracing is enabled, ASP.NET monitors each request your application handles, which lets you troubleshoot issues and gauge things like performance. When you look at a specific trace record, the columns you mention carry timing values: From First(s) indicates how many seconds after the first trace message the current trace was processed, and From Last(s) indicates how many seconds elapsed since the previous trace message.
Does that make sense? I'll try to demonstrate what these mean in a very primitive diagram below:
First Trace -------------> Another Trace -------------> Current Trace
----------------(From First)-------------->
-(From Last)->
More info: https://learn.microsoft.com/en-us/previous-versions/aspnet/bb386420(v=vs.100)
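The diagram boils down to simple arithmetic over the trace timestamps. A minimal sketch (the timestamps below are invented for illustration, not real trace output):

```java
public class TraceTiming {
    /** "From First(s)": seconds elapsed since the first trace message. */
    public static double fromFirst(double[] timestamps, int i) {
        return timestamps[i] - timestamps[0];
    }

    /** "From Last(s)": seconds elapsed since the previous trace message. */
    public static double fromLast(double[] timestamps, int i) {
        return i == 0 ? 0.0 : timestamps[i] - timestamps[i - 1];
    }

    public static void main(String[] args) {
        // Hypothetical timestamps (seconds since the request began) for
        // Begin, a custom event, and End.
        double[] t = {0.000, 0.250, 0.300};
        for (int i = 0; i < t.length; i++) {
            System.out.printf("trace %d: From First=%.3fs, From Last=%.3fs%n",
                    i, fromFirst(t, i), fromLast(t, i));
        }
    }
}
```

So a large From Last(s) on a custom trace event points at the code that ran just before it.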
I see there was some discussion on this subject before[*], but I cannot see any way to access this. Is it possible yet?
Reasoning: we have, for example, ContainerStoppingErrorHandler. If the container was stopped automatically, it would be vital to know where it stopped. I guess ContainerStoppingErrorHandler could be improved to log the partition/offset, since it has access to the Consumer (I hope I'm not mistaken). But if I'm stopping/pausing the container manually via MessageListenerContainer, I cannot see a way to get information about the last processed offset. Is there a way?
[*] https://github.com/spring-projects/spring-kafka/issues/431
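One workaround, until the framework exposes this directly, is to record the partition/offset of each processed record yourself. This is only a sketch, not an existing spring-kafka API: the tracker below would be updated from the listener method using `record.partition()` and `record.offset()` on the received ConsumerRecord.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks the last processed offset per partition. In a real application,
// a @KafkaListener method would call record(...) for every ConsumerRecord
// it finishes processing, so the value survives a manual stop()/pause().
public class LastOffsetTracker {
    private final Map<Integer, Long> lastOffsets = new ConcurrentHashMap<>();

    public void record(int partition, long offset) {
        lastOffsets.put(partition, offset);
    }

    /** Returns -1 if nothing has been processed for this partition yet. */
    public long lastOffset(int partition) {
        return lastOffsets.getOrDefault(partition, -1L);
    }
}
```

After stopping the container manually, querying the tracker tells you exactly where processing left off for each partition.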
I am using a Parallel Actions shape in a BizTalk orchestration. The shape has four parallel branches, and in each branch I use a Scope shape (Transaction Type = None) with a corresponding catch block; the execution logic is placed inside the scope.
The Parallel shape itself is also contained in a Scope (Transaction Type = None) in the orchestration, with its own catch block.
What is the expected behaviour if execution in one of the branches fails? My expectation was that if one branch fails, the other branches would still execute.
But in my orchestration, if one branch fails the other branches do not even start. It looks as if a branch only starts executing after the previous branch has completed successfully.
What could be the cause of this behaviour?
According to MSDN, the Parallel shape runs all its branches independently. See MSDN: http://msdn.microsoft.com/en-us/library/ee253584(v=bts.10).aspx
However, this is from a business-process perspective, not a technical one. If one of your branches fails, it is perfectly possible that the other branches will not be executed. As far as I know, you don't have any control over the order of execution (though I'm not sure about that).
See this small blog post for more information: http://blogs.msdn.com/b/pkelcey/archive/2006/08/22/705171.aspx
An aggregator pattern might be a good idea here, depending on your specific situation. It would give you full control over the situation.
Basically, if one of the branches fails, then all of the branches fail. The key point to remember is:
All branches come together at the end of the Parallel Actions shape, and processing does not continue until all have completed.
So, if one of the branches fails, they will never converge. If an exception is thrown on one branch, the catch block will catch it and the other branches will cease to process any incoming messages. My understanding is that parallel branches are mostly used with message correlation, for situations where you need to wait for more than one message to arrive before you can proceed. The order of branch execution is determined by the order in which the messages each branch expects are received.
My client-side sensu metric is reporting a WARN and the data is not getting to my OpenTSDB.
It seems to be stuck, but I don't understand what the message is telling me. Can someone translate?
The command is a ruby script.
In /var/log/sensu/sensu-client.log :
{"timestamp":"2014-09-11T16:06:51.928219-0400",
"level":"warn",
"message":"previous check command execution in progress",
"check":{"handler":"metric_store","type":"metric",
"standalone":true,"command":"...",
"output_type":"json","auto_tag_host":"yes",
"interval":60,"description":"description here",
"subscribers"["system"],
"name":"foo_metric","issued":1410466011,"executed":1410465882
}
}
My questions:
What does this message mean?
What causes this?
Does it really mean we are waiting for the same check to run? If so, how do we clear it?
This warning means that Sensu is (or thinks it is) currently executing this check:
https://github.com/sensu/sensu/blob/4c36d2684f2e89a9ce811ca53de10cc2eb98f82b/lib/sensu/client.rb#L115
This can be caused by checks stacking up because they take longer than their interval to run (60 seconds in this case).
You can try to set the "timeout" option in the check definition:
https://github.com/sensu/sensu/blob/4c36d2684f2e89a9ce811ca53de10cc2eb98f82b/lib/sensu/client.rb#L101
To try to make sensu time out after a while on that check. You could also add internal logic to your check to make it not hang.
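For example, a check definition with a timeout might look like this (the command path and values here are illustrative, not taken from your setup):

```json
{
  "checks": {
    "foo_metric": {
      "command": "/etc/sensu/plugins/foo_metric.rb",
      "type": "metric",
      "handlers": ["metric_store"],
      "standalone": true,
      "interval": 60,
      "timeout": 30
    }
  }
}
```

With `"timeout": 30`, a hung check is killed well before the next 60-second interval, so executions cannot pile up.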
In my case, I had accidentally configured two sensu-client instances to have the same name. I think that caused one of them to always think its checks were already running when in reality they were not. Giving them unique names solved the problem for me.
What happens when a row-level lock is requested in a macro but the Optimizer determines that a table-level lock is required? Will there be a warning when the lock is upgraded?
Sorry, I don't know how to check this practically by creating such a scenario, so I am just asking directly.
The Optimizer determines the necessary lock when it optimizes a query.
When you request a row-hash lock but a table lock is needed to process the query, there will be no warning.
Did you try EXPLAIN EXEC mymacro;?
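The EXPLAIN output shows the lock the Optimizer actually chose. A sketch of what to look for (the exact wording of the locking step varies by Teradata release):

```sql
EXPLAIN EXEC mymacro;

-- In the output, check the locking step, e.g. a row-level lock:
--   "... by way of a RowHash lock ..."
-- versus an upgraded table-level lock:
--   "... we lock ... for write ..." on the whole table
```

Comparing the EXPLAIN text against the LOCKING modifier written in the macro tells you whether the lock was upgraded, without having to provoke the scenario at run time.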