I am creating a non-deterministic orchestration to handle convoys. I know I am setting up for the classic 'zombie' pattern. What have you done to handle the zombies when this sort of pattern is necessary?
First point: make sure that it really is necessary. Can you avoid it? Read this first:
http://www.modhul.com/2008/06/08/yet-another-non-deterministic-biztalk-zombie-pattern/
The only thing I can think of is to have a routine for monitoring and cleaning up suspended instances.
I am writing a Pintool that manipulates certain register/memory values. One challenge I am now facing after such manipulation is the dead loop.
In particular, because certain register values are manipulated frequently, it is quite common to create a dead loop in the execution trace. I would like to detect such cases and terminate the execution.
So here is my question: what is a good practice for detecting a dead loop in a Pintool? I can come up with some naive solutions, say, recording the executed instructions and terminating the execution if a certain instruction has been executed a very large number of times.
Could anyone help me with this issue? Thank you.
Detecting whether a program will terminate isn't a computable problem in general, so no, I don't think it's a good idea.
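That said, if all you need is a heuristic cut-off rather than true termination detection, the counting idea from the question is easy to sketch as a Pin tool. The following is a minimal sketch assuming the standard Pin instrumentation API; the threshold, the per-address map, and instrumenting every single instruction are arbitrary, naive choices (a real tool would count at trace/basic-block granularity and protect the map for multi-threaded targets):

    #include "pin.H"
    #include <iostream>
    #include <map>

    // Arbitrary cut-off: treat any single instruction executed this many
    // times as evidence of a dead loop. Tune this for your workload.
    static const UINT64 kThreshold = 100000000;

    static std::map<ADDRINT, UINT64> hitCount;

    // Analysis routine: bump the counter for this instruction address and
    // kill the target once the threshold is crossed.
    VOID CountHit(ADDRINT addr)
    {
        if (++hitCount[addr] > kThreshold)
        {
            std::cerr << "possible dead loop at 0x" << std::hex << addr
                      << ", terminating" << std::endl;
            PIN_ExitApplication(1);
        }
    }

    // Instrumentation routine: called once for every instruction Pin sees.
    VOID Instruction(INS ins, VOID* v)
    {
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)CountHit,
                       IARG_INST_PTR, IARG_END);
    }

    int main(int argc, char* argv[])
    {
        if (PIN_Init(argc, argv)) return 1;
        INS_AddInstrumentFunction(Instruction, 0);
        PIN_StartProgram(); // never returns
        return 0;
    }

A cheaper variant of the same idea is to count only at the addresses your tool has actually tampered with, since those are the only places a manufactured loop can appear.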
Well, the idea is simple to state: let's say we have two processes, A and F. F wants to fork A while F has control of the CPU (and A is suspended, since the CPU is running F).
I have Googled this, but nothing related showed up. Is such a thing possible in Unix environments?
There is definitely no standard and/or portable way to clone a process from the outside, but depending on the OS there are certainly ways to divert a process from its task and force it to clone itself or do whatever you want.
I don't think it's a good idea in any way, but it may be possible for process F to attach to A using a debugger interface such as ptrace, then do something like suspend the target process, save its state, divert it to run fork, and finally restore its original state.
It should be noted that your cloning process will probably need to handle some odd cases around threads and the like.
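For what it's worth, here is a very rough, Linux/x86-64-only sketch of that idea using ptrace(2): patch a syscall instruction over the target's current instruction pointer, point RAX at SYS_fork, single-step it, then put everything back. The whole approach is illustrative only; a real tool would also have to deal with interrupted-syscall restarts, threads, and the freshly created clone (which still carries the patched bytes), e.g. via PTRACE_O_TRACEFORK.

    #include <sys/ptrace.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char** argv)
    {
        if (argc < 2) { std::fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
        pid_t pid = (pid_t)std::atoi(argv[1]);
        int status;

        // Stop the target and take control of it.
        ptrace(PTRACE_ATTACH, pid, nullptr, nullptr);
        waitpid(pid, &status, 0);

        // Save the full register state so everything can be put back later.
        user_regs_struct saved;
        ptrace(PTRACE_GETREGS, pid, nullptr, &saved);

        // Overwrite the two bytes at RIP with a `syscall` instruction (0f 05),
        // remembering the original word so it can be restored.
        long orig = ptrace(PTRACE_PEEKTEXT, pid, (void*)saved.rip, nullptr);
        long patched = (orig & ~0xffffL) | 0x050f;
        ptrace(PTRACE_POKETEXT, pid, (void*)saved.rip, (void*)patched);

        // Make the target execute fork() on our behalf.
        user_regs_struct regs = saved;
        regs.rax = SYS_fork;
        ptrace(PTRACE_SETREGS, pid, nullptr, &regs);
        ptrace(PTRACE_SINGLESTEP, pid, nullptr, nullptr);
        waitpid(pid, &status, 0);

        // fork() leaves the new PID in RAX (as seen from the original process).
        ptrace(PTRACE_GETREGS, pid, nullptr, &regs);
        std::printf("forced %d to fork; clone pid = %lld\n", pid, (long long)regs.rax);

        // Undo the damage in the original process and let it continue.
        // (The clone would need the same repair -- one of the odd cases above.)
        ptrace(PTRACE_POKETEXT, pid, (void*)saved.rip, (void*)orig);
        ptrace(PTRACE_SETREGS, pid, nullptr, &saved);
        ptrace(PTRACE_DETACH, pid, nullptr, nullptr);
        return 0;
    }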
No, it's not possible.
The fork() system call makes a copy of the calling process, so if you call fork() in the F process, the child will be a copy of F; there's nothing you can do to change that behavior.
The reason this is not possible is that, with fork(), there is normally exactly one difference between the two processes to start out with: the return value of the fork() call itself. Without such a call inside A's code, there is no way for the two processes to differ at all; they would both be doing exactly the same thing, whereas normally you want one of them to start doing something different.
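To illustrate that single difference, a minimal sketch:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        pid_t pid = fork();            // both copies continue from this point
        if (pid == 0) {
            std::printf("child:  fork() returned 0\n");
        } else {
            std::printf("parent: fork() returned the child's pid, %d\n", (int)pid);
            waitpid(pid, nullptr, 0);  // reap the child
        }
        return 0;
    }

The only way the two copies end up doing different work is by testing that return value; from the outside there is no equivalent branch point in A to hook into.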
How exactly do you think what you want to do should work?
No, this would be a huge security hole that would result in the leakage of sensitive information if it were possible.
At best, you could set up a signal handler in the parent process that would fork(2) off a child (maybe exec(2) a pre-configured child process?).
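A minimal sketch of that approach, assuming A cooperates by installing the handler and F signals it with kill(2); the worker path is made up:

    #include <csignal>
    #include <unistd.h>

    // Hypothetical sketch: A installs this handler; F runs `kill -USR1 <pid of A>`
    // whenever it wants A to clone itself. fork() and exec*() are async-signal-safe.
    static void spawn_on_signal(int)
    {
        if (fork() == 0) {
            // The copy of A starts here; exec a pre-configured program if the goal
            // is to run something other than A's own code (path is hypothetical).
            execl("/path/to/worker", "worker", (char*)nullptr);
            _exit(1);                  // only reached if exec fails
        }
    }

    int main()
    {
        std::signal(SIGUSR1, spawn_on_signal);
        for (;;) pause();              // A's normal work would go here
    }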
I think you would be better served by looking into message passing between two processes that have CPU affinity set up, but even then I think the gains would be nominal (over-optimizing a problem?).
http://www.freebsd.org/cgi/man.cgi?query=cpuset&apropos=0&sektion=0
What is an example of when a deadlock is beneficial?
If the program you're deadlocking is a virus?
If you want to freeze up a process, I suppose that would be the only time you should do it... lol.
It's beneficial in that it clearly demonstrates that your code is buggy and your synchronization methods need to be revised.
Here is an example of exploiting a DB deadlock in MySQL.
It's more of a hack than a generalizable benefit of deadlocks, but it's the only case I've ever come across of creating a deadlock for a beneficial effect, other than for training purposes or for testing automated detection methods. (Some may argue those are both beneficial too, but there the benefit comes from helping avoid future deadlocks, so they are beneficial in the same sense that it's beneficial to study a deadly disease in a lab.)
A deadlock is never beneficial. It is a huge problem in a program, because it causes the program to freeze under certain circumstances!
A deadlock is never beneficial. It occurs when one or more processes are blocked forever because of requirements that cannot be satisfied. This will usually cause the program to appear to freeze as the processes will not continue unless the deadlock is broken. Programs must be crafted specifically to avoid deadlocks in all cases.
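For reference, the classic way to manufacture one deliberately (for example, to exercise the automated detection methods mentioned above) is two threads taking the same two locks in opposite order; a minimal sketch:

    #include <chrono>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex a, b;

    int main()
    {
        // t1 locks a then b; t2 locks b then a. Once each holds its first lock,
        // both wait forever for the other's -- a deadlock by construction.
        std::thread t1([] {
            std::lock_guard<std::mutex> la(a);
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            std::lock_guard<std::mutex> lb(b);
            std::cout << "t1 finished\n";
        });
        std::thread t2([] {
            std::lock_guard<std::mutex> lb(b);
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
            std::lock_guard<std::mutex> la(a);
            std::cout << "t2 finished\n";
        });
        t1.join();   // never returns once the deadlock occurs
        t2.join();
    }

(Agreeing on a global lock order, or acquiring both locks atomically with C++17's std::scoped_lock, is the standard fix.)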
Let me define the problem first, and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to design my application from the ground up with this in mind.
I have decided to tackle the problem by using Microsoft Message Queuing and performing the inserts asynchronously as time permits. However, I quickly ran into a problem: certain inserts may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system and you need to recall the last transaction, one that still hasn't been inserted).
The way I decided to handle this is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that arise in such a scenario, essentially dirty reads and the like, and have concluded that for my purposes I can control them.)
However, this is where things get a little nasty... I've worked out how to get the messages back (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue, one that minimizes the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly.
Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any of their own ideas about how I can tackle this problem? Does anyone even understand what I am talking about? :-)
Any and ALL input would be highly appreciated and seriously considered…
Thanks again.
For anyone interested: I decided in the end to simply cache the transaction in another location and use MSMQ as intended and described below.
If the queue has a large-ish number of messages on it, then enumerating those messages will become a serious bottleneck. MSMQ was designed for first-in-first-out kind of access and anything that doesn't follow that pattern can cause you lots of grief in terms of performance.
It depends greatly on the sort of queries you're going to be executing, but the answer may be some kind of NoSQL database (CouchDB, BerkeleyDB, etc.).
I'm building a standard 3-tier ASP.NET web application, but I'm struggling with where to do certain things, specifically handling exceptions.
I've tried to look around the web for examples, but can't find any that go as far as a whole project showing how everything links together.
In my data tier I'm connecting to SQL Server and doing some work. I know I need to catch the exceptions that could be raised as a result, but I'm not sure where to do it.
From what I've read I should be doing it in the UI tier, but in that case I'm not sure how to ensure that the connection to the database is closed. Is anyone able to clarify how to do this? Also, if anyone knows where I could find an example 3-tier web application that follows best practices, that would be great too.
Thanks.
There are no easy answers or patterns that will ensure success. Just like your validation strategy, your exact exception-handling strategy is specific to your exact situation, and is often a trade-off between time and comprehensiveness. There is some good advice we can give though:
Don't ever hide the stack trace; don't ever rethrow in a way that loses it (e.g. throw ex instead of throw in C#) unless, for security purposes, you want to hide what happened.
Don't feel you need error handling everywhere. By default, in your lower tiers, letting the actual error percolate up to the top tier is not bad. The UI/Controller is where you have to really decide how to react to something going wrong.
At every point, ask yourself what exactly you want to happen if something goes wrong. Often, you won't be able to think of anything better than to just let it go up to the top layer or even to the client machine (though in production, turn off verbose reporting). If this is the case, just let it go.
Make sure you dispose of unmanaged resources (anything that implements IDisposable). Your data access is a great example. Either (a) call .Dispose() on your connection, command, data reader, etc. in the finally block, or (b) use the using syntax/pattern, which makes sure proper disposal happens.
Look for places where errors are likely and where you can check for specific errors, react (by retrying, waiting a second and retrying, trying the action a different way, etc.), and then hopefully succeed. Much of your exception handling is there to make success happen, not just to report failures well.
In the data layer, you must consider what to do if something goes wrong in the middle of a multi-step process. You can let the actual error percolate up, but this layer must handle tidying things up after an error. You'll sometimes want to use transactions.
In an asynchronous situation (either (a) because of multiple threads or (b) because business logic is processed separately on "task machines" and the like and acted upon later), you in particular need to have a plan for logging errors.
I'd rather see "error handling code" in 25% of your app than 100%. 100% means you probably wanted it to look and feel like you have error handling. 25% means you spent time handling exceptions where they really needed to be handled.
Just a side point that may steer your thinking: if you have any type of volume that may result in concurrency issues (deadlocks) you will want your application to detect that particular SQL error and retry the operation (e.g. transaction). That argues for some exception handling at either the data or business tiers.
-Krip
I believe it's best practice to handle exceptions at the last responsible moment. This usually means at the UI level (i.e., the controller in an MVC app, or the code-behind in a traditional ASP.NET app). It's at this high level that your code "knows" what the user is asking and what needs to be done if something doesn't work.
Handling exceptions lower down in the call stack usually results in calling code not being able to handle exceptional situations properly.
In your data tier, you would use the standard patterns (e.g., the using statement for IDisposables such as SqlConnection), document exceptions you know may occur (don't do this for out-of-memory or other rare cases), and then let them flow up the call stack when they do. At most you might want to catch those exceptions and wrap them in a single exception type, in situations where MANY exceptions may have to be handled by callers.
If you need to "clean up" stuff before letting exceptions go, you can always use a finally block to clean up. You don't have to catch in order to use a finally block.
I don't know of any examples of small websites which are designed to highlight proper exception handling, sorry.
This is not a specific answer to your question, but if you are interested in best practices, I would look at Microsoft Patterns and Practices:
Application Architecture Guide
Web Applications Guides