MS Project single resource assigned to multiple concurrent activities - ms-project

In Microsoft Project (MSP), is it possible to have two parallel/concurrent activities with the same resource assigned to both? For example:
The worker resource "matt" is assigned to both activities, hence the overallocation error showing in the info column (red human icon).
Why do I require such a structure you may ask? In reality, one task would continually be temporarily interrupted to do the other one, and vice versa. But in MSP, it would be counter-productive to do so. Hence, my simplified approach - parallel activities.
So how should I fix this issue?
Ignore it?
Allocate 50% of the resource to each (and fix up finish dates which are automatically extended by MSP?)
Or is there a better solution?

If they do indeed overlap, then the best way to model this in your project is to allocate 50% of the resource to each task. But why fix up the finish dates? The program will recalculate each task's duration based on the work: when the allocation is lowered to 50%, the duration should double. Alternatively, if the durations really are fixed, you can set the task type to Fixed Duration; for such tasks the duration stays as set even though the allocation changes.

I do not think there is a better solution, so solution number 2 (adjusting the assignment units) is the best in your case.


Speed up 'if' conditions (verify / waiton)

(Disclaimer: I'm quite new to Tosca but not to testing in general)
DESCRIPTION
I'm automating a very dynamic page in Tosca, with content that is added (or not) as you progress through a form.
I do not have a testsheet at hand and do not have time to create one so I cannot use a template and the 'conditions' (I'm using TC-Parameters and they do not seem to apply to the 'Condition' column).
I wanted to use libraries as much as possible because most of the steps are the same, and there are a LOT of possible outcomes (I have more than 100 TCs to automate). So I'm trying to keep my steps as 'generic' as possible, so that if the interface is changed in the future I'll be able to maintain most of it 'centrally'.
PROBLEM
I've added four 'ifs' in strategic points. The problem is that an unsuccessful 'if' seems to hang for 10s, no matter what I use inside: 'verify' takes 10s & 'waiton' also takes 10s (although for the latter I modified the settings to 5s so I don't understand why).
I'm actually not interested in 'verify' waiting at all. I know that the content has either to be there or not at the precise moment where I have my condition. I'd be happy with a 1s delay, that'd be more than enough time for the app to display the content.
The TCs' duration varies between 1m and 1m40s (4×10s if all 4 of my 'ifs' are negative). It'd be great if I could speed it up, especially because most of the 'ifs' will NOT trigger. Any ideas?
You could try checking some settings regarding how long Tosca waits for Verify/Waiton:
Settings > Engine > Waiton > Maximum Wait Duration
Settings > TBox > Synchronization > Synchronization Time out
However, I've also found using buffered values to be more efficient time-wise for some of my testing scenarios.
I solved it by adding TCPs and buffering them, in order to check them as conditions (not sure why Tosca requires the intermediate buffering step but that does the trick). Now I can configure whether my tests should expect a given follow-up item or not. It's a lot of extra configs but at least my tests are nicely modular.

Design of JSR352 batch job: Is several steps a better design than one large batchlet?

My JSR352 batch job needs to read from a database, and then depending on the result it flows down one of two pathways, each of which involves some more if/else scenarios. I wonder what the pros and cons would be between writing a single step with one large batchlet and several steps consisting of smaller batchlets. This job does not involve chunk steps with a chunk size larger than 1, as it needs to persist the read result immediately (in case there is any) before proceeding to other logic. The job will be run using Control-M; I wonder if using multiple smaller steps provides more control points.
From that description, I'd suggest these
Benefits of more, fine-grained steps
1. Restart
After a job failure, the default behavior on restart is to begin executing at the step where the previous job execution failed. So breaking the job up into more steps allows you to avoid writing the logic to resume where you left off and avoid re-processing, and may save execution time in the process.
2. Reuse
By encapsulating a discrete function as its own batchlet, you can potentially compose other steps in other jobs (or even later in this job) implemented with this same batchlet.
3. Extract logic into XML
By moving the conditional flow into transition elements (e.g. <next on="RC1" to="step3"/>, etc.) in the job definition XML (JSL), you can introduce changes at a standard control point, without having to go into the Java source and find the right place.
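To illustrate, here is a minimal JSL sketch of that kind of transition logic; the step ids, batchlet refs, and exit statuses ("RC1", "RC2") are hypothetical names, standing in for whatever your batchlets return from process():

```xml
<job id="myJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
  <step id="step1">
    <batchlet ref="readFromDatabaseBatchlet"/>
    <!-- Route on the exit status returned by the batchlet -->
    <next on="RC1" to="step2"/>
    <next on="RC2" to="step3"/>
  </step>
  <step id="step2">
    <batchlet ref="pathwayOneBatchlet"/>
  </step>
  <step id="step3">
    <batchlet ref="pathwayTwoBatchlet"/>
  </step>
</job>
```

Changing which pathway follows which result then becomes an XML edit rather than a Java change.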
Final Thoughts
You'll have to decide if those benefits are worth it for your case.
One more thought
I wouldn't automatically rule out the chunk step just because you are using a 1-item chunk, if you can still find benefits from the checkpointing or even possibly the skip/retry. (But that's probably a separate question.)

Preventing Deadlocks

for a pseudo function like
void transaction(Account from, Account to, double amount) {
    Semaphore lock1, lock2;
    lock1 = getLock(from);
    lock2 = getLock(to);
    wait(lock1);
    wait(lock2);
    withdraw(from, amount);
    deposit(to, amount);
    signal(lock2);
    signal(lock1);
}
A deadlock happens if you run transaction(A, B, 50) and transaction(B, A, 10) concurrently: each call can grab its first lock and then block forever waiting for the other's.
How can this be prevented?
would this work?
A simple deadlock prevention strategy when handling locks is to have strict order on the locks in the application and always grab the locks according to this order. Assuming all accounts have a number, you could change your logic to always grab the lock for the account with the lowest account number first. Then grab the lock for the one with the highest number.
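A minimal Java sketch of that ordering strategy; the Account class with a numeric id is an assumption for illustration, and any total order on the locks would work equally well:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical account with an id that defines the global lock order.
class Account {
    final int id;
    double balance;
    final ReentrantLock lock = new ReentrantLock();
    Account(int id, double balance) { this.id = id; this.balance = balance; }
}

public class OrderedTransfer {
    // Always lock the account with the lower id first, so two
    // concurrent opposite transfers can never deadlock.
    static void transaction(Account from, Account to, double amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 100), b = new Account(2, 100);
        Thread t1 = new Thread(() -> transaction(a, b, 50));
        Thread t2 = new Thread(() -> transaction(b, a, 10));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Both transfers complete: a holds 60.0, b holds 140.0.
        System.out.println(a.balance + " " + b.balance);
    }
}
```

Whichever thread reaches the second lock first wins; the other waits briefly instead of holding a lock the first thread needs.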
Another strategy for preventing deadlocks is to reduce the number of locks. In this case it might be better to have one lock that locks all accounts. It would definitely make the lock structure far simpler. If the application shows performance problems under heavy load and profiling shows that lock contention is the problem, then it is time to invent a more fine-grained locking strategy.
By making the entire transaction a critical section? That's only one possible solution, at least.
I have a feeling this is homework of some sort, because it's very similar to the dining philosophers problem based on the example code you give. (Multiple solutions to the problem are available at the link provided, just so you know. Check them out if you want a better understanding of the concepts.)

How do I determine whether a deadlock will occur in this system?

N processes share M resource units that can be reserved and released only one at a time. The maximum need of each process does not exceed M, and the sum of all maximum needs is less than M + N. Can a deadlock occur in this system?
I hope you got the answer. Answering this question for other visitors.
The answer is that a deadlock cannot occur in this system.
Proof sketch: suppose a deadlock occurs. Then every process is blocked waiting for one more unit, so all M units must be allocated (since units are requested one at a time, a free unit would satisfy some blocked process). Each process i therefore holds alloc_i units and needs at least one more, so max_i >= alloc_i + 1. Summing over all N processes gives sum(max_i) >= sum(alloc_i) + N = M + N, contradicting the assumption that the sum of all maximum needs is less than M + N.
A worked solution is also given on page 31 of http://alumni.cs.ucr.edu/~choua/school/cs153/Solution%20Manual.pdf
The system you are describing looks like semaphores.
About your last question: YES, you "could" always produce a deadlock; if you don't see how, ask a young/shameful/motivated/deviant developer.
One good way to make one is to have strange locking/releasing rules. For example, if a process needs M resources to perform a task, it could lock half of them right away and then wait for the other half to become available before doing anything.
Assume it never gives up until it has its M precious resources and releases them all only once the task is done.
A single process wouldn't cause many problems, but several running together will: collectively they can hold all the resources while each still needs more to get out of this frozen state.

Can you sacrifice performance to get concurrency in Sqlite on a NFS?

I need to write a client/server app stored on a network file system. I am quite aware that this is a no-no, but was wondering if I could sacrifice performance (Hermes: "And this time I mean really slash.") to prevent data corruption.
I'm thinking something along the lines of:
Create a separate file in the system every time a write is called (I'm willing to do it for every connection if necessary)
Store the file name as the current millisecond timestamp
Check to see if a file with that time or earlier exists
If such a file exists, wait a random time between 0 and 10 ms, and try again.
While my file is the earliest timestamp, do work, then delete the file lock; otherwise wait 10 ms and try again.
If a file persists for more than a minute, log it as an error and stop until a person has determined that the data is not corrupted.
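Sketched in Java, the scheme above might look like the following (the class and file naming are mine, the one-minute stale-lock check is omitted for brevity, and the whole thing assumes createNewFile() is atomic on the NFS mount, which is exactly the property NFS frequently fails to guarantee):

```java
import java.io.File;
import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the timestamp-lock-file scheme described above.
public class NfsLockSketch {
    private final File lockDir;
    private File myLock;

    public NfsLockSketch(File lockDir) { this.lockDir = lockDir; }

    // Create a lock file named after the current millisecond, backing off
    // while an equal-or-earlier lock exists, then wait until ours is earliest.
    public void acquire() throws IOException, InterruptedException {
        while (true) {
            File candidate = new File(lockDir, System.currentTimeMillis() + ".lock");
            if (earliestLock() != null || !candidate.createNewFile()) {
                Thread.sleep(ThreadLocalRandom.current().nextLong(0, 11));
                continue;
            }
            myLock = candidate;
            // Handle the race where another process created an earlier lock.
            while (!myLock.equals(earliestLock())) {
                Thread.sleep(10);
            }
            return;
        }
    }

    // Delete the file lock once the work is done.
    public void release() {
        if (myLock != null) myLock.delete();
        myLock = null;
    }

    private File earliestLock() {
        File[] locks = lockDir.listFiles((d, n) -> n.endsWith(".lock"));
        if (locks == null || locks.length == 0) return null;
        File earliest = locks[0];
        for (File f : locks)
            if (f.getName().compareTo(earliest.getName()) < 0) earliest = f;
        return earliest;
    }
}
```

Even in sketch form, the races are visible: two processes can list the directory, both see no locks, and both create files, which is why the second wait loop is needed at all.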
The problem I see is trying to maintain the previous state if something locks up. Or choosing to ignore it, if the state change was actually successful.
Is there a better way of doing this, that doesn't involve not doing it this way? Or has anyone written one of these with a lot less problems than the Sqlite FAQ warns about? Will these mitigations even factor in to preventing data corruption?
A couple of notes:
This must exist on an NFS; the why is not important because it is not my decision to make (it doesn't look like I was clear enough on that point).
The number of readers/writers on the system will be between 5 and 10 all reading and writing at the same time, but rarely on the same record.
There will only be clients and a shared memory space; there is no way to put a server on there, or use a server-based RDBMS. If there were, obviously I would do it in a New York minute.
The amount of data will initially start off at about 70 MB (plain text, uncompressed) and will grow continuously from there at a reasonable, but not tremendous, rate.
I will accept an answer of "No, you can't gain reasonably guaranteed concurrency on an NFS by sacrificing performance" if it contains a detailed and reasonable explanation of why.
Yes, there is a better way. Don't use NFS to do this.
If you are willing to create a new file every time something changes, I expect that you have a small amount of data and/or very infrequent changes. If the data is small, why use SQLite at all? Why not just have files with node names and timestamps?
I think it would help if you described the real problem you are trying to solve a bit more. For example if you have many readers and one writer, there are other approaches.
What do you mean by "concurrency"? Do you actually mean "multiple readers/multiple writers", or can you get by with "multiple readers/one writer with limited latency"?
