EventStore with NServiceBus causes deadlocks when more than 1 thread is used - neventstore

I'm currently working on a project that uses EventStore, CommonDomain, and NServiceBus. When I have NumberOfWorkerThreads set to 1, all of our services (we have 6 NServiceBus services, each with its own event store) run perfectly. But when I set NumberOfWorkerThreads to more than one, I start seeing a ton of deadlocks, at least 50 per minute, all on the Commits table. From what I've found, it looks like I'm updating the same aggregate on multiple threads. This can easily happen during an import of a catalog, say, where one thread updates an item's quantity while another updates its price, so both threads are trying to update the same aggregate.
Has anyone else had this issue, and how have you gotten around it?
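A minimal sketch of one common mitigation, retrying the handler when the optimistic-concurrency check fails (the same shape works for deadlock exceptions surfaced by the persistence layer). All type names here are illustrative stand-ins for the CommonDomain/NEventStore API, not the exact signatures:

using System;

// Illustrative only: IRepository, CatalogItem, UpdateQuantity and
// ConcurrencyException stand in for the CommonDomain/NEventStore types.
public class UpdateQuantityHandler
{
    private readonly IRepository repository;
    private const int MaxAttempts = 3;

    public UpdateQuantityHandler(IRepository repository)
    {
        this.repository = repository;
    }

    public void Handle(UpdateQuantity message)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                // Reload inside the loop so each retry sees the winner's commit.
                var item = repository.GetById<CatalogItem>(message.ItemId);
                item.ChangeQuantity(message.NewQuantity);
                repository.Save(item, Guid.NewGuid());
                return;
            }
            catch (ConcurrencyException)
            {
                if (attempt >= MaxAttempts)
                    throw; // let the bus retry or move the message to the error queue
                // Another worker committed to the same aggregate first; try again.
            }
        }
    }
}

Letting the exception bubble and relying on NServiceBus's built-in message retries achieves the same effect with less code, at the cost of re-running the whole handler.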

Related

Need to run multiple copies of an ASP.NET website with DataGridView sharing a single URL (C#)

I have a drop-down menu that fills 4 DataGridViews based on the branch selected, or, when the start button is pressed, loops through all 80 branches.
There are 4 SQL Server stored procedures, 1 per DataGridView, each against its own SQL table, read access only.
I need to run multiple copies of the site behind a single URL.
Database retrieval time = number of copies running (the single ASP.NET website over a single URL, called multiple times) × database runtime.
So if data retrieval takes 30 seconds, running 3 copies takes 90 seconds and seems to fragment the data or time out.
I'm using NOLOCK hints, so there are no deadlocks.
But I need to optimize this.
Should I create one web service, and would that solve the problem by hitting the database only once instead of once per URL hit?
Thank you.
David
Thank you, the timer was taking over and performing differently on the server than on my local machine. The UI, timer, and database were also out of sync, so adding a Thread.Sleep helped, as did a longer interval on the timer. Putting all the database calls together, instead of opening one connection per database call, helped too. Now it all runs at the same time.
The main takeaway, I think, is that the timer and the Thread.Sleep were the main thing.
I also had a UI button, to which I added some code so that once it's pushed, pushing it again does nothing.
Thank you to everyone that posted an answer.
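A rough sketch of that "all the database calls together" change, assuming SQL Server and one connection shared by all four calls; the procedure names are made up, since the real ones weren't shared:

using System.Data;
using System.Data.SqlClient;

public static class BranchData
{
    // Fills all four grids from a single connection instead of
    // opening one connection per stored-procedure call.
    public static DataTable[] Load(string connectionString, int branchId)
    {
        var procs = new[] { "usp_Grid1", "usp_Grid2", "usp_Grid3", "usp_Grid4" };
        var tables = new DataTable[procs.Length];

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            for (var i = 0; i < procs.Length; i++)
            {
                using (var command = new SqlCommand(procs[i], connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@BranchId", branchId);
                    tables[i] = new DataTable();
                    new SqlDataAdapter(command).Fill(tables[i]);
                }
            }
        }
        return tables;
    }
}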
Well, this will come down not really to the number of records pulled, but to whether you are executing multiple SQL statements over and over.
I mean, to fill 4 grids with 4 queries? That's going to be pretty much instant, assuming the record set size for each grid is, say, in the 100-row range. Such a button click and filling the grids should take very little time.
And even if you're using a row databound event, once again, it will run fast, but ONLY if you're not executing a whole bunch of additional SQL queries. So the number of "hits", that is, SQL statements sent to the database, is for the most part what determines the speed of this setup.
So say you have one grid that pulls 100 rows, but the next grid needs data based on 100 "new" SQL queries. In that case, what you can often do is fill a recordset with the child data and filter against that recordset, and thus execute not 100 SQL queries but only 1 (see the sketch below).
So, this will really come down to how many separate SQL queries you execute in total.
To fill 4 grids with 4 queries? I don't see that being a problem, so we are no doubt missing some big "whopper" of a detail you haven't shared with us.
Expand your question with how many SQL statements are generated in total - that's the bottleneck here. Reduce that, and your performance should be just fine.
And if the 4 simple stored procedures have "cursors" that loop and again generate many SQL commands - get rid of that.
Pulling 4 basic SQL queries is nothing - something else is at work that you're not sharing. Why would each single stored procedure take so very long? That's the detail we are missing here.
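To illustrate the "fill a recordset with the child data and filter against that recordset" idea: pull the child rows once, then filter in memory per grid row instead of issuing a query per row. Table and column names here are hypothetical:

using System.Data;
using System.Data.SqlClient;

public static class ChildLookup
{
    // One SQL hit for ALL child rows...
    public static DataTable LoadChildren(string connectionString)
    {
        var children = new DataTable();
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT ParentId, Amount FROM ChildTable", connection))
        {
            adapter.Fill(children); // Fill opens and closes the connection
        }
        return children;
    }

    // ...then, per row (e.g. in a RowDataBound handler), filter in memory
    // instead of sending another query to the database.
    public static DataRow[] RowsForParent(DataTable children, int parentId)
    {
        return children.Select("ParentId = " + parentId);
    }
}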

Is there a way to define a trigger that runs reliably at a datetime specified as a field in the updated/created object?

The question
Is it possible (and if so, how) to make it so that when an object's field x (which contains a timestamp) is created/updated, a specific trigger will be called at the time specified in x (probably calling a serverless function)?
My specific context
In my specific instance, the object can be seen as a task. I want to make it so that when the task is created, a serverless function tries to complete it, and if it doesn't succeed, it updates the record with the partial results and specifies in field x when the next attempt should happen.
The attempts should not be spaced at a fixed interval. For example, a task may require 10 successive attempts at approximately 30-second intervals, but then it may need to wait 8 hours.
There is currently no way to (re)trigger a Cloud Function on a node after a certain timespan.
The closest you can get is to regularly schedule a cron job that runs over the list of tasks. For more on that, see this sample in the function-samples repo, this blog post by Abe, and this video where Jen explains them.
I admit I've never liked this cron-job approach, since you have to query the list to find the items to process. A while ago I wrote a more efficient solution that runs a priority queue in a node process. My code was a bit messy, so I'm not quite ready to share it, but it wasn't a lot (<100 lines). So if the cron-trigger approach doesn't work for you, I recommend investigating that direction.
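The cron-scan approach boils down to: on every tick, select the tasks whose next-attempt time has passed, try them, and write back each task's new next-attempt time. A minimal sketch of that shape, in C# purely for illustration (the linked samples are Node.js, and the task list here is a stand-in for the Firebase node):

using System;
using System.Collections.Generic;
using System.Linq;

public class PendingTask
{
    public string Id;
    public DateTime NextAttemptAt; // the field "x" from the question
}

public static class TaskScanner
{
    // Called by the cron trigger, e.g. once per minute.
    public static void Tick(IList<PendingTask> allTasks, DateTime now)
    {
        // Scanning the whole list is exactly the cost the answer above dislikes.
        foreach (var task in allTasks.Where(t => t.NextAttemptAt <= now))
        {
            if (!TryComplete(task))
            {
                // Store partial results and pick the next attempt time;
                // the interval can vary per task (30 seconds, 8 hours, ...).
                task.NextAttemptAt = now.AddSeconds(30);
            }
        }
    }

    private static bool TryComplete(PendingTask task)
    {
        return false; // hypothetical: the real work the serverless function does
    }
}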

AX 2012R2: Lookup query takes too long, lookup never opens

I have an AX 2012 R2 CU6 installation (build & client 6.2.1000.1437, kernel 6.2.1000.5268) with the following problem:
On AP > Journals > Invoices > Invoice Journal > Lines (form LedgerJournalTransVendInvoice), when I select Vendor as the account type and then activate the lookup on the Account field, AX freezes for a couple of minutes, and when it recovers, the lookup is closed/never opened. This happens every time with account type Vendor; other account types work just fine.
I debugged this to LedgerJournalEngine.accountNumLookup() --> VendTable.lookupVendor, at the line
formSegmentedEntryControl.performFormLookup(formRun);
This call is what takes up the time.
Any ideas before I hire an exorcist?
There is a known KB for this for R3; look for it on Lifecycle Services:
KB 3086961 "Performance issue of VendorLookup on the volume data, during the GFM Bugbash 6/11 took over 30 minutes"
Even though the fix is for R3, it should be easy to backport, as the changes are described as:
The root cause seemed to be the DirPartyLookupGridView, which had around 14 joins on views and tables. This view is used in many places and hence seemed to have grown quite a lot over time.
The changes in the hotfix remove the view and add only the required datasources - dirpartytable and logisticsaddress - to the VendTableLookup form.
The custtableLookup is not using the view, using custom datasource joins instead, so no changes there.
Try implementing that change and see what happens.
I'm not sure this will fix your issue: in your execution plan, the only operation that looks really expensive is the sort operator, which needs to spill to tempdb (you might need more memory to solve that). But the datasource changes could have the effect of removing the sort operator from the execution plan, since the data may already be sorted by an index.
Probably SQL Server chose the wrong query plan.
First check that you have not disabled any indexes on the involved tables, then do a synchronize on them.
If it's still a problem, run UPDATE STATISTICS on the involved tables (including the tables in the view).

Update aggregate sum in one to many relationship when child persisted

My pet learning project is a simple Project Management / Task app that has the following entity relationships:
Project has many Tasks
Task has many Activities
Activity is basically a time log with a startedAt, endedAt and a durationInSeconds.
Projects and Tasks have a column called durationInSecondsSum which needs to be updated every time an Activity is persisted, updated, or deleted.
durationInSecondsSum is an aggregate field for the sum of Activity::durationInSeconds.
My question is: what is the best way to update the durationInSecondsSum property in both Project and Task?
Possible options I've considered but haven't fully wrapped my head around:
Aggregate Root.
As described here: http://doctrine-orm.readthedocs.org/en/latest/cookbook/aggregate-fields.html.
Given that Activity::durationInSeconds can be edited at any time and that the changes need to bubble from Activity to Task and Project, I'm not sure how to implement this up the chain.
Doctrine events
I implemented a postFlush event to calculate the sum in a previous project. After reading more about events, it seems that this is a bad implementation that can lead to duplicate entities.
I also considered onFlush, which might work better.
The problem I see there is that I wouldn't be able to calculate the sum directly from the DB, given that the Activity hasn't been saved yet.
Message Queues
Triggering a message to update the Task and Project seems like overkill.
Duration Service
A separate service which takes a collection of Tasks / Projects and calculates their durations on the fly without ever saving that data to the db.
Any guidance / experience / war stories would be very much appreciated.
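For what it's worth, the gist of the aggregate-field pattern from the linked cookbook entry (the "Aggregate Root" option) is to push the delta up the chain inside the domain model, in the same transaction as the change. A sketch in C#-style code, since the pattern itself is ORM-agnostic; names mirror the question (Task is renamed ProjectTask only to avoid clashing with the BCL type):

public class Activity
{
    public ProjectTask Task; // parent
    public int DurationInSeconds { get; private set; }

    public void SetDuration(int newDuration)
    {
        // Only the delta moves up, so edits at any time stay consistent.
        var delta = newDuration - DurationInSeconds;
        DurationInSeconds = newDuration;
        Task.ApplyDurationDelta(delta);
    }
}

public class ProjectTask
{
    public Project Project; // parent
    public int DurationInSecondsSum { get; private set; }

    public void ApplyDurationDelta(int delta)
    {
        DurationInSecondsSum += delta;
        Project.ApplyDurationDelta(delta); // bubble up to the Project
    }
}

public class Project
{
    public int DurationInSecondsSum { get; private set; }

    public void ApplyDurationDelta(int delta)
    {
        DurationInSecondsSum += delta;
    }
}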

Flex: Calculate hours between 2 times?

I am building a scheduling system. The current system is just Excel, and they type in times like 9:3-5 (meaning 9:30am-5pm). I haven't settled on the format these times will be stored in yet; I think I may have to use military time to be able to calculate the hours, but I would like to avoid that if possible. Basically, I need to figure out how to calculate the hours: for example, 9:3-5 would be 7.5 hours. I am open to different ways of storing the times as well. I just need to display them in a way that is easy for the user to understand, and be able to calculate how many hours a range covers.
Any ideas?
Thanks!!
Quick, dirty, ugly solution:
public static const millisecondsPerHour:int = 1000 * 60 * 60;

// Returns the span between two dates in hours, keeping the fraction
// (so 9:30am-5pm yields 7.5 rather than rounding up to 8).
private function getHoursDifference(minDate:Date, maxDate:Date):Number {
    return (maxDate.getTime() - minDate.getTime()) / millisecondsPerHour;
}
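As for the "9:3-5" shorthand itself, here is a rough parser, with two assumptions that are not in the original post: a single digit after the colon means tens of minutes, and an end hour at or below the start hour is pm. Shown in C# purely to illustrate the arithmetic (the question is Flex, but the logic ports directly):

using System;

public static class ScheduleShorthand
{
    // Parses shorthand like "9:3-5" (9:30am-5pm) into a decimal hour count.
    public static double HoursInRange(string shorthand)
    {
        var parts = shorthand.Split('-');
        var start = ToDecimalHours(parts[0]);
        var end = ToDecimalHours(parts[1]);
        if (end <= start)
            end += 12; // assumption: "5" after a morning start means 5pm
        return end - start;
    }

    private static double ToDecimalHours(string token)
    {
        var pieces = token.Split(':');
        var hours = double.Parse(pieces[0]);
        if (pieces.Length > 1)
            hours += double.Parse(pieces[1]) * 10 / 60.0; // "3" => 30 minutes
        return hours;
    }
}

// ScheduleShorthand.HoursInRange("9:3-5") == 7.5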
Ok, it sounds like you're talking about changing from a schedule or plan that's currently developed by a person using an Excel spreadsheet, and you want to "computerize" the process. 1st warning: "Scheduling is not trivial." How you store the time isn't all that important, but it is common to establish some level of granularity and convert the task times to integer multiples of that interval to simplify the scheduling task.
If you want to automate the process, or simply error-check, you'll want to abstract things a bit. A basic weekly calendar with start and stop times, and perhaps shift info, will be needed. An exception calendar, allowing holidays and other exceptions, is a good idea to plan for from the start. A table containing resource and capacity info will be needed, as will a table containing all the tasks to schedule and any dependencies between tasks. Some questions to settle:
Do you want to consider concurrent requirements? (I need a truck and a driver...)
Do you want to consider intermittent scheduling of resources?
Should you support forward or backward scheduling?
Do you plan to support what-if scenarios? (Then you'll want a master schedule that's independent of the planning schedule(s).)
Do you want to prioritize how tasks are placed on the schedule? (A lot of thought is needed here, depending on the work to be done.)
You may very well want to identify a subset of the tasks to actually schedule, and then simply provide a reporting mechanism to show whether the remaining work fits into the white space in the schedule. (If you can't get the most demanding 10% done in the time available, who cares about the other 90%?)
2nd Warning: "If God wrote the schedule most companies couldn't follow it."
