I have a BaseX database that has about 500,000 event nodes, and I use the following query:
for $b in //EventList/Event[Type = 'Measurement']
let $date as xs:dateTime := xs:dateTime($b/TimeStamp)
where $date ge xs:dateTime('"+startdate+"')
and $date le xs:dateTime('"+enddate+"')
return $b
The weird thing is that there is a big difference in the speed at which the data are returned.
Sometimes I receive the data in 5 seconds and sometimes in 75 seconds, for exactly the same request.
There is an external application that makes requests one after another.
I observe that when the application starts, the data are returned quickly.
As the application keeps making requests, the data are returned more and more slowly.
Could this be caused by connections that are not being closed properly?
I use
final BaseXClient session = new BaseXClient("localhost", 1984, "..","..");
and, to close the connection:
session.close();
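If the session is not closed on every code path (for example, when the query throws), each request can leak a server-side session, which would be consistent with a gradual slowdown. Here is a minimal sketch of guarding against that, assuming the standard BaseXClient example class that ships with BaseX (the credentials are placeholders):
import java.io.IOException;

public final class EventQuery {
    // Runs one query and guarantees the session is released,
    // even if execute() throws.
    public static String run(String xquery) throws IOException {
        final BaseXClient session =
                new BaseXClient("localhost", 1984, "user", "pass");
        try {
            // XQUERY is the standard BaseX command for ad-hoc queries.
            return session.execute("XQUERY " + xquery);
        } finally {
            session.close();
        }
    }
}
The same applies to the Query objects returned by session.query(...), which have their own close().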
I am running a query in a for loop which performs updates on a document. I have set the time limit to 36000.
However, the query has been running for more than 50 hours. I have 20 threads, and all of them are taken by the above query (since it is a for loop).
Even after killing each query from the admin UI, and even after restarting the MarkLogic server, the next 20 queries come into the queue, start running again, and occupy all threads.
let $_ := xdmp:set-request-time-limit(36000)
for $each in local:some-update-function()
let $document := xdmp:invoke-function(
  function() { local:another-update-function() },
  $constants:UPDATE_AUTO_COMMIT)
return $document
So I have a basic
$.ajax({
    url: 'MyController/MyAction',
    method: 'POST',
    async: true,
    // ...
});
that can be called very frequently as it is event-driven. Like it may be called 50 times in 1 second if the user is being obnoxious. It updates values in the database.
My friend told me that it's possible that the updates may be sent to the database in the wrong order. Is this true? This is causing me major cognitive dissonance and I can't sleep tonight.
I should mention that the values being updated in the database are associated with a user. In particular, the data looks like
data : { userId : '21EC2020-3AEA-4069-A2DD-08002B30309D',
answerId : '69',
val : 'd' }
where the only values changing in rapid succession are answerId and val.
Unfortunately, my understanding is: yes, the order is not guaranteed.
You send an HTTP request 50 times in 1 second, and the server saves each one to the DB.
When the network is good and the server is keeping up, the requests will usually be saved to the DB in order.
But if the HTTP server is busy or the network is interrupted, there is no guarantee that the data will be stored in the sequence the events actually happened; for example, one or two records may end up swapped in the DB.
My suggestion: if the order is critical and the update traffic is heavy, you should add an event timestamp ("happen time") to the HTTP data and save it to the DB.
When you select data from the DB, order by that timestamp; this gives you the correct order as the events happened, and avoids the mis-ordering caused by a busy server or network.
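A minimal server-side sketch of that idea (the question is ASP.NET MVC, but the pattern is stack-agnostic, so plain Java/JDBC is used here; the answers table and its column names are hypothetical):
import java.sql.*;

public final class AnswerStore {
    // Store each update together with the client-supplied event time.
    static void save(Connection con, String userId, String answerId,
                     String val, Timestamp clientTime) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO answers (user_id, answer_id, val, client_time) "
                + "VALUES (?, ?, ?, ?)")) {
            ps.setString(1, userId);
            ps.setString(2, answerId);
            ps.setString(3, val);
            ps.setTimestamp(4, clientTime);
            ps.executeUpdate();
        }
    }

    // Read back in the order the events happened on the client,
    // not the order the requests reached the server.
    static void printInEventOrder(Connection con, String userId)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT answer_id, val FROM answers "
                + "WHERE user_id = ? ORDER BY client_time")) {
            ps.setString(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " = " + rs.getString(2));
                }
            }
        }
    }
}
A client-side sequence number works just as well as a timestamp, and avoids clock-resolution issues when two events fall in the same millisecond.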
I have a simple BizTalk 2013 R2 application that imports a file into a table, then executes a long-running post-import process (via stored procedures).
Symptoms when importing two files:
The import of the first file has no issues
Then the post processing starts (slow, as expected, due to the long-running stored procedure)
Then, if you drop a second file, the first file's post processing disappears and the second import takes place
Then they start alternating back and forth (you can see the post processing field being populated as expected)
Both send ports are active; sometimes you see a third one dehydrated
Since no errors are reported, is this a setting, or do I need to move the post processing out of the long-running transaction?
Details:
Orchestration Transaction Type is long running
The timeout for the post processing send port is 59 minutes
The post processing stored procedure invokes child stored procedures.
No errors are reported anywhere
Both send ports have ordered delivery checked
Post Processing Stored Procedures:
CREATE PROCEDURE [sync].[MPostProcessing]
    @Code NVARCHAR(2)
AS
    ----
    ---- 2. Normalize Address
    ----
    IF @Code = '99'
        EXEC sync.AElBatch @Code = @Code

CREATE PROCEDURE [sync].[AElBatch]
    @Code AS VARCHAR(2)
AS
    DECLARE @ID AS INT
    WHILE EXISTS ( SELECT ID
                   FROM sync.[mtable]
                   WHERE Code = @Code
                     AND PostProcessingDone = 0 )
    BEGIN
        SELECT TOP 1 @ID = ID
        FROM sync.[mtable]
        WHERE Code = @Code
          AND PostProcessingDone = 0

        EXEC sync.PParse @ID = @ID

        UPDATE sync.[mtable]
        SET PostProcessingDone = 1
        WHERE Code = @Code
          AND ID = @ID
    END
And then the PParse stored procedure does more (all working, no errors reported).
[Image of BizTalk Server Administration Console]
So this is too long for a comment but I'm not 100% sure of your problem still. Either way:
It seems like you likely have some issues with your SPs. Refactor them to use set-based queries instead of WHILE loops (or cursors, if you have any). Forcing SQL Server to process each individual scalar variable as a separate call will prevent it from fully optimizing whatever it's doing in sync.PParse - pass it a table variable or something if you need to, so that it can parallelize the work properly and stop holding things up so badly.
It's quite possible that sync.PParse has a bug in it that is reading data it shouldn't. These lines in particular from AElBatch are troubling:
SELECT TOP 1 @ID = ID
FROM sync.[mtable]
WHERE Code = @Code
  AND PostProcessingDone = 0
You probably want to add a batch identifier in there of some sort so that PostProcessing#2 doesn't start picking up what was really meant for PostProcessing#1.
Double-check what's going on with sp_who2 and see if things are getting blocked. It's likely that something is going on there, even if no errors are surfacing properly.
In the end, if none of that works, you might have to make them into a single SP that BizTalk calls so that Ordered Delivery will keep both jobs in the same queue - rather than allowing File Load #2 to complete before post processing job #1 is done.
Background
I have an issue where, roughly once a month, the AIFQueueManager table is populated with ~150 records that relate to messages sent to AX over 6 months ago (where they "successfully failed", i.e. errored due to violation of business rules, but returned an exception as expected).
Question
What tables are involved in the AIF inbound message process, and in what order do events occur? E.g. the XML file is picked up and recorded in the AifDocumentLog, the data is extracted and added to the AifQueueManager and AifGatewayQueue tables, records from there are then inserted into the AifMessageLog, etc.
Thanks in advance.
There are 4 main AIF classes; I will be talking about inbound only, focusing on the included file system adapter and flat XML files. I hope this makes things a little less hazy.
AIFGatewayReceiveService - Uses adapters/channels to read messages in from different sources, and dumps them in the AifGatewayQueue table
AIFInboundProcessingService - This processes the AifGatewayQueue table data and sends to the Ax[Document] classes
AIFOutboundProcessingService - This is the inverse of #2. It creates XMLs with relevant metadata
AIFGatewaySendService - This is the inverse of #1, where it uses adapters/channels to send messages out to different locations from the AifGatewayQueue
For #1
So #1 basically fills the AifGatewayQueue, which is just a queue of work. It loops through all of your channels and then finds the relevant adapter by ClassId. The adapters are classes that implement AifIntegrationAdapter and AifReceiveAdapter, in case you want to make your own custom one. As it loops over the different channels, it loops over each "message" and tries to receive it into the queue.
If it can't process the file for some reason, it catches the exception and logs it in the SysExceptionTable [Basic > Periodic > Application Integration Framework > Exceptions]. These messages are scraped from the infolog, and they are generated mostly by the receive adapter, which would be AifFileSystemReceiveAdapter in my example.
For #2
So #2 processes the inbound messages sitting in the queue (ready/in process). AifRequestProcessor\processServiceRequest does the work.
From this method, it will call:
Various calls to Classes\AifMessageManager, which puts records in the AifMessageLog and the AifDocumentLog.
This key line: responseMessage = AifRequestProcessor::executeServiceOperation(message, endpointActionPolicy); which actually does the operation against the Ax[Document] classes by eventually getting to AifDispatcher::callServiceMethod(...)
It gets the return XML and packages that into an AifMessage called responseMessage, and returns that where it may be logged. It also takes that return value and, if there is a response channel, submits it back into the AifGatewayQueue.
AifQueueManager is actually cleared and populated on the fly by calling AifQueueManager::createQueueManagerData().
Consider,
A query takes more than a minute to retrieve data (due to the large volume of data) from the database.
I know that we can set the "timeout" attribute on the select tag (for a single query alone) or the "defaultStatementTimeout" attribute on the settings tag in SqlMapConfig.xml (for all queries) to forcibly terminate a query in execution.
<select id="uniqueName" parameterClass="java.util.Map" resultClass="java.lang.String" timeout="60">
or
<settings useStatementNamespaces="false" defaultStatementTimeout="60"/>
With the above configuration, iBatis throws a "User cancelled request" error and terminates the execution.
Do we have any other way to terminate the execution?
My Scenario is:
When a user requests 3 years of data, it takes more than a minute to fetch it from the database.
If, in the meantime, the user requests 1 day's data or sends a "cancel" request, I have to forcibly terminate the previous execution (the 3-year retrieval), because it affects performance even with a limited number of users.
NOTE
I haven't used any of the settings above.
Please provide me a solution for this. Thanks in advance.
You can set a resource limit by modifying the Profile associated with the database user.
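That works at the database level (on Oracle, a user's profile is changed with ALTER PROFILE). If you also need to cancel a specific statement from application code, plain JDBC, which iBatis runs on top of, offers Statement.cancel(); whether the cancellation actually aborts the server-side work depends on the driver and the database. A rough sketch (the connection URL and table name are placeholders):
import java.sql.*;

public final class CancellableQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
             Statement stmt = con.createStatement()) {

            // Watchdog thread: cancel the statement after 60 seconds
            // (or call stmt.cancel() from your "cancel" request handler).
            Thread watchdog = new Thread(() -> {
                try {
                    Thread.sleep(60_000);
                    stmt.cancel(); // asks the driver to abort the running query
                } catch (InterruptedException | SQLException ignored) {
                }
            });
            watchdog.setDaemon(true);
            watchdog.start();

            // Hypothetical long-running query.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT * FROM three_year_report")) {
                while (rs.next()) {
                    // process rows ...
                }
            } catch (SQLException e) {
                // A cancelled statement typically surfaces as an exception here.
                System.out.println("Query terminated: " + e.getMessage());
            }
        }
    }
}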