MarkLogic query taking too long - xquery

I am running a query in a for loop that updates documents. I have set the request time limit to 36000 seconds.
However, the query has been running for more than 50 hours. I have 20 threads, and all of them are taken by the query above (since it is a for loop).
Even after killing each query from the Admin UI, and even after restarting the MarkLogic server, the next 20 queries come out of the queue, start running, and occupy all the threads again.
let $_ := xdmp:set-request-time-limit(36000)
for $each in local:some-update-function()
let $document := xdmp:invoke-function(
    function() { local:another-update-function() },
    $constants:UPDATE_AUTO_COMMIT)
return $document
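For what it's worth, one pattern that avoids pinning app-server threads for the whole loop is to queue each update on the Task Server instead of invoking it inline. This is only a hedged sketch, not the poster's code: the two local: functions are the placeholders from the question, and the update option stands in for whatever transaction mode $constants:UPDATE_AUTO_COMMIT configures.

(: Sketch: spawn each update as a separate Task Server task; the calling
   request returns once the tasks are queued instead of holding a thread. :)
for $each in local:some-update-function()
return
    xdmp:spawn-function(
        function() { local:another-update-function() },
        <options xmlns="xdmp:eval">
            <update>true</update>
        </options>)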

Related

Why does Hangfire wait for 15s every few seconds when polling sql server for jobs?

I've inherited a system that uses Hangfire with SQL Server job storage. Usually when a job is scheduled to run immediately, we notice it takes a few seconds before it's triggered.
Looking at SQL Profiler when running in my dev environment, the SQL run against the Hangfire DB looks like this:
exec sp_executesql N'delete top (1) JQ
output DELETED.Id, DELETED.JobId, DELETED.Queue
from [HangFire].JobQueue JQ with (readpast, updlock, rowlock, forceseek)
where Queue in (@queues1) and (FetchedAt is null or FetchedAt < DATEADD(second, @timeout, GETUTCDATE()))',N'@queues1 nvarchar(4000),@timeout float',@queues1=N'MYQUEUENAME_master',@timeout=-1800
-- Exactly the same SQL as above is executed about 6 times/second for about 3-4 seconds,
-- then nothing for about 2 seconds, then:
exec sp_getapplock @Resource=N'HangFire:recurring-jobs:lock',@DbPrincipal=N'public',@LockMode=N'Exclusive',@LockOwner=N'Session',@LockTimeout=5000
exec sp_getapplock @Resource=N'HangFire:locks:schedulepoller',@DbPrincipal=N'public',@LockMode=N'Exclusive',@LockOwner=N'Session',@LockTimeout=5000
exec sp_executesql N'select top (@count) Value from [HangFire].[Set] with (readcommittedlock, forceseek) where [Key] = @key and Score between @from and @to order by Score',N'@count int,@key nvarchar(4000),@from float,@to float',@count=1000,@key=N'recurring-jobs',@from=0,@to=1596053348
exec sp_executesql N'select top (@count) Value from [HangFire].[Set] with (readcommittedlock, forceseek) where [Key] = @key and Score between @from and @to order by Score',N'@count int,@key nvarchar(4000),@from float,@to float',@count=1000,@key=N'schedule',@from=0,@to=1596053348
exec sp_releaseapplock @Resource=N'HangFire:recurring-jobs:lock',@LockOwner=N'Session'
exec sp_releaseapplock @Resource=N'HangFire:locks:schedulepoller',@LockOwner=N'Session'
-- Then nothing is executed for about 8-10 seconds, then:
exec sp_executesql N'update [HangFire].Server set LastHeartbeat = @now where Id = @id',N'@now datetime,@id nvarchar(4000)',@now='2020-07-29 20:09:19.097',@id=N'ps12345:19764:fe362d1a-5ee4-4d97-b70d-134fdfab2b87'
-- Then about 500ms-2s later I get
exec sp_executesql N'delete top (1) JQ ... -- i.e. same as the first query
The update LastHeartbeat query is only there every second cycle (from just a brief inspection; maybe that's not exactly right).
It looks like there are at least 3 threads running the DELETE query against JQ, since I can see several RPC:Starting events before the RPC:Completed, suggesting they're being executed in parallel rather than sequentially.
I don't know if that's normal, but it seems weird, as I thought we had just one 'consumer' of the jobs.
I only have one Queue in my dev environment, although in live we’d have 20-50 I’d guess.
Any suggestions on where I should look for the configuration that’s causing:
a) the 8-10s pause between checking for jobs
b) the number of threads that are checking for jobs - it seems like I have too many
After writing this I realised we were using an old version so I upgraded from 1.5.x to 1.7.12, upgraded the database, and changed the startup config to this:
app.UseHangfireDashboard();
GlobalConfiguration.Configuration
    .UseSqlServerStorage(connstring, new SqlServerStorageOptions
    {
        CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
        QueuePollInterval = TimeSpan.Zero,
        SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
        UseRecommendedIsolationLevel = true,
        PrepareSchemaIfNecessary = true, // Default value: true
        EnableHeavyMigrations = true     // Default value: false
    })
    .UseAutofacActivator(_container);
JobActivator.Current = new AutofacJobActivator(_container);
but if anything the problem is now worse. Or the same but faster: 20 calls to delete top (1) JQ... happen within about 1s now, then the other queries, then a 15s wait, then it starts all over again.
To be clear, the main problem is that if any jobs are added during that 15s delay then it'll take the remainder of that 15s before my job is executed. A second problem I think is it's hitting SQL Server more than needed: 20 times in a second is a bit much, for my needs at least.
(Cross-posted to hangfire forums)
If you don't set QueuePollInterval then Hangfire with sql server storage defaults to polling every 15s. So the first thing to do if you have this problem is set QueuePollInterval to something smaller, e.g. 1s.
But in my case, even when I set that, it wasn't having any effect. The reason was that I was calling app.UseHangfireServer() before calling GlobalConfiguration.Configuration.UseSqlServerStorage() with the SqlServerStorageOptions.
When you call app.UseHangfireServer() it uses the current value of JobStorage.Current. My code had set that:
var storage = new SqlServerStorage(connstring);
JobStorage.Current = storage;
then later called
app.UseHangfireServer()
then later called
GlobalConfiguration.Configuration
    .UseSqlServerStorage(connstring, new SqlServerStorageOptions
    {
        CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
        QueuePollInterval = TimeSpan.Zero,
        SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
        UseRecommendedIsolationLevel = true,
        PrepareSchemaIfNecessary = true,
        EnableHeavyMigrations = true
    });
Reordering the startup code so that UseSqlServerStorage() with the SqlServerStorageOptions runs before app.UseHangfireServer() means the SqlServerStorageOptions take effect.
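In other words, a minimal sketch of the working order (same connstring and options as above, trimmed to the relevant parts):

// Configure storage with its options first; this also sets JobStorage.Current...
GlobalConfiguration.Configuration
    .UseSqlServerStorage(connstring, new SqlServerStorageOptions
    {
        QueuePollInterval = TimeSpan.FromSeconds(1), // instead of the 15s default
        SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5)
    });

// ...then start the server, which picks up the storage configured above.
app.UseHangfireServer();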
I would suggest checking the Hangfire BackgroundJobServerOptions to see what polling interval you have set up there. This defines how long the Hangfire server waits before checking whether there are any jobs in the queue to execute.
From the documentation (Hangfire Docs):
Hangfire Server periodically checks the schedule to enqueue scheduled jobs to their queues, allowing workers to execute them. By default, the check interval is equal to 15 seconds, but you can change it by setting the SchedulePollingInterval property on the options you pass to the BackgroundJobServer constructor:
var options = new BackgroundJobServerOptions
{
    SchedulePollingInterval = TimeSpan.FromMinutes(1)
};
var server = new BackgroundJobServer(options);

Creating and running a "scheduled task" once

I have some long-running (> 1 minute) tasks that used to run fine through an XQuery via REST. However, we have now placed these servers behind an Amazon load balancer, and because of the way Amazon load balancers work, no single request can take longer than 29 seconds. Amazon will just time out the request.
NOTE: There is no control over this
So the solution we came up with is for the XQuery to just trigger a scheduled task, which works fine. I had thought this would work, and it does, with one exception. We are using something like this:
declare function jobs:create-job($xquery-resource as xs:string, $period as xs:integer, $job-name as xs:string, $job-parameters as element()?, $delay as xs:integer, $repeat as xs:integer) as xs:boolean {
    let $jobstatus := scheduler:schedule-xquery-periodic-job($xquery-resource, $period, $job-name, $job-parameters, $delay, $repeat)
    return $jobstatus
};
Setting $repeat to 0, it runs once, but the job named $job-name is still in the list of scheduled jobs as "COMPLETE". Trying to run the code again will error: apparently another job with the same name cannot be created. If I change the job name it will run, so I know it is the name that is causing the error. The scheduler log shows:
<scheduler:job name="Create Vault">
    <scheduler:trigger name="Create Vault Trigger">
        <expression>30000</expression>
        <state>COMPLETE</state>
        <start>2019-08-22T20:07:14.775Z</start>
        <end/>
        <previous>2019-08-22T20:07:14.775Z</previous>
        <next/>
        <final/>
    </scheduler:trigger>
</scheduler:job>
Now, is there a different way to execute an XQuery triggered only once so that we do not have this job-name issue? Or a way to tell the scheduled task to self-destruct and remove itself? Otherwise we would need to write some complicated code to create another task that deletes the job after it has run (or maybe the create-job code should delete any existing $job-name jobs), or leave them behind and use some GUID in the name.
Update I
The cleanest way we found is this (essentially, if the job already exists when we go to create it, delete it and then create a new one):
declare function jobs:create-job($xquery-resource as xs:string, $period as xs:integer, $job-name as xs:string, $job-parameters as element()?, $delay as xs:integer, $repeat as xs:integer) as xs:boolean {
    let $cleanjob := if (count(scheduler:get-scheduled-jobs()//scheduler:job[@name = $job-name]) > 0) then scheduler:delete-scheduled-job($job-name) else true()
    let $jobstatus := if ($cleanjob) then scheduler:schedule-xquery-periodic-job($xquery-resource, $period, $job-name, $job-parameters, $delay, $repeat) else false()
    return $jobstatus
};
I would say we should also check the status and not delete the job if it is running... however, these tasks are built to run on demand, and on-demand use is likely once a day at most. The longest task formats about 3000 documents to PDFs, which takes maybe 20 minutes. It is not likely anyone would clash with another running task, but we could add that check to be sure.
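A hedged sketch of that check, keyed off the COMPLETE state visible in the scheduler log above (the job name is the one from the log; whether every non-COMPLETE state really means "still running" should be verified against the scheduler documentation):

xquery version "3.1";

let $job-name := "Create Vault"  (: example name from the log above :)
let $state := scheduler:get-scheduled-jobs()//scheduler:job[@name = $job-name]//state/string()
return
    if (empty($state)) then
        true()  (: no old job to clean up :)
    else if ($state = 'COMPLETE') then
        scheduler:delete-scheduled-job($job-name)
    else
        false()  (: job may still be active, so leave it alone :)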
Or should the answer be examining util:eval-async()? That is confusing, as the documentation does not really say whether it just "shells out" the execution. If it threads it out and then waits on the thread, that will not work either.

seemingly simple Firestore query is very slow [duplicate]

I'm having slow performance issues with Firestore when retrieving basic data stored in a document, compared to the Realtime Database: roughly a 10:1 ratio.
Using Firestore, it takes an average of 3000 ms on the first call:
this.db.collection('testCol')
    .doc('testDoc')
    .valueChanges().forEach((data) => {
        console.log(data); // 3000 ms later
    });
Using the Realtime Database, it takes an average of 300 ms on the first call:
this.db.database.ref('/test').once('value').then(data => {
    console.log(data); // 300 ms later
});
This was captured in a screenshot of the network console (not reproduced here).
I'm running the JavaScript SDK v4.50 with AngularFire2 v5.0 rc.2.
Has anyone experienced this issue?
UPDATE: 12th Feb 2018 - iOS Firestore SDK v0.10.0
Similar to some other commenters, I've also noticed a slower response on the first get request (with subsequent requests taking ~100ms). For me it's not as bad as 30s, but maybe around 2-3s when I have good connectivity, which is enough to provide a bad user experience when my app starts up.
Firebase have advised that they're aware of this "cold start" issue and they're working on a long term fix for it - no ETA unfortunately. I think it's a separate issue that when I have poor connectivity, it can take ages (over 30s) before get requests decide to read from cache.
Whilst Firebase fix all these issues, I've started using the new disableNetwork() and enableNetwork() methods (available in Firestore v0.10.0) to manually control the online/offline state of Firebase. Though I've had to be very careful where I use it in my code, as there's a Firestore bug that can cause a crash under certain scenarios.
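For reference, a minimal sketch of that workaround; disableNetwork(completion:) and enableNetwork(completion:) are the Firestore iOS APIs referred to above, but where to call them is app-specific and, as noted, needs care:

// Sketch: manually toggle Firestore's network access (iOS SDK v0.10.0+).
func goOffline() {
    Firestore.firestore().disableNetwork { error in
        if let error = error { print("disableNetwork failed: \(error)") }
        // From here on, gets and listeners are served from the local cache.
    }
}

func goOnline() {
    Firestore.firestore().enableNetwork { error in
        if let error = error { print("enableNetwork failed: \(error)") }
        // Listeners resync with the backend.
    }
}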
UPDATE: 15th Nov 2017 - iOS Firestore SDK v0.9.2
It seems the slow performance issue has now been fixed. I've re-run the tests described below and the time it takes for Firestore to return the 100 documents now seems to be consistently around 100ms.
Not sure if this was a fix in the latest SDK v0.9.2 or if it was a backend fix (or both), but I suggest everyone updates their Firebase pods. My app is noticeably more responsive - similar to the way it was on the Realtime DB.
I've also discovered Firestore to be much slower than Realtime DB, especially when reading from lots of documents.
Updated tests (with latest iOS Firestore SDK v0.9.0):
I set up a test project in iOS Swift using both RTDB and Firestore and ran 100 sequential read operations on each. For the RTDB, I tested the observeSingleEvent and observe methods on each of the 100 top-level nodes. For Firestore, I used the getDocument and addSnapshotListener methods on each of the 100 documents in the TestCol collection. I ran the tests with disk persistence on and off. The attached image (not reproduced here) shows the data structure for each database.
I ran the test 10 times for each database on the same device and a stable wifi network. Existing observers and listeners were destroyed before each new run.
Realtime DB observeSingleEvent method:
func rtdbObserveSingle() {
    let start = UInt64(floor(Date().timeIntervalSince1970 * 1000))
    print("Started reading from RTDB at: \(start)")
    for i in 1...100 {
        Database.database().reference().child(String(i)).observeSingleEvent(of: .value) { snapshot in
            let time = UInt64(floor(Date().timeIntervalSince1970 * 1000))
            let data = snapshot.value as? [String: String] ?? [:]
            print("Data: \(data). Returned at: \(time)")
        }
    }
}
Realtime DB observe method:
func rtdbObserve() {
    let start = UInt64(floor(Date().timeIntervalSince1970 * 1000))
    print("Started reading from RTDB at: \(start)")
    for i in 1...100 {
        Database.database().reference().child(String(i)).observe(.value) { snapshot in
            let time = UInt64(floor(Date().timeIntervalSince1970 * 1000))
            let data = snapshot.value as? [String: String] ?? [:]
            print("Data: \(data). Returned at: \(time)")
        }
    }
}
Firestore getDocument method:
func fsGetDocument() {
    let start = UInt64(floor(Date().timeIntervalSince1970 * 1000))
    print("Started reading from FS at: \(start)")
    for i in 1...100 {
        Firestore.firestore().collection("TestCol").document(String(i)).getDocument() { document, error in
            let time = UInt64(floor(Date().timeIntervalSince1970 * 1000))
            guard let document = document, document.exists && error == nil else {
                print("Error: \(error?.localizedDescription ?? "nil"). Returned at: \(time)")
                return
            }
            let data = document.data() as? [String: String] ?? [:]
            print("Data: \(data). Returned at: \(time)")
        }
    }
}
Firestore addSnapshotListener method:
func fsAddSnapshotListener() {
    let start = UInt64(floor(Date().timeIntervalSince1970 * 1000))
    print("Started reading from FS at: \(start)")
    for i in 1...100 {
        Firestore.firestore().collection("TestCol").document(String(i)).addSnapshotListener() { document, error in
            let time = UInt64(floor(Date().timeIntervalSince1970 * 1000))
            guard let document = document, document.exists && error == nil else {
                print("Error: \(error?.localizedDescription ?? "nil"). Returned at: \(time)")
                return
            }
            let data = document.data() as? [String: String] ?? [:]
            print("Data: \(data). Returned at: \(time)")
        }
    }
}
Each method essentially prints a Unix timestamp in milliseconds when it starts executing, then prints another Unix timestamp as each read operation returns. I took the difference between the initial timestamp and the last timestamp printed.
RESULTS - Disk persistence disabled: (chart not reproduced here)
RESULTS - Disk persistence enabled: (chart not reproduced here)
Data structure: (diagram not reproduced here)
When the Firestore getDocument / addSnapshotListener methods get stuck, they seem to get stuck for durations that are roughly multiples of 30 seconds. Perhaps this could help the Firebase team isolate where in the SDK they're getting stuck?
UPDATE: March 02, 2018
It looks like this is a known issue and the engineers at Firestore are working on a fix. After a few email exchanges and code sharing with a Firestore engineer on this issue, this was his response as of today.
"You are actually correct. Upon further checking, this slowness on getDocuments() API is a known behavior in Cloud Firestore beta. Our engineers are aware of this performance issue tagged as "cold starts", but don't worry as we are doing our best to improve Firestore query performance.
We are already working on a long-term fix but I can't share any timelines or specifics at the moment. While Firestore is still on beta, expect that there will be more improvements to come."
So hopefully this will get knocked out soon.
Using Swift / iOS
After dealing with this for about 3 days, it seems the issue is definitely the get(), i.e. .getDocuments and .getDocument. Things I thought were causing the extreme yet intermittent delays, but which turned out not to be the case:
Not so great network connectivity
Repeated calls via looping over .getDocument()
Chaining get() calls
Firestore Cold starting
Fetching multiple documents (Fetching 1 small doc caused 20sec delays)
Caching (I disabled offline persistence but this did nothing.)
I was able to rule all of these out, as I noticed the issue didn't happen with every Firestore database call I was making - only retrievals using get(). For kicks I replaced .getDocument with .addSnapshotListener to retrieve my data, and voila: instant retrieval each time, including the first call. No cold starts. So far no issues with .addSnapshotListener, only getDocument(s).
For now, I'm simply dropping .getDocument() where time is of the essence, replacing it with .addSnapshotListener, and then using
for document in querySnapshot!.documents {
    // do some magical unicorn stuff here with my document.data()
}
... in order to keep moving until this gets worked out by Firestore.
Almost 3 years later, with Firestore well out of beta, I can confirm that this horrible problem still persists ;-(
On our mobile app we use the JavaScript / Node.js Firebase client. After a lot of testing to find out why our app's startup time is around 10 sec, we identified what 70% of that time is attributable to... well, to Firebase's and Firestore's performance and cold-start issues:
firebase.auth().onAuthStateChanged() fires approx. 1.5-2 sec after startup, which is already quite bad.
If it returns a user, we use its ID to get the user document from Firestore. This is the first call to Firestore, and the corresponding get() takes 4-5 sec. Subsequent get() calls for the same or other documents take approx. 500 ms.
So in total the user initialization takes 6-7 sec, completely unacceptable. And we can't do anything about it. We can't test disabling persistence, since the JavaScript client has no such option: persistence is always enabled by default, so not calling enablePersistence() won't change anything.
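For context, a sketch of that startup path (the users collection name and initApp() are hypothetical; the timings are the ones measured above):

firebase.auth().onAuthStateChanged(async (user) => { // fires ~1.5-2 s after startup
    if (user) {
        // First Firestore call: ~4-5 s cold, ~500 ms afterwards.
        const snap = await firebase.firestore().collection('users').doc(user.uid).get();
        initApp(snap.data()); // hypothetical app bootstrap
    }
});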
I had this issue until this morning. My Firestore query via iOS/Swift would take around 20 seconds to complete a simple, fully indexed query - with non-proportional query times from 1 item returned all the way up to 3,000.
My solution was to disable offline data persistence. In my case, it didn't suit the needs of our Firestore database, which has large portions of its data updated every day.
iOS & Android users have this option enabled by default, whilst web users have it disabled by default. It makes Firestore seem insanely slow if you're querying a huge collection of documents. Basically it caches a copy of whichever data you're querying (and I believe it caches all documents within whichever collection you're querying), which can lead to high memory usage.
In my case, it caused a huge wait on every query until the device had cached the required data - hence the non-proportional query times for increasing numbers of items returned from the exact same collection: it took the same amount of time to cache the collection on each query.
Offline Data - from the Cloud Firestore Docs
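For anyone wanting to try the same, a minimal sketch of disabling persistence on iOS (FirestoreSettings and isPersistenceEnabled are the SDK's settings API; they must be set before the first Firestore call):

// Sketch: disable the on-device cache before using Firestore.
let settings = FirestoreSettings()
settings.isPersistenceEnabled = false
Firestore.firestore().settings = settings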
I performed some benchmarking to show this effect (with offline persistence enabled) on the same queried collection, but with different numbers of items returned using the .limit parameter (the benchmark chart is not reproduced here).
Now at 100 items returned (with offline persistence disabled), my query takes less than 1 second to complete.
My Firestore query code is below:
let db = Firestore.firestore()
self.date = Date()
let ref = db.collection("collection").whereField("Int", isEqualTo: SomeInt).order(by: "AnotherInt", descending: true).limit(to: 100)
ref.getDocuments() { (querySnapshot, err) in
    if let err = err {
        print("Error getting documents: \(err)")
    } else {
        for document in querySnapshot!.documents {
            let data = document.data()
            // Do things
        }
        print("QUERY DONE")
        let currentTime = Date()
        let components = Calendar.current.dateComponents([.second], from: self.date, to: currentTime)
        let seconds = components.second!
        print("Elapsed time for Firestore query -> \(seconds)s")
        // Benchmark result
    }
}
Well, from what I'm currently doing and researching, using a Nexus 5X in the emulator and a real Android phone (a Huawei P8):
Firestore and Cloud Storage both give me a headache of slow responses on the first document.get() and the first storage.getDownloadUrl(): each takes more than 60 seconds to respond. The slow responses only happen on the real Android phone, not in the emulator - another strange thing.
After the first encounter, the rest of the requests are smooth.
Here is the simple code where I hit the slow response:
var dbuserref = dbFireStore.collection('user').where('email', '==', email);
const querySnapshot = await dbuserref.get();
const document = querySnapshot.docs[0]; // first matching user document
var url = await defaultStorage.ref(document.data().image_path).getDownloadURL();
I also found a link researching the same issue:
https://reformatcode.com/code/android/firestore-document-get-performance

Get execution time of XQuery in eXist

How can I get a reliable measure of the execution time of an XQuery in eXist-db?
It seems like eXide takes into account even the rendering of the results in the browser - am I wrong?
eXide measures only the time required to execute the query, not the time to serialize the results or render them in the browser. (To confirm, see the eXide source where queries are executed and the duration is measured: https://github.com/wolfgangmm/eXide/blob/develop/controller.xql#L155-L193. The first timestamp is taken on line 159 and the second on lines 189-90.)
You can measure the duration of your own queries using this same technique:
xquery version "3.1";

let $start-time := util:system-time()
let $query-needing-measurement := (: insert query or function call here :)
let $end-time := util:system-time()
let $duration := $end-time - $start-time
let $seconds := $duration div xs:dayTimeDuration("PT1S")
return
    "Query completed in " || $seconds || "s."
Another common approach is to log this message or send it to the Monex app's console. For this, use util:log() (built-in) or console:log() (which requires Monex; if it is not already installed, it can be installed from the Dashboard > Package Manager).
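For instance, a one-line variant of the snippet above that writes the timing to the log instead of returning it (the "info" level is an arbitrary choice):

util:log("info", "Query completed in " || $seconds || "s.")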
Also, see the XQuery Wikibook's entry, https://en.wikibooks.org/wiki/XQuery/Timing_a_Query.

Sending XQuery requests repeatedly to a BaseX database slows down over time

I have a BaseX database with about 500,000 event nodes, and I use the following query:
for $b in //EventList/Event[Type = 'Measurement']
let $date as xs:dateTime := xs:dateTime($b/TimeStamp)
where $date ge xs:dateTime('"+startdate+"')
and $date le xs:dateTime('"+enddate+"')
return $b
The weird thing is that there is a big difference in the speed at which the data is returned.
Sometimes I receive the data in 5 seconds and sometimes in 75 seconds for exactly the same request.
An external application makes requests in a row. I observe that when the application starts, the data is returned fast; as the application keeps making requests, the data is returned more and more slowly.
Is there something wrong with connections that are not closed properly?
I use:
final BaseXClient session = new BaseXClient("localhost", 1984, "..", "..");
and in order to close the connection:
session.close();
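If the external application opens a session per request, wrapping each one in try/finally would rule out leaked sessions as the cause of the slowdown. A hedged sketch using the BaseXClient example class from the question (credentials left as the question's placeholders; the query is illustrative):

// Sketch: one session per request, always closed, even if the query fails.
final BaseXClient session = new BaseXClient("localhost", 1984, "..", "..");
try {
    // execute() sends a server command; the XQUERY command runs an expression.
    final String result = session.execute("XQUERY count(//EventList/Event)");
    System.out.println(result);
} finally {
    session.close(); // frees the server-side session
}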
