Receive XDMP-LOCKED error when no locks exist - xquery

I have a function that I wrote to generate a sequential number. The function is as follows:
declare function generate-instrument-Id( $cnt as xs:int? )
as xs:int {
  let $count := if ( $cnt and $cnt > 0 ) then $cnt else 1
  let $url := '/private/instrumentId-Sequence.xml'
  (: this redirection is needed to write the id in another
     transaction context :)
  return xdmp:invoke-function( function() {
      let $id := fn:doc( $url )/instrument/@nextId
      let $_  := xdmp:node-replace( $id,
                   attribute nextId { $id + $count } )
      return $id
    }
  )
};
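For what it's worth, the separate-transaction behaviour that the comment relies on can also be spelled out explicitly through the second argument of xdmp:invoke-function. A hedged sketch (I believe these option names come from the xdmp:eval options namespace, but verify them against the documentation for your version):

xdmp:invoke-function(
  function() { (: read and bump the counter here :) },
  <options xmlns="xdmp:eval">
    <isolation>different-transaction</isolation>
    <transaction-mode>update-auto-commit</transaction-mode>
  </options>
)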
The function works fine from a qconsole window using the following test code:
let $res := util:generate-instrument-Id( 1 )
return fn:error( fn:QName( "test", xs:string( $res ) ) )
i.e. it executes in another transaction context and updates the document correctly. However, when I try to call the same function from a REST service, it returns the following error message:
XDMP-LOCKED: xdmp:node-replace(fn:doc("/private/instrumentId-Sequence.xml")/instrument/@nextId, attribute{fn:QName("","nextId")}{"1228"}) -- Document or Directory is locked
Please note that I removed every other piece of code from the service interface to isolate the problem, and I still receive the same error message.
So here are my questions:
Under what conditions is this error issued?
I am sure no locks are held on this document (or the directory it sits under) by any other process, so what might trigger such a false alarm?
Since it works from qconsole, I assume that if I replicate what qconsole does when executing programs I could solve this problem as well. Is there any documentation on how qconsole executes programs?
Thanks a lot
K.
PS: I use MarkLogic 9 on a Windows server

After some pain, I discovered the reason I received this error: there was indeed a lock placed on the directory "/", and this lock was not a transaction lock.
As per the documentation, it was a persistent lock acquired by the WebDAV server. I had actually suspected WebDAV might be involved and disabled the WebDAV service on the database, assuming that would release any locks it held; qconsole was able to write to the document either way.
It appears that the admin account has the right to ignore persistent locks created by the WebDAV server (which is why the function works in that context), and that disabling the WebDAV server does not release the persistent locks it has acquired.
So all I had to do to solve the problem was release the locks that were still hanging around after I disabled the WebDAV server.
Afterwards I re-enabled the WebDAV server and the function continues to work, which would mean the WebDAV server only acquires locks under certain conditions, something that is not documented.
I thought I should share this info to help others who might see the same problem.
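For anyone in the same spot, a minimal sketch of how leftover persistent locks can be inspected and released from qconsole, assuming an account with sufficient privileges (xdmp:document-locks and xdmp:lock-release are the built-ins involved; the "/" URI matches the directory that was locked in my case):

(: list the persistent locks currently held in the database :)
xdmp:document-locks()

(: release the persistent lock on the locked directory :)
xdmp:lock-release("/")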

Related

How to limit Wordpress Cron jobs to only run on production server?

Using the WP Crontrol plugin I schedule a process that sends out reminder emails to users. It is working well, but every time I need to test something using actual data, I am scared that the system will send out reminders that should not have been sent, or that have already been sent from the live system.
After restoring the backup from the production server, I quickly go to the SMTP plugin I am using and select the option that drops outgoing emails. That does the job, but there is still a risk that something gets sent before I manage to do it.
So I am considering my options. One is to wrap the reminder function in a check to see whether it is running on the production server, and only run the function when it is.
I could check using home_url(), and I know that would work because I use this approach for something else.
But I feel there is a better, more correct way, and I kindly ask for advice.
I usually use this approach in my projects to separate the code that runs according to the environment. First I create a constant named WP_ENVIRONMENT in wp-config.php and assign it the value development, and then I detect the execution environment using two helper functions:
function prefix_is_development() {
    return defined( "WP_ENVIRONMENT" )
        && "development" === strtolower( WP_ENVIRONMENT );
}

function prefix_is_production() {
    return ! defined( "WP_ENVIRONMENT" )
        || "production" === strtolower( WP_ENVIRONMENT );
}

Start Process from .NET Web Application with Impersonation

I am trying to call an .exe file from a web application, but I want the file to be run as the user that is impersonated via Windows authentication on the website.
Process process = new Process();
try
{
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.FileName = ConfigData.PVDToBudgetDBexePath;
    process.StartInfo.CreateNoWindow = false;
    process.Start();
    log.Info("Process started by " + WindowsIdentity.GetCurrent().Name + " with ID: " + process.Id);
    process.WaitForExit();
    log.Info("After WaitForExit Process ID: " + process.Id);
}
catch (Exception ex)
{
    log.Error("Error executing file with message " + ex.Message);
}
Both info log texts are logged correctly. There is no error occurring.
But the called program does not do anything: no logging, no writing to the database.
The user has execute rights on the file.
When I call the same code from the development server it works fine.
I use .NET 4.5 and IIS 7.
I found posts concerning this topic, but only for very old versions of .NET and IIS, and they could not help me.
What am I doing wrong?
Or how can I find out what is going wrong?
Many thanks,
EDIT:
To make clearer what I intend:
I have this (self-made) exe file that imports data from Excel sheets into a database. That takes some time. While doing this it logs its progress, with log4net, into the same database.
I want a UI (web application) where the user can trigger the import.
On this UI there is also an AJAX progress bar that shows the progress of the import, taken from the log table in the database.
I want at most one instance of this import process to run at the same time, so I have a function that checks whether the process is still running.
If so, it does not allow starting another process; if not, you can start it again.
private bool IsRunning(string name)
{
    // True if at least one process with the given name is currently running.
    return Process.GetProcessesByName(name).Length > 0;
}
I solved the problem now by starting the exe file via the Windows Task Scheduler.
path = file path to the exe file
arguments = arguments to start the exe file with
using System.Linq;
using Microsoft.Win32.TaskScheduler;

using (TaskService taskService = new TaskService())
{
    var taskDefinition = taskService.NewTask();
    taskDefinition.RegistrationInfo.Author = WindowsIdentity.GetCurrent().Name;
    taskDefinition.RegistrationInfo.Description = "Runs exe file";

    var action = new ExecAction(path, arguments);
    taskDefinition.Actions.Add(action);
    taskService.RootFolder.RegisterTaskDefinition("NameOfTask", taskDefinition);

    // get the registered task back:
    var task = taskService.RootFolder.GetTasks()
        .Where(a => a.Name == "NameOfTask").FirstOrDefault();
    try
    {
        task.Run();
    }
    catch (Exception ex)
    {
        log.Error("Error starting task in TaskScheduler with message: " + ex.Message);
    }
}
If by "development server" you mean the web server launched by Visual Studio, then this gives you a false test case, since that server is launched by Visual Studio and runs under your Windows account, while a standard configured IIS does not run under a "user" account but under a very limited system account (luckily!). Even if the user is logged in to your website with a domain account, the IIS process will not run under that account (that wouldn't make sense anyway). That is the reason why this code runs on your development server but not in IIS. Even if you get the exe to launch, it will run under IIS's system account, since you didn't supply any account, and that limited account will again run the exe differently than you expected.
You will have to use impersonation if you really want to go this way: you will have to launch that process while "impersonating" the user that is logged in on the website, assuming the account used to log in even makes sense at that point. E.g. if it is a domain account this might work, but if you use some other kind of authentication, like forms authentication, it has no meaning at OS level, and thus you cannot use those credentials for impersonation in IIS.
In my experience, and I have done this a few times, impersonation in IIS is always a bad thing and always creates issues; the same goes for launching command-line processes, by the way. Also, waiting for the process to end in your code is not really good practice: what if the process blocks? It will block the website.
Luckily, there is always a better alternative when you think about it. A possible solution here is to use message queuing: you just push a message to execute the task, and on the other end an application processes the messages, which might then use this command-line tool. That application can run under any user account you want, without you having to let IIS run under a different account. Later on you must of course come back for the result of the operation, but that can be done using a callback in the background of your website. Though this solution is a little bigger than what you are trying to do, it scores better on almost every front (responsiveness of your site, maintainability, scalability, ...); the only point where it is worse is the number of lines of code you will need, but that is seldom a valid factor to take into account.
If you wrote the application for Excel processing yourself, you can use a table in the DB as a kind of queue instead of a message bus. Your web application then just adds rows with all the necessary info for the process to that table, the status and progress being among them. Extend your processing application to monitor this table continuously; as soon as it detects a new record it can start the necessary task and update the row accordingly (progress, status and end result). This avoids the messaging subsystem, works equally well, and saves you from having to launch a process with impersonation, which was the evil thing to start with. A sketch of such a worker follows below.
You can turn the Excel processor into a Windows service so that it runs continuously and starts with the system, but if you don't want to, there are also tools that run any command-line application as a Windows service.
This technique is much easier than impersonation and allows your website to keep running in its protected environment.
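A minimal sketch of that table-as-queue worker, assuming a hypothetical ImportJobs table with Id, ExcelPath and Status columns (all names are placeholders, not the author's actual schema; error handling is trimmed for brevity):

using System;
using System.Data.SqlClient;
using System.Threading;

class ImportWorker
{
    // Hypothetical connection string; point it at the shared database.
    const string ConnStr = @"Server=.;Database=ImportDb;Integrated Security=true";

    static void Main()
    {
        while (true)
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();

                // Claim one queued job atomically so two workers never grab the same row.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) ImportJobs
                      SET Status = 'Running'
                      OUTPUT inserted.Id, inserted.ExcelPath
                      WHERE Status = 'Queued'", conn);

                int? jobId = null;
                string excelPath = null;
                using (var reader = claim.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        jobId = reader.GetInt32(0);
                        excelPath = reader.GetString(1);
                    }
                }

                if (jobId.HasValue)
                {
                    try
                    {
                        RunImport(excelPath, jobId.Value); // the existing Excel import, updating progress as it goes
                        SetStatus(conn, jobId.Value, "Done");
                    }
                    catch (Exception ex)
                    {
                        SetStatus(conn, jobId.Value, "Failed: " + ex.Message);
                    }
                }
            }
            Thread.Sleep(5000); // poll every few seconds
        }
    }

    static void SetStatus(SqlConnection conn, int id, string status)
    {
        var cmd = new SqlCommand("UPDATE ImportJobs SET Status = @s WHERE Id = @id", conn);
        cmd.Parameters.AddWithValue("@s", status);
        cmd.Parameters.AddWithValue("@id", id);
        cmd.ExecuteNonQuery();
    }

    static void RunImport(string excelPath, int jobId)
    {
        // Placeholder for the existing Excel-import logic.
    }
}

Run as a Windows service (or via one of the service-wrapper tools mentioned above), this worker can use any account with the rights it needs, and the website never launches a process at all.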

How to execute a query in eval.xqy file with app-server id

I need to run a query that imports modules from the pod.
If I run a simple query without module imports, passing the database id as below, it works:
let $queryParam := fn:concat("?query=",xdmp:url-encode($query),"&eval=",$dataBaseId,":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
If the query has import module statements, it throws an error (File Not Found).
So I tried getting the app-server id and passing that instead of the database id, as below:
let $queryParam := fn:concat("?query=",xdmp:url-encode($query),"&eval=",$serverId,":123")
let $url := fn:concat($hostcqport,"/eval.xqy",$queryParam)
let $response := xdmp:http-post($url, $options)[2]
How do I pass the server id so that the query executes against a particular app server?
Is this MarkLogic 8 or earlier? (I ask because rewrite options on 8 allow for dynamic switching of module databases before execution, among lots of other amazing goodies.) This may be what you want, because you can look at the query parameters at that point and build logic into the rewrite rules.
Otherwise, can you explain in more detail what you are trying to accomplish in the end? By the time your code ran, it was already executing in the context of a particular app server, so asking to execute against another app server by analysing the query parameters is a bit too late (because you are already using the app server).
[edit] The following is in response to the comments provided since. This is a messy response because the actual ticket and comments still do not paint a completely clear picture, but if you stitch them together, a problem statement does now exist to which I can respond.
The original author of the question confirmed via comments that they are "trying to hit an app server on a different node than the one that you actually posted to"
OK, this is the response to that clarification:
That is not possible. Your request is already being processed by a thread on the node that you hit with your HTTP request. MarkLogic is a cluster, but it does not share threads (or anything else, for that matter). The choices are:
a redirect to the proper node, or
possibly using the current node to make the request on your behalf;
but that ties up the first thread as well as a thread on the other node, adds HTTP communication overhead, and you need to have an app server listening for this purpose.
If this is a fire-and-forget type of situation, then you can hit any node and save the data/request in a document in the DB, using a URI naming convention that indicates which app server it is for, and, by way of insert triggers (with a URI prefix for their server id), pick up the request from the DB and process it.
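As an aside, if the underlying goal is only to evaluate a module-importing query against specific content and modules databases from the current node, the eval options cover that without involving another app server. A sketch (the database names and the imported module are placeholders):

xdmp:eval(
  'import module namespace util = "http://example.com/util" at "/lib/util.xqy";
   util:do-something()',
  (),
  <options xmlns="xdmp:eval">
    <database>{ xdmp:database("my-content-db") }</database>
    <modules>{ xdmp:database("my-modules-db") }</modules>
    <root>/</root>
  </options>
)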

Asynchronous requests not working using fsock - php - Nginx

I have a PHP script which does what the accepted answer described here suggests.
It doesn't work unless I add the following before fclose($fp):
while (!feof($fp)) {
    $httpResponse .= fgets($fp, 128);
}
Even a blank for loop would do the job instead of the above!
But what's the point? I wanted async calls :(
To add to my pain, the same code runs fine without the above snippet in an Apache-driven environment.
Does anybody know if Nginx or php-fpm has a problem with such requests?
What you're looking for can only be done on Linux-flavored systems with a PHP build that includes the Process Control functions (the PCNTL extension).
You'll find it's documentation here:
http://php.net/manual/en/book.pcntl.php
Specifically, what you want to do is "fork" a process. This creates an identical copy of the current PHP script's process, including all memory references, and then allows both scripts to continue executing simultaneously.
The "parent" script is aware that it is still the primary script, and the "child" script (or scripts; you can do this as many times as you want) is aware that it is a child. This allows you to choose a different action for the parent and the child once the child is spun off and turned into a daemon.
To do this, you'd use something along these lines:
$pid = pcntl_fork(); // store the process ID of the child when the script forks
if ($pid == -1) {
    die('could not fork'); // -1 return value means the process could not fork properly
} else if ($pid) {
    // a process ID will only be set in the parent script; this is the
    // main script that can output to the user's browser
} else {
    // this is the child script executing; any output from this script
    // will NOT reach the user's browser
}
That will enable a script to spin off a child process that can continue executing alongside (or long after) the parent script outputs its content and exits.
You should keep in mind that these functions must be compiled into your PHP build, and that the vast majority of hosting companies will not allow access to them on their servers. In order to use these functions, you generally will need a Virtual Private Server (VPS) or a dedicated server. Not even cloud hosting setups will usually offer these functions, as, if used incorrectly (or maliciously), they can easily bring a server to its knees.
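Since the question mentions php-fpm specifically, it is worth noting that PHP-FPM offers a simpler route for fire-and-forget work: fastcgi_finish_request() flushes the response to the client and then lets the script keep running. A minimal sketch (do_slow_task() is a hypothetical stand-in for the background work):

<?php
echo "Request accepted";

// Flush the response and close the connection to the client;
// this script keeps executing in the background (PHP-FPM only).
fastcgi_finish_request();

do_slow_task(); // hypothetical long-running work, done after the client is gone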

Error #3119: Database file is currently locked

I have developed two applications in Flex. One application constantly retrieves data from the internet, while the other can be opened and closed whenever you want; both apps use the same database. The problem is that, at random, I get an Error #3119: Database file is currently locked. Is it not possible to have two stable connections in an Adobe AIR environment? Does anyone have any solutions?
I think not. Not at once.
I know this is a really old question, but I ran into this issue myself and found a solution for those who may come across it. I hope this helps someone, because for me, all I could find on this topic was false information, like that given by Konrad. You can, in fact, have multiple open database connections. In my application, I have an asynchronous connection used for writing data to the database (INSERT, UPDATE, DELETE), and a synchronous, read-only connection for reading from it. On the asynchronous connection, for every execute, I always take an immediate lock by putting all statements in a transaction using
conn.begin(SQLTransactionLockType.IMMEDIATE);
This allows you to read from the database while writing to it with another connection. Where I ran into a problem was when trying to read from the database on one connection after committing the async statement but before it actually finished writing the data. So, even though the documentation for SQLTransactionLockType.IMMEDIATE states you can still do reads while it is locked, you actually cannot while another statement is actively in the process of writing data.
I got around this by writing my own execute for the synchronous connection. It simply tries to execute and, if it fails due to Error #3119, tries again until it succeeds. Between the calls, the data continues to be written to the database, and eventually it is no longer busy. Here is the code for that function:
public static function execute(stmt:SQLStatement):void {
    try {
        stmt.execute();
    } catch (e:SQLError) {
        if (e.errorID == 3119) {
            // database still busy being written to; retry until it succeeds
            execute(stmt);
        } else {
            trace(e.details + "\n" + e.getStackTrace());
            if (stmt.sqlConnection != null && stmt.sqlConnection.inTransaction) {
                stmt.sqlConnection.rollback();
            }
        }
    }
}
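Usage might look like this (a sketch; readConnection stands for the synchronous, read-only connection described above):

var stmt:SQLStatement = new SQLStatement();
stmt.sqlConnection = readConnection; // the synchronous, read-only connection
stmt.text = "SELECT * FROM items";
execute(stmt); // retries transparently while the writer holds the lock
var result:SQLResult = stmt.getResult();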
Another gotcha to watch out for with this error (if you're an idiot like me, anyway) is to check whether you have the SQLite db file open in a DB browser, which can lock the database and cause this error (and hours of googling and irritation).
