Word displays "Failed to Upload" message when saving using ITHit Server dll - webdav

Based on our testing, the issue ("Failed to Upload") seems to happen after the lock timer has expired. The document can be saved many times before the timer expires, but once the lock time is exceeded, if the user attempts to save a Word document, a yellow "Failed to Upload" bar is displayed.
We have set the lock timer to the current system time plus the lock time that Word is requesting (3600 seconds):
Timeout: Second-3600
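For reference, the expiry is just "now" plus the number of seconds in the Timeout header (RFC 4918). A minimal sketch of that arithmetic, in illustrative Java rather than our actual server code:

import java.time.Instant;

public class LockTimeout {
    // Returns the absolute expiration instant for a "Second-N" timeout value,
    // or null for "Infinite" (no expiration).
    public static Instant expiresAt(String timeoutHeader) {
        String value = timeoutHeader.trim();
        if (value.equalsIgnoreCase("Infinite")) {
            return null;
        }
        if (value.regionMatches(true, 0, "Second-", 0, 7)) {
            long seconds = Long.parseLong(value.substring(7));
            return Instant.now().plusSeconds(seconds);
        }
        throw new IllegalArgumentException("Unsupported Timeout value: " + value);
    }

    public static void main(String[] args) {
        System.out.println(expiresAt("Second-3600")); // the value Word sends
    }
}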
Word is trying to issue a LOCK and is getting a 500 error as the result:
LOCK http://t430-dev10/TMS_71/Edit_WebDAV/000%20TEST%20ADD.doc HTTP/1.1
And getting a response of HTTP/1.1 500 Internal server error:
System.NullReferenceException: Object reference not set to an instance of an object.
at dn.i(IHierarchyItem A_0, DavContextBase A_1)
at dn.ProcessRequest(DavContextBase context, IHierarchyItem item)
at ITHit.WebDAV.Server.DavEngine.Run(DavContextBase context)
X-AspNet-Version: 4.0.30319
X-Engine: IT Hit WebDAV Server .Net v3.7.1780.0
We have also tried v3.9.2111 with the same results.
Based on that, I'd like some advice on how to save the document after the lock timer expires. Also, can the lock be extended so that the save will upload the file? And/or can the server engine be fixed to allow the file upload?

Most likely this issue is caused by returning null from your DavContextBase.GetHierarchyItem implementation. The item returned from GetHierarchyItem for a LOCK request must also implement the ILock interface.
Also note that after the initial lock, MS Office refreshes the lock from time to time, sending a new lock timeout, so the lock should not expire while the MS Office application is open. The Engine calls ILock.RefreshLock when MS Office refreshes the lock.
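To illustrate the contract with hypothetical types (these are deliberately not the ITHit classes, just the shape of the requirement in generic Java-style code):

import java.util.HashMap;
import java.util.Map;

interface HierarchyItem { }

// Stand-in for a lock-capable item: the real server calls back into it
// when the client refreshes an existing lock.
interface LockableItem extends HierarchyItem {
    void refreshLock(String lockToken, long timeoutSeconds);
}

class DocFile implements LockableItem {
    long lockExpiresAtMillis;
    public void refreshLock(String lockToken, long timeoutSeconds) {
        // Extend the stored expiry instead of letting the lock lapse.
        lockExpiresAtMillis = System.currentTimeMillis() + timeoutSeconds * 1000;
    }
}

class Context {
    private final Map<String, HierarchyItem> items = new HashMap<>();

    // Returning null here for an existing document is what produces the
    // NullReferenceException in the LOCK handler shown above; for LOCK to
    // succeed, the returned item must also be lock-capable.
    HierarchyItem getHierarchyItem(String path) {
        return items.get(path);
    }
}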

Related

StorageException on resuming Firebase upload task through WorkManager

I'm running a Firebase upload task through WorkManager. On regular progress updates from the UploadTask, I save the session URI in my shared preferences.
When I switch the internet off, Firebase handles the scenario itself and resumes the upload task when the internet is turned back on.
But when I power off the phone while Firebase is uploading the file and then power it back on, WorkManager restarts and attempts to resume the previous Firebase upload task with the last saved session URI, and gives the following exception:
E/StorageException: StorageException has occurred.
An unknown error occurred, please check the HTTP result code and inner exception for server response.
Code: -13000 HttpResult: 200
E/StorageException: The server has terminated the upload session
java.io.IOException: The server has terminated the upload session
at com.google.firebase.storage.UploadTask.recoverStatus(com.google.firebase:firebase-storage##16.0.5:354)
at com.google.firebase.storage.UploadTask.run(com.google.firebase:firebase-storage##16.0.5:200)
at com.google.firebase.storage.StorageTask.lambda$getRunnable$7(com.google.firebase:firebase-storage##16.0.5:1106)
at com.google.firebase.storage.StorageTask$$Lambda$12.run(Unknown Source:2)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:798)
My session URI is saved whenever I get a progress callback from the Firebase upload task. Maybe that's the problem: I didn't get the latest one when the phone was powering off.
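Roughly, this is the flow I have, sketched below (the preference key and references are placeholders, and the fall-back to a fresh upload is just one way to handle the terminated session; putFile with an existing session URI is the standard resume overload):

import android.content.SharedPreferences;
import android.net.Uri;
import com.google.firebase.storage.StorageMetadata;
import com.google.firebase.storage.StorageReference;
import com.google.firebase.storage.UploadTask;
import java.io.File;

public class ResumableUpload {
    private static final String KEY_SESSION_URI = "session_uri"; // placeholder key

    // Start the upload and persist the session URI on every progress callback.
    public static void start(SharedPreferences prefs, StorageReference ref, File local) {
        UploadTask task = ref.putFile(Uri.fromFile(local));
        task.addOnProgressListener(snapshot -> {
            Uri sessionUri = snapshot.getUploadSessionUri();
            if (sessionUri != null) {
                prefs.edit().putString(KEY_SESSION_URI, sessionUri.toString()).apply();
            }
        });
    }

    // Called from the worker after a restart: resume if a session URI was saved,
    // and fall back to a fresh upload if the server has terminated the session.
    public static void resume(SharedPreferences prefs, StorageReference ref, File local) {
        String saved = prefs.getString(KEY_SESSION_URI, null);
        if (saved == null) {
            start(prefs, ref, local);
            return;
        }
        ref.putFile(Uri.fromFile(local), new StorageMetadata.Builder().build(), Uri.parse(saved))
           .addOnFailureListener(e -> {
               prefs.edit().remove(KEY_SESSION_URI).apply(); // drop the stale session
               start(prefs, ref, local);                     // restart from scratch
           });
    }
}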
I'd appreciate it if anyone can help identify the problem here.

Network error triggering the download report(report generation) action in server.R twice

I have a Shiny application deployed on Shiny Server Pro. The main aim of the application is to process input Excel files and produce a report in the form of a Word document, which has a couple of tables and around 15 graphs rendered using ggplot.
This application works perfectly for input Excel files having fewer than approx. 3500-4500 rows for around 10 metrics.
Now I am trying to process an Excel file with around 4000-4500 rows for around 20 metrics. While processing this file, during report generation (R Markdown processing), a network error is shown on the UI only. Despite the error on the UI, the report file does get generated in the back end, but it never gets downloaded. After this error, the report generation action is triggered again automatically, resulting in two generated reports, neither of which gets downloaded.
So, from these observations, I came to the conclusion that on getting the network error, the download report (report generation and downloading) action is triggered again by server.R.
Has anyone been through such a strange situation? I am looking for guidance regarding the problems here:
What can be the reason for getting the network error only sometimes?
What is triggering the download report action twice?
Is there any option to specify the maximum session timeout period?
I have found the answers to the above questions and have already answered them here.
Still, I would like to quickly answer the questions in the context explained above.
Reason for getting the network error: The user is presented with the network error only if the computation (in this case, report generation) does not complete within 45 seconds. This is because the http_keepalive_timeout parameter is not defined in the server configuration, and its default value is 45 seconds.
Why was the download report action getting triggered twice? Because the user's session with the server was getting terminated during the computations that happen after clicking the Download action button. There is a parameter called reconnect in the Shiny Server configuration which is enabled by default. When a user's connection to the server is interrupted, Shiny Server will offer them a dialog that allows them to reconnect to their existing Shiny session for 15 seconds. This means the server keeps the Shiny session active for an extra 15 seconds after a user disconnects in case they reconnect. After the 15 seconds, the user's session is reaped and they are notified and offered an opportunity to refresh the page. If this setting is false, the server will immediately reap the session of any user who is disconnected.
You can read about it in the shiny server documentation.
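For example, to disable that reconnect grace period so disconnected sessions are reaped immediately, a setting like the following should work at the top level of shiny-server.conf (assuming Shiny Server Pro):
reconnect false;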
Option to specify the maximum session timeout period: Yes. There is a parameter called http_keepalive_timeout which lets you specify the maximum session timeout period. You will need to add the http_keepalive_timeout parameter to shiny-server.conf at the top level, with the timeout period you want in seconds, as shown below.
http_keepalive_timeout 120;
Read more about http_keepalive_timeout here.

SQLite Exception when trying to open a connection with multiple processes

My scenario is simple: I have one process generating some data and putting it into the database (currently starting 5 seconds after the last run finished), and any number of processes opening a connection to read a single record for internal use (also currently starting 5 seconds after the last run finished). The database is located on the local drive and the OS is Windows Server 2012 R2.
With the reader processes I am occasionally receiving an error when connecting to the SQLite database: when the connection is opened, a [FireDAC][Phys][SQLite] ERROR: unable to close due to unfinalized statements or unfinished backups exception is thrown, and I'm stumped as to the cause and the meaning of the error message (in the context of opening a connection).
My connection is created like so (in both the reader and writer application):
connection := TFDConnection.Create(nil);
connection.Params.Add('DriverID=SQLite');
connection.Params.Add('Database=' + aDatabasePath);
connection.Params.Add('OpenMode=CreateUTF16');
connection.Params.Add('LockingMode=Normal');   // shared access rather than exclusive
connection.Params.Add('JournalMode=WAL');      // write-ahead log, allows concurrent readers
connection.Params.Add('Synchronous=Full');
connection.Params.Add('UpdateOptions.LockWait=True');
connection.Params.Add('BusyTimeout=30000');    // wait up to 30 s when the database is busy
connection.Params.Add('SQLiteAdvanced=temp_store=MEMORY');
connection.Params.Add('SQLiteAdvanced=page_size=4096');
connection.Params.Add('SQLiteAdvanced=auto_vacuum=FULL');
connection.Open();
After investigating the EFDDBEngineException that gets thrown, there is only a single error in the list of errors, and it contains ErrorCode=5, which the SQLite error codes and result codes documentation identifies as SQLITE_BUSY.
Investigating the call stack:
ntdll.dll KiUserExceptionDispatcher
FireDAC.Phys.SQLite TFDPhysSQLiteConnection.InternalDisconnect
FireDAC.Phys TFDPhysConnection.ConnectBase
ntdll.dll KiUserExceptionDispatcher
FireDAC.Phys.SQLiteWrapper TSQLiteStatement.PrepareBase
FireDAC.Phys.SQLiteWrapper TSQLiteStatement.Prepare
FireDAC.Phys.SQLiteWrapper TSQLiteStatement.Prepare
FireDAC.Phys.SQLite TFDPhysSQLiteConnection.InternalExecuteDirect
FireDAC.Phys.SQLite SetPragma
FireDAC.Phys.SQLite TFDPhysSQLiteConnection.InternalConnect
FireDAC.Phys TFDPhysConnection.ConnectBase
FireDAC.Phys TFDPhysConnection.DoConnect
FireDAC.Phys TFDPhysConnection.Open
FireDAC.Comp.Client TFDCustomConnection.DoInternalLogin
FireDAC.Comp.Client TFDCustomConnection.DoLogin
FireDAC.Comp.Client TFDCustomConnection.DoConnect
Data.DB TCustomConnection.SetConnected
FireDAC.Comp.Client TFDCustomConnection.SetConnected
Data.DB TCustomConnection.Open
It's obviously not liking something that happens in TSQLiteStatement.PrepareBase, which then results in TFDPhysConnection.ConnectBase attempting to clean up however far the creation of the connection has got, but where would the unfinalized statement be?
I Close() and Free() every TFDQuery and the connection when I'm finished.
What am I missing?
On a side note, because it is a problem for me: once the error occurs, the WAL and SHM files don't get collapsed into the database file. And if I try to run the reader application on my dev machine under the debugger, pointing at the database in the shared folder, it locks completely when trying to open a connection; ending all the other readers and the writer process doesn't unlock it, and then I need to restart my dev machine.

Error when running TcmReindex.exe

I am currently trying to get search working in my Tridion 2011 installation. I read in another article that I should run the TcmReIndex.exe tool in the Tridion/bin folder to re-index all my sites. I tried this and it failed with a message box giving the following details:
Unable to get list of Publication items.
Unable to Intialize TDSE object.
The wait operation timed out
Connection Timeout Expired. The timeout period elapsed while attempting to consume the pre-login handshake acknowledgement. This could be because the pre-login handshake failed or the server was unable to respond back in time. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=21054; handshake=35;
The wait operation timed out
A database error occurred while executing Stored Procedure "EDA_TRUSTEES_GETTRUSTEEETOKEN"
I have four fairly large publications (100 000+ items in total) which I am trying to index.
Any ideas?
Whenever I get "Unable to Intialize TDSE object." errors, I typically write a small test script using VBScript and try running it on the CMS server. While this does not directly solve the problem, it often gives some insight into the issue by logging information in the Event Viewer. Try creating a test.vbs file as follows and running it:
Set tdse = CreateObject("TDS.TDSE")
tdse.initialize()
msgbox(tdse.User.Description)
Set tdse = Nothing
If it throws any errors, please let me know; that may help us solve the problem. If it gives you a popup with your user description, then I am completely barking up the wrong tree.
I haven't come to anything conclusive, but it seems my issue may have been a temporary one, as it just started working. I did increase all the timeouts in Tridion MMC > Timeout Settings by a factor of 100, but I suspect this wasn't the issue; when it works, the connection is almost instant.
If anyone else has this issue:
Restart the computer the Content Manager is installed on, then try again.
Wait an hour or two, then try again.
Increase the timeouts, then try again.
I've run the process a few more times and it seems to be working correctly.

Transaction with FTP adapter

I want to pull a file from the server (no delete), parse the file in a pipeline component, and process it; if everything goes successfully, I want the adapter to delete the file.
I am thinking of enlisting the parsing in the pipeline context. This way, I am picturing that if the file cannot be parsed, it will not get to the message box and will therefore be deemed a failed transaction. The question is: will the adapter participate in this transaction? In other words, my goal is to instruct the adapter to delete the file from the server ONLY when the pipeline processes successfully (the transaction is committed); the file is left untouched on the server if the pipeline fails (the transaction is rolled back and no message is committed to the message box).
Is this achievable? Thanks in advance.
I think a little experiment is in order. BizTalk, as part of its nature, will not delete anything until it has been persisted to the message box. That being said, persistence might happen before pipeline execution. So, the receive adapter receives the file, persists it to the message box, and deletes the file. The message might subsequently fail in the pipeline. If this is the case, the message is in a bad format and will have to be resubmitted by the sender. If you want to keep the message, you'll have to pick it up with Failed Message Routing. You can then write it to a directory and implement a resubmit pattern. Or you can pick up the file via Failed Message Routing and put it back on the FTP server (this is sort of a compensation step).
On the other hand, if the pipeline fails and the message isn't deleted from the server... you're fine.
