Data Version Control (dvc) cannot push to remote storage because querying cache never finishes

I am setting up remote storage with DVC using webdavs.
I can connect to the remote storage from Finder.
I added the new remote and I can see it when I run dvc remote list.
But when I try to push data, I am prompted for a password while the progress sits at 0% Querying cache.
It stays at 0% forever, and when I enter the password it ends with the following error:
ERROR: unexpected error - No connection with LINK_OF_REMOTE_STORAGE
The only things I can think of are how to check whether I can connect to the server from DVC at all, and why "Querying cache" never ends (maybe it never even starts).
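A rough sketch of commands one might use to verify the remote configuration and exercise the connection from the command line (the remote name myremote is a placeholder for whatever dvc remote list reports):

dvc remote list
dvc remote modify --local myremote user YOUR_USERNAME
dvc remote modify --local myremote password YOUR_PASSWORD
dvc status -c -v    # asks the remote which cache files are missing, so it exercises the connection
dvc push -v         # verbose output shows the request that actually fails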

airflow logs: break long logs into multiple smaller files

I see that airflow logs are stored at
base_log_folder/dag_id/task_id/date_time/1.log
i.e.:
base_log_folder/dag_id={dag_id}/run_id={run_id}/task_id={task_id}/attempt={try_number}.log
Sometimes my logs are huge, and I know it's not a good idea to check them from the web UI because Chrome can't handle logs of that size.
I have access to the server and can check the logs.
So how can I break the logs into smaller files, i.e.:
{try_number}_1.log
{try_number}_2.log
{try_number}_3.log
...
I also noted that the log file {try_number}.log is only created when the task is completed.
While the task is running I can check the logs in the web UI, but I don't see any file in the corresponding log folder.
So I need two things for logging on the server side:
break large log files into smaller files
see the log file live while the task is running, not only after the task is completed
In Airflow 2.4.0 there is an option to view the full log or only the first fragment, so huge logs are not loaded automatically.
Starting with Airflow 2.5.0, the web UI also auto-tails logs (PR).
Airflow does show live logs. If you set up, for example, a Sensor task that pokes a resource, you will see the poking attempts in the log while the task is running. It's important to note that there are local logs and remote logs (docs):
In the Airflow UI, remote logs take precedence over local logs when remote logging is enabled. If remote logs can not be found or accessed, local logs will be displayed. Note that logs are only sent to remote storage once a task is complete (including failure). In other words, remote logs for running tasks are unavailable (but local logs are available).
Huge logs are often a sign of not using log levels. If you have entries that are only relevant for debugging, log them at DEBUG rather than INFO; that way you can control the size of the log displayed in the UI with the AIRFLOW__LOGGING__LOGGING_LEVEL variable.
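For example, a minimal sketch of setting that threshold (the environment variable maps to the [logging] section of airflow.cfg):

# environment variable form
export AIRFLOW__LOGGING__LOGGING_LEVEL=INFO

# equivalent airflow.cfg form
[logging]
logging_level = INFO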

StorageException on resuming Firebase upload task through WorkManager

I'm running a Firebase UploadTask through WorkManager. On regular progress updates from the UploadTask, I save the session URI in my SharedPreferences.
When I switch the internet off, Firebase handles the scenario itself and resumes the upload task when the internet is turned back on.
But when I power off the phone while Firebase is uploading the file and then power it back on, WorkManager restarts and attempts to resume the previous Firebase upload task with the last saved session URI, and it gives the following exception:
E/StorageException: StorageException has occurred.
An unknown error occurred, please check the HTTP result code and inner exception for server response.
Code: -13000 HttpResult: 200
E/StorageException: The server has terminated the upload session
java.io.IOException: The server has terminated the upload session
at com.google.firebase.storage.UploadTask.recoverStatus(com.google.firebase:firebase-storage##16.0.5:354)
at com.google.firebase.storage.UploadTask.run(com.google.firebase:firebase-storage##16.0.5:200)
at com.google.firebase.storage.StorageTask.lambda$getRunnable$7(com.google.firebase:firebase-storage##16.0.5:1106)
at com.google.firebase.storage.StorageTask$$Lambda$12.run(Unknown Source:2)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:798)
My session URI is saved whenever I get a progress callback from the Firebase UploadTask. Maybe that's the problem: I didn't get the latest one before the phone powered off.
I would appreciate it if anyone could help identify the problem here.
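For reference, a minimal sketch of the save-and-resume pattern described above; the class, method and preference key names are hypothetical, and only the Firebase Storage calls (the putFile overload taking an existing session URI, and getUploadSessionUri) are from the SDK:

import android.content.SharedPreferences;
import android.net.Uri;
import com.google.firebase.storage.StorageMetadata;
import com.google.firebase.storage.StorageReference;
import com.google.firebase.storage.UploadTask;

class ResumableUpload {
    // Hypothetical preference key used to persist the last known session URI.
    private static final String KEY_SESSION_URI = "upload_session_uri";

    static UploadTask start(StorageReference ref, Uri fileUri, SharedPreferences prefs) {
        String saved = prefs.getString(KEY_SESSION_URI, null);
        Uri sessionUri = saved != null ? Uri.parse(saved) : null;

        // Resume against the previously saved session URI if one exists,
        // otherwise start a fresh upload session.
        UploadTask task = ref.putFile(fileUri, new StorageMetadata.Builder().build(), sessionUri);

        // Persist the newest session URI on every progress callback so a
        // restarted worker can resume from the latest known state.
        task.addOnProgressListener(snapshot -> {
            Uri uri = snapshot.getUploadSessionUri();
            if (uri != null) {
                prefs.edit().putString(KEY_SESSION_URI, uri.toString()).apply();
            }
        });
        return task;
    }
}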

How do I handle Realm Object Server dying after accidentally performing a migration on iOS (Bad changeset error)?

Made the mistake of performing a destructive migration on a synchronized Realm, which I just now learned I shouldn't have done, per the docs' statement "However, if the migration makes a destructive change, the Realm will stop syncing with ROS, producing a Bad changeset received error". The server won't restart our Realm Object Server and the logs say realm-object-server dead but pid file exists. We cannot even access ROS on the web at this point.
Is there a way around this without re-installing our realm instance? Also, if the magnitude of this migration is so severe, is there not a way to give a warning to the developer?
Code Sample:
let config = Realm.Configuration(
    syncConfiguration: SyncConfiguration(user: curUser, realmURL: RealmURL.userObjects),
    migrationBlock: { (migration, schema) in
        // todo
    })
When you perform a schema change, this results in an operation that is appended to the operation log maintained by a Realm. This first occurs on the client copy of a synchronized Realm and is then synced to Realm Object Server. If the operation is a destructive change, the server should simply reject the operation and return an error. The result is that the server's operation log is not affected but the client is now in a state where it can't continue to sync with the server. In this situation, the solution is to reset the client, which is easiest in development by deleting and reinstalling the app.
Your situation, however, sounds like a different problem. The fact that the server is unresponsive implies something else went wrong. You could try removing and reinstalling the server since this does not delete the data or configuration.

SQLite Exception when trying to open a connection with multiple processes

My scenario is simple: I have one process generating some data and putting it into the database (currently 5 seconds after the last run is finished), and then there are any number of processes opening a connection to read a single record for internal use (also currently 5 seconds after the last run is finished). The database is located on the local drive and the OS is Windows Server 2012 R2.
With the reader processes I occasionally receive an error when connecting to the SQLite database: when the connection is opened, a [FireDAC][Phys][SQLite] ERROR: unable to close due to unfinalized statements or unfinished backups exception is thrown, and I'm stumped about the cause and the meaning of the error message (in the context of opening a connection).
My connection is created like so (in both the reader and writer application):
connection := TFDConnection.Create(nil);
connection.Params.Add('DriverID=SQLite');
connection.Params.Add('Database=' + aDatabasePath);
connection.Params.Add('OpenMode=CreateUTF16');
// Normal locking so several connections can share the file (as opposed to Exclusive).
connection.Params.Add('LockingMode=Normal');
// Write-ahead logging, so readers are not blocked by the single writer.
connection.Params.Add('JournalMode=WAL');
connection.Params.Add('Synchronous=Full');
connection.Params.Add('UpdateOptions.LockWait=True');
// Wait up to 30 s on SQLITE_BUSY instead of failing immediately.
connection.Params.Add('BusyTimeout=30000');
connection.Params.Add('SQLiteAdvanced=temp_store=MEMORY');
connection.Params.Add('SQLiteAdvanced=page_size=4096');
connection.Params.Add('SQLiteAdvanced=auto_vacuum=FULL');
connection.Open();
After investigating the EFDDBEngineException that gets thrown, there is only a single error in the list of errors, and it contains ErrorCode=5, which the SQLite error codes and result codes documentation says is the SQLITE_BUSY error.
Investigating the call stack:
ntdll.dll KiUserExceptionDispatcher
FireDAC.Phys.SQLite TFDPhysSQLiteConnection.InternalDisconnect
FireDAC.Phys TFDPhysConnection.ConnectBase
ntdll.dll KiUserExceptionDispatcher
FireDAC.Phys.SQLiteWrapper TSQLiteStatement.PrepareBase
FireDAC.Phys.SQLiteWrapper TSQLiteStatement.Prepare
FireDAC.Phys.SQLiteWrapper TSQLiteStatement.Prepare
FireDAC.Phys.SQLite TFDPhysSQLiteConnection.InternalExecuteDirect
FireDAC.Phys.SQLite SetPragma
FireDAC.Phys.SQLite TFDPhysSQLiteConnection.InternalConnect
FireDAC.Phys TFDPhysConnection.ConnectBase
FireDAC.Phys TFDPhysConnection.DoConnect
FireDAC.Phys TFDPhysConnection.Open
FireDAC.Comp.Client TFDCustomConnection.DoInternalLogin
FireDAC.Comp.Client TFDCustomConnection.DoLogin
FireDAC.Comp.Client TFDCustomConnection.DoConnect
Data.DB TCustomConnection.SetConnected
FireDAC.Comp.Client TFDCustomConnection.SetConnected
Data.DB TCustomConnection.Open
Something is clearly going wrong in TSQLiteStatement.PrepareBase, which then results in TFDPhysConnection.ConnectBase attempting to clean up however far the creation of the connection got, but where would the unfinalized statement be?
I Close() and Free() every TFDQuery and the connection when I'm finished.
What am I missing?
On a side note (because it is a problem for me): once the error occurs, the WAL and SHM files don't get collapsed into the database file. And if I try to run the reader application on my dev machine under the debugger, pointing at the database in the shared folder, it locks up completely when trying to open a connection; ending all the other readers and the writer process doesn't unlock it, and then I need to restart my dev machine.

Spring integration sftp move remote file issue

I'm using the inbound-channel-adapter from Spring Integration to retrieve files over SFTP from a remote server. Everything works fine.
But I have an additional requirement: after a file is received on the local side, that file needs to be moved to a "send" directory on the remote server.
The "SFTP Outbound Gateway" has the appropriate method for that move action, but my problem is when to call it.
Situation: 10 files on remote server, 0 on local server
When I start my application it will receive all 10 files from the remote server and write them to my local file system. Perfect.
Situation: 1 file on remote server, 10 on local server
In this situation the remote file is received, but for every file on the local file system the receive method of the QueueChannel is also called.
Example log for one file (file1.zip):
18:12:52.118 [task-scheduler-1] INFO o.s.i.file.FileReadingMessageSource - Created message: [[Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=...]
18:12:52.119 [task-scheduler-1] DEBUG o.s.i.e.SourcePollingChannelAdapter - Poll resulted in Message: [Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=...]
18:12:52.119 [task-scheduler-1] DEBUG o.s.integration.channel.QueueChannel - preSend on channel 'fromChannel', message: [Payload File content=C:\Downloads\sftpTest\file1.zip][...]
18:12:52.119 [task-scheduler-1] DEBUG o.s.integration.channel.QueueChannel - postSend (sent=true) on channel 'fromChannel', message: [Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=...]
18:12:52.119 [main] DEBUG o.s.integration.channel.QueueChannel - postReceive on channel 'fromChannel', message: [Payload File content=C:\Downloads\sftpTest\file1.zip][Headers=......]
So even when the file is not physically retrieved from the remote server, the channel.receive() method will still receive a message with that file as payload.
This confuses me, because I can't determine from the message if the file was already on the local file system or was just retrieved from the remote server.
I experimented with a custom org.springframework.messaging.support.ChannelInterceptorAdapter, a FileFilter and a ServiceActivator, but the problem still remains.
My application will process high volumes, so re-uploading the received file to the required directory on the remote server is not an option. Simply attempting a remote move for every message that is received locally is not an option either, since it would clutter the log files with exceptions about not being able to move the file; that way a real error situation would go undetected.
One solution might be a hook in the copyFileToLocalDirectory method of org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.
There is a check there for whether the remote file should be deleted, and that code path is only reached for files that were actually transferred from the remote server. My attempts to override this method and add my move behaviour did not succeed, since Spring has already instantiated the classes that handle this.
So what is the best way to achieve this? I know the problem will probably be located between my keyboard and my chair, but I've run out of options and any help is highly appreciated.
Thanks a lot,
Frank
You would probably be better off using MGET and an outbound gateway to retrieve the files, instead of using the inbound adapter which, as you say, is two-stage: synchronize, then emit message(s) for the file(s) in the local directory (unless you use a persistent file list filter, in which case you'll only see "new" files).
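A minimal sketch of that approach in XML (the channel names, directories, expression and session factory bean are hypothetical):

<int-sftp:outbound-gateway id="sftpMGet"
    session-factory="sftpSessionFactory"
    request-channel="toGetChannel"
    reply-channel="fromChannel"
    command="mget"
    expression="'remoteDir/*'"
    local-directory="C:/Downloads/sftpTest"/>

The reply message payload is the list of files actually retrieved by that call, so only those files need the follow-up remote move (for example, via a second outbound gateway with command="mv").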
