Duplicati Restore and zero byte files

When we use Duplicati to restore a backup set, it always errors out on zero-byte files.
We can restore these zero-byte files individually, but not as part of a backup set. That is to say, we can select zero-byte files from the list and those files will be restored, but if we check the root folder, Duplicati errors out. We are seeing over 300 errors in the red modal box, but cannot see any actual error messages.
It looks to me like Duplicati is throwing an exception and exiting, which would explain why we only see a couple of folders and files restored out of thousands.
Has anyone else experienced this behavior, and if so, were you able to work around this?
We are using Canary v2.0.4.18
Any help is greatly appreciated.

If you go to the log area, there should be more information about the error. If you see 300 errors, then clearly it does not just "quit", so something else must be happening. If you can get the real error message, maybe I can track it down.
I have not heard reports on this before, so maybe there is a simple explanation/fix/workaround.

Related

Kill turtle which causes runtime error

I'm curious whether there is a way of reporting which turtle causes a runtime error?
I have a model which includes many agents and will run fine for hours; however, sometimes a runtime error will occur. I have tried a few different things to fix it, but it always seems like an error eventually occurs, and I can't spare the time to track it down due to deadlines.
As the occurrence is so rare, the easiest solution is to just write ask turtle X [die] in the command center, after which I click GO and the problem is 'fixed'.
However, I was wondering if anyone knows of a way to automatically kill the turtle producing the error every time a runtime error occurs, to save me entering this manually.

Sudden degradation of network file-transfer rate

I've been using jcifs-1.3.17 for over a year now, transferring thousands of files from place A to place B without problems. The file transfers have always taken 15-30 ms. Yesterday (7/13/16) something changed and the transfers are now taking about 90 seconds - not milliseconds, but seconds. I can identify a 15-minute window in which the change happened. Operations insists that nothing in the network or on the servers changed. My code didn't change, nor did the character/size of the files.
Has anyone else experienced something like this? Any ideas on what I might look at to determine root cause?
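One thing that may help narrow down the root cause is measuring where the time goes. Below is a rough Clojure/JVM interop sketch against the jcifs 1.3.x classes SmbFile and SmbFileInputStream; the smb:// URL, local path, and function name are placeholders, not anything from the original post.

(require '[clojure.java.io :as io])
(import '[jcifs.smb SmbFile SmbFileInputStream])

(defn timed-fetch
  "Copies one file from an SMB share to local-path and prints how long the
   connect/session setup and the byte transfer each took. URL and path are
   placeholders."
  [smb-url local-path]
  (let [f  (SmbFile. smb-url)
        t0 (System/nanoTime)]
    (.connect f)                                   ; name resolution + SMB session setup
    (let [t1 (System/nanoTime)]
      (with-open [in  (SmbFileInputStream. f)
                  out (io/output-stream local-path)]
        (io/copy in out))
      (let [t2 (System/nanoTime)]
        (println "connect ms: " (/ (- t1 t0) 1e6))
        (println "transfer ms:" (/ (- t2 t1) 1e6))))))

If most of the extra time shows up in the connect phase, that tends to point at name resolution or authentication/session setup rather than the data path itself; a slow transfer phase points more toward throughput between the two servers.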

Is this the result of a partial image transfer?

I have code that generates thumbnails from JPEGs. It pulls an image from S3 and then generates the thumbs.
One in about every 3000 files ends up looking like this. It happens in batches. The high res looks like this and they're all resized down to low res. It does not fail on resize. I can go to my S3 bucket and see that the original file is indeed intact.
I had this code written in Ruby and ported it all over to Clojure hoping it would just fix my issue, but it's still happening.
What would result in a JPEG that looks like this?
I'm using standard image copying code, like so:
(with-open [in  (clojure.java.io/input-stream uri)
            out (clojure.java.io/output-stream file)]
  (clojure.java.io/copy in out))
Would there be any way to detect that the transfer didn't go well, in Clojure? ImageMagick? Any other command-line tool?
My guess is it is one of 2 possible issues (you know your code, so you can probably rule one out quickly):
You are running out of memory. If the whole batch of processing is happening at once, the first few are probably not being released until the whole process is completed.
You are running out of time. You may be reaching your maximum execution time for the script.
Implementing some logging as the batches are processed could tell you when the issue happens and what the overall state is at that moment.
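On the detection question itself, here is a minimal Clojure sketch (assuming the image is fetched over HTTP(S), e.g. via a pre-signed S3 URL; the function name is illustrative): it compares the number of bytes written against the server-reported Content-Length and checks that the file ends with the JPEG end-of-image marker FF D9, which truncated downloads usually lack.

(require '[clojure.java.io :as io])

(defn copy-and-verify
  "Copies uri to file and returns true only if the byte count matches the
   server-reported Content-Length and the file ends with the JPEG EOI marker.
   Sketch only; adapt to however the S3 stream is actually opened."
  [uri file]
  (let [conn     (.openConnection (java.net.URL. uri))
        expected (.getContentLengthLong conn)]
    (with-open [in  (.getInputStream conn)
                out (io/output-stream file)]
      (io/copy in out))
    (let [f      (io/file file)
          actual (.length f)
          tail   (byte-array 2)]
      (with-open [raf (java.io.RandomAccessFile. f "r")]
        (.seek raf (max 0 (- actual 2)))
        (.readFully raf tail))
      (and (or (neg? expected) (= expected actual))          ; -1 means length unknown
           (= (aget tail 0) (unchecked-byte 0xFF))           ; JPEG end-of-image is FF D9
           (= (aget tail 1) (unchecked-byte 0xD9))))))

From the command line, tools such as jpeginfo -c or ImageMagick's identify will generally complain about a premature end of file on a truncated JPEG, so either could be run over a batch as a sanity check.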

Can you sacrifice performance to get concurrency in Sqlite on a NFS?

I need to write a client/server app stored on a network file system. I am quite aware that this is a no-no, but was wondering if I could sacrifice performance (Hermes: "And this time I mean really slash.") to prevent data corruption.
I'm thinking something along the lines of the following (a rough code sketch appears after the list):
Create a separate file in the system every time a write is called (I'm willing to do it for every connection if necessary)
Store the file name as the current millisecond timestamp
Check to see if a file with that time or an earlier one exists
If one with the same timestamp exists, wait a random time between 0 and 10 ms, and try again
While my file has the earliest timestamp, do the work and then delete the lock file; otherwise wait 10 ms and try again
If a lock file persists for more than a minute, log it as an error and stop until a person determines that the data is not corrupted
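For what it's worth, a rough Clojure sketch of that scheme (directory, file naming, and timeouts are all placeholders, and nothing here is guaranteed to behave atomically over NFS, which is really the heart of the question):

(require '[clojure.java.io :as io])

(def lock-dir (io/file "/mnt/shared/locks"))       ; placeholder shared directory, assumed to exist
(def stale-ms 60000)                               ; "persists for more than a minute"

(defn- lock-files []
  (sort-by #(.getName %) (filter #(.isFile %) (.listFiles lock-dir))))

(defn with-write-lock
  "Creates a lock file named after the current millisecond timestamp, waits
   until it is the earliest lock in the directory, runs work, then deletes it.
   Sketch only: createNewFile's atomicity over NFS is exactly what is in doubt."
  [work]
  (loop []
    (let [my-file (io/file lock-dir (format "%013d.lock" (System/currentTimeMillis)))]
      (if-not (.createNewFile my-file)             ; same timestamp already taken
        (do (Thread/sleep (rand-int 10)) (recur))  ; wait a random time under 10 ms, retry
        (try
          (loop []
            (let [oldest (first (lock-files))]
              (cond
                (= (.getName oldest) (.getName my-file))
                (work)                             ; we hold the earliest lock: do the work

                (> (- (System/currentTimeMillis)
                      (Long/parseLong (re-find #"\d+" (.getName oldest))))
                   stale-ms)
                (throw (ex-info "lock older than a minute; stop and inspect the data"
                                {:lock (.getName oldest)}))

                :else
                (do (Thread/sleep 10) (recur)))))
          (finally
            (.delete my-file)))))))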
The problem I see is trying to maintain the previous state if something locks up, or choosing to ignore it if the state change was actually successful.
Is there a better way of doing this that doesn't involve not doing it this way? Or has anyone written one of these with far fewer problems than the SQLite FAQ warns about? Will these mitigations even factor into preventing data corruption?
A couple of notes:
This must exist on an NFS share; the why is not important, because it is not my decision to make (it doesn't look like I was clear enough on that point).
The number of readers/writers on the system will be between 5 and 10, all reading and writing at the same time, but rarely on the same record.
There will only be clients and a shared memory space; there is no way to put a server on there, or to use a server-based RDBMS. If there were, obviously I would do it in a New York minute.
The amount of data will initially start off at about 70 MB (plain text, uncompressed), and it will grow continuously from there at a reasonable, but not tremendous, rate.
I will accept an answer of "No, you can't gain reasonably guaranteed concurrency on an NFS by sacrificing performance" if it contains a detailed and reasonable explanation of why.
Yes, there is a better way. Don't use NFS to do this.
If you are willing to create a new file every time something changes, I expect that you have a small amount of data and/or very infrequent changes. If the data is small, why use SQLite at all? Why not just have files with node names and timestamps?
I think it would help if you described the real problem you are trying to solve a bit more. For example if you have many readers and one writer, there are other approaches.
What do you mean by "concurrency"? Do you actually mean "multiple readers/multiple writers", or can you get by with "multiple readers/one writer with limited latency"?

How do I interpret the results of the ANTS Memory Profiler?

I've been profiling my ASP.NET application with ANTS Memory Profiler 6, and have seen indications of memory leaks. However, I don't know whether or not the growths I'm seeing are supposed to be there or not (for instance, System.String grows a lot each snapshot. Should it?)
I don't understand the whole memory process, so I don't know if I am interpreting the results correctly or not. How do I interpret the results of the ANTS Memory Profiler?
I have kind of been able to answer my own question while solving my memory issue. Although String may be at the top of the list most of the time, I shouldn't see the instance count just keep growing and growing. It turns out that in my application an object I thought was being freed actually wasn't, and it held a reference to some XML files, which were of course held in Strings.
My test was to go to the home page of the web site -> click to another page -> back to the home page. Doing this should mean no new references are created (the growth in instance count should remain 0).
Hope this can help someone else.
