Can Duplicati preserve file dates and times?

This is a continuation of: Duplicati and Backup of live Pervasive Database Missing Data
We have attempted to restore a database data directory. What we expect the restore to contain is an exact mirror of what was backed up the night prior.
We are still not seeing all of the data in the restore directory. As far as we can tell, Duplicati seems to be using the modified date and/or file size of each file when determining which files to back up. Can someone please confirm this one way or the other?
Is there a way to have Duplicati back up only files whose metadata has changed, instead of relying on the file date and/or file size?
Also, on completion of every restore, a modal box reports "8500 Warnings", but we can't see all of them in the log.
What we can see in the log is:
MetadataWriteFailed
Failed to apply metadata to file
EDIT:
We uninstalled the Duplicati Beta and installed Canary in its place. What we see now is all of our data: all of the rows are being backed up, whereas with Beta they were not; we were missing rows of data.
One other thing that we noticed was that when the Beta version restores, all of the date/time values for every file are set to the date and time of the restore. With Canary, all of the date/time values are preserved.
Using Canary, we no longer see the warning "MetadataWriteFailed Failed to apply metadata to file"
Is this intended behavior between both versions?

Is this intended behavior between both versions?
The canary build has a lot of fixes. I do not recall what was changed in the metadata restore, but if the metadata restore fails (as you see in the beta), that would leave the files with the restore date.
There should not be any changes to which files are being backed up, so I am not sure what you mean by "we are missing rows of data".
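For completeness, here is a minimal sketch of forcing Duplicati to re-examine files regardless of their timestamps. The option name is taken from the Duplicati 2 advanced options and may differ between builds; the command name (duplicati-cli, or Duplicati.CommandLine.exe on Windows), storage URL and source path are placeholders, not the poster's actual setup:
duplicati-cli backup "file:///mnt/backup-target" "/data/pervasive" --disable-filetime-check=true
Duplicati normally skips files whose timestamps have not changed since the last run; disabling the timestamp check makes it open and examine every file on every run, which is slower but rules out changed files being skipped.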

Link to latest master build

I have a product for which I store software builds in Artifactory.
I name the software artifacts like this, so it is possible to see what a downloaded file contains: system-pcm33-base-v0.0.0.0_65_ga03970a.raucb
This also makes it possible to download directly via a URL, without using the JFrog CLI:
https://artifactory.deif.com/ui/native/amc-sw/pcm33/master/system-pcm33-base-v0.0.0.0_65_ga03970a.raucb
Now I would like a quick way to download the latest master build. To do this, my build creates a symlink:
system-pcm33-base.raucb -> system-pcm33-base-v0.0.0.0_65_ga03970a.raucb
I can also push this symlink to Artifactory, but it only works from the GUI and via the JFrog CLI. I do not get the symlink I had hoped for at:
https://artifactory.deif.com/ui/native/amc-sw/pcm33/master/system-pcm33-base.raucb
Is there a way to do this?
It is of course possible to upload the file twice under two different names, and thus update system-pcm33-base.raucb on every build. But that is a bit heavier.
Artifactory doesn't handle symbolic links the way the Linux file system does.
Based on the described use case, you can upload the file twice (as suggested) - first with the actual version, then as the latest. The important part is: when you upload for the second time, as the latest, use Checksum Deploy.
Artifactory has checksum-based storage, which means that every file is actually stored only once, even if it is uploaded to different target paths. In order to tell Artifactory to create/update a path without actually sending the binary, you send the checksum of the binary, and Artifactory will link the path to the binary with that checksum. This operation is quite cheap.
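To make the "upload twice" route concrete, here is a minimal sketch of a Checksum Deploy via the REST API. It assumes curl and an API key, and that the instance's artifact root is under /artifactory/ (the /ui/native/ URLs above are the browsing UI, so the exact base URL and the auth header are assumptions):
sha1=$(sha1sum system-pcm33-base-v0.0.0.0_65_ga03970a.raucb | awk '{print $1}')   # checksum of the already-uploaded binary
curl -X PUT -H "X-JFrog-Art-Api: <API_KEY>" -H "X-Checksum-Deploy: true" -H "X-Checksum-Sha1: $sha1" "https://artifactory.deif.com/artifactory/amc-sw/pcm33/master/system-pcm33-base.raucb"
No binary is transferred: Artifactory matches the checksum against its binary store and simply creates the new path pointing at the existing binary.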
Another possible approach is to define and use a custom Repository Layout. This way, in order to download the latest version of the file, you can use the [RELEASE] placeholder. The actual latest version will be automatically resolved from the version value extracted according to the layout.
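As a rough illustration of the layout-based approach (hedged: it assumes a custom layout has been defined for the repository so that Artifactory can parse the version out of the file name, and that the instance supports latest-version resolution; the exact path pattern depends entirely on the layout you configure):
curl -g -O "https://artifactory.deif.com/artifactory/amc-sw/pcm33/master/system-pcm33-base-[RELEASE].raucb"   # -g stops curl from globbing the [RELEASE] token
Artifactory then resolves [RELEASE] to the highest release version it has extracted via the layout.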
See also:
How to create simple versioning custom layout in Artifactory
How to find the latest artifact version based on layout?
Thanks to yinon explaining that checksum-based storage is used, I found this simple solution:
jf rt copy --flat amc-sw/pcm33/master/system-pcm33-base-v0.0.0.0_65_ga03970a.raucb amc-sw/pcm33/master/system-pcm33-base.raucb
This copies ALL the properties, but then a download query will return two files, so a property has to be changed:
jf rt sp amc-sw/pcm33/master/system-pcm33-base.raucb artifact=last_bsp
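With that property in place, downloading the latest build could then filter on it - a hedged sketch, assuming a recent JFrog CLI where the --props and --flat flags behave as currently documented; the target directory is a placeholder:
jf rt dl --props "artifact=last_bsp" --flat "amc-sw/pcm33/master/*.raucb" ./latest/
Only artifacts carrying artifact=last_bsp match the property filter, so the query no longer returns two files.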

How to check all artifacts in Artifactory if their files exist on disk?

We are running a local installation of Artifactory Pro which contains around 1M artifacts. Recently, we tried to migrate from the embedded Derby DB to Postgres and switched back to Derby because of errors occurring during the migration.
After that, users reported missing files, mostly maven-metadata.xml but also at least one pom.xml. The files are missing on the filesystem.
The only way I can think of is to query the Artifactory API for all files and try to download each one. Is there a better way to check whether all artifacts in Artifactory exist on the filesystem?
Welcome, Thomas! 👋🏻
Although that kind of error doesn't happen in normal operation, migrating a large number of artifacts back and forth can sometimes lead to such problems.
We have a user plugin that finds them, so check it out; it looks like it is exactly what you need.
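If the plugin route is not an option, the brute-force check suggested in the question can be sketched with the File List REST API plus a download attempt per artifact. This is a hedged sketch: the host, repository name and credentials are placeholders, it assumes curl and jq are available, and with around 1M artifacts it will be slow:
BASE="https://artifactory.example.com/artifactory"   # placeholder instance
REPO="libs-release-local"                            # placeholder repository
# list every file in the repository, then try to stream each one to /dev/null
curl -sf -u "$ART_USER:$ART_PASS" "$BASE/api/storage/$REPO?list&deep=1" \
  | jq -r '.files[].uri' \
  | while read -r uri; do
      curl -sf -u "$ART_USER:$ART_PASS" -o /dev/null "$BASE/$REPO$uri" \
        || echo "MISSING OR BROKEN: $REPO$uri"
    done
A failed download flags artifacts whose binaries are gone from the filestore even though their database entries still exist.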

Updating labels via XPO export

I have modified labels in my dev environment along with other code changes, but when I export this XPO and then import it in another environment, the labels in the target AOT are not updated.
If I open the XPO in Notepad, I can indeed see the newly modified labels. But at the time of import, the dialog does not seem to detect the changes.
All label IDs that I would want updated in the target are set to "Do not import" in the Details part of the import dialog.
If I have 10, 20, or 30 labels that changed, I would like to think AX would be smart enough to select "Use an existing label".
Any way to achieve this?
Thanks!
EDIT: Even when I manually select "Use an existing label" and set the ID of the label to use, the labels are not updated in the target :|
For AX 2009, instead of importing labels using XPOs, I would recommend the following:
Use a version control system such as TFS (especially when working with multiple devs).
Set up a build. (This could be an environment where you connect to your version control system and sync all code that was checked in, or a script that uses combinexpo to combine all XPOs from your version control system and imports them.)
You should now have a stable build environment -> copy the .ald and .aod files from there.
Stop the AOSes of your target environment, delete all .aoi, .ali, .alc and .alt files, and copy your .ald files from the build into the target environment. I would suggest you do the same for the .aod files to move code.
The reason you shouldn't be using XPOs for deployment is that it is prone to human error. XPOs themselves should work fine, but they can cause problems because importing XPOs is a manual action.
The advantage of using source control is that you have traceability (you know what code is being delivered) and that it opens the door to an automated build procedure (which will result in fewer errors than manually transferring XPOs). With this build you can set up a daily build for your test environment, which again will improve quality due to better testing. When all tests pass for a build, you have a tested build which you can then deliver to your customer using .aod files (no XPOs are used, so you are delivering the exact code you have tested).
Of course, it could be that setting up an automated build is overkill for you (I do think you should always use version control, though), and you can leave that part out; the important thing is that you deliver code and labels from dev to test and all the way to your customer using .aod and .ald files.
My tried-and-tested procedure for updating labels in AX 2009 is the following:
Copy the modified *.ald files (which contain the labels; copy only the ones you need, for example only EN-US + CS) from DEV to PROD. It doesn't matter whether the AOS service is running or not.
That is all! The rest is done automatically once no user has been connected to AX (and no background job has been running) for a minute or so. Of course, you can restart the AOS service to pick up the update sooner, but in my case that is not necessary.
Good luck!
I ended up copying the label file (.ALD) to the application directory of the target environment. I guess if I had added or deleted labels, files other than the .ALD files would need to be copied as well.
I have come across this issue a number of times. Please see the following blog entry in which I detail how to import labels as part of an XPO.
http://blog.m1cr0sux0r.com/2011/04/exporting-labels-with-xpos-in-dynamics.html

Fossil: "not a valid repository" - deleted repository

I'm trying Fossil for the first time, and messed it up within minutes. I created a repository, then apparently ran commands in the wrong folders, etc., and eventually deleted the test repository in order to restart. (Somewhere I had read that Fossil was "self contained", so I thought deleting a repository file would be OK. What is the correct way to delete a Fossil repository?)
Now, with almost every command I try (incl. "all rebuild"), I get the error "not a valid repository" with the deleted repository name.
What now?
According to this post:
The "not a valid repository" error only arises
when Fossil tries to measure the size of the repository file and sees that
either the file does not exist or else that the size of the file is less
than 1024 bytes. It does this by calling stat() on the file and looking at
the stat.st_size field.
It seems likely that you have a missing or truncated Fossil file. Make sure you've actually deleted the repository file, and that your filesystem has actually released the file handles. Fossil stores some repository information in ~/.fossil, so you may need to remove that too:
rm ~/.fossil
In egregious cases, you may want to reboot after deleting this file, just to be sure you're working with a clean slate.
If you're still having problems, try creating a new repository file in a different directory. For example:
cd /tmp
fossil init foo.fsl
fossil open foo.fsl
fossil close
If all that goes smoothly, you'll have to hunt down whatever remnants of the repository are lurking. As long as the file handles are closed, there's no reason you shouldn't be able to delete foo.fsl (or whatever) and call it good.
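As for the original question of how to delete a Fossil repository cleanly, a minimal sketch (paths and file names are placeholders) is to close any open checkout before removing the repository file:
cd /path/to/checkout     # the working directory the repository was opened into
fossil close             # detach the checkout from the repository
cd ..
rm foo.fsl               # the self-contained repository file can now be removed
fossil all list          # optional: check for stale entries recorded in ~/.fossil
If a deleted repository still shows up in fossil all list, that is the leftover bookkeeping in ~/.fossil which the answer above suggests removing.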
I have just experienced the exact same problem on Windows. I too seem to have found a solution. Here is what I did. I cannot guarantee that it is a universal solution or even a good one. In:
C:\Users\mywindowsusername\AppData\Local
There was a file named _fossil and a directory/folder named VirtualStore. I deleted both. This seems to have removed all traces of the repository. Note that the repository was still in the "open" state, as with your case.
Edit: After experimenting further, it would appear that VirtualStore is a temporary directory that will disappear after committing (a .fossil file will then appear inside the targeted directory).
My mistake was to create a repository at the root and clone: fossil proceeded to clone the entire C drive. Probably a common newbie mistake.

cleartool update error in Solaris Unix

I am working in a view created from the main code repository on a Solaris server. I have modified part of the code in my view, and now I wish to update my view to pick up the latest code from the repository. However, when I do
cleartool update .
from the current directory to update all the files in the current directory, some (not all) of the files do not get updated, and the message I get is
Keeping hijacked object <filePath> - base no longer known.
I am very sure that I have not modified the directory structure in my view, nor has it been modified in the server repository. One hack I discovered is to rename the files that could not be updated (so that files with the original filename no longer exist in my view) and then run the update command. But I do not want to do this one by one for all the files, and it also means I will have to perform the merge myself.
Has someone encountered this problem before? Any advice will be highly appreciated.
Thanks in advance.
You should try a "cleartool update -overwrite" (see cleartool update), as it should force the update of all files, hijacked or not.
But this message, according to the IBM technote swg1PK94061, is the result of:
When you rename a directory in a snapshot view, updating the view will cause files in the renamed directory to become hijacked.
Problem conclusion
Closing this APAR as No Plans To Fix (NPTF) because:
(a) due to the simple workaround of deleting the local copy of renamed directories, which will mitigate the snapshot view update problem, and
(b) because of this issue's low relative priority with higher impact defects
So simply delete (or move) the directory you have renamed, relaunch your update, and said directory (and its updated content) will be restored.
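As a concrete illustration of that workaround (hedged: renamed_dir and the view path are placeholders for whatever directory the update complains about, and any local edits should be set aside first):
cd /path/to/snapshot_view
mv renamed_dir renamed_dir.local        # set the local copy aside (or delete it if nothing local matters)
cleartool update .                      # the directory and its updated content are restored
diff -r renamed_dir.local renamed_dir   # recover any local changes, then remove the saved copy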
Thanks for your comment, VonC. I did check out the link you mentioned, but I did not find it very useful, as I had not renamed any directory. After spending the whole day yesterday, I figured out that I had previously modified some of the files without checking them out first. Because they were not checked out, they were read-only, so I had modified them forcefully, which caused those files to become hijacked. When I then tried to update my view to pick up all the modifications in the repository, it was unable to merge my modified files with those on the server: since the files were changed without being checked out, cleartool update believed they were unmodified when in fact they were. That was the fuss!! :)
