I have been searching for a long time for a way to properly back up my data and keep everything up to date.
Since I haven't found anything that matches what I'm looking for, I'm asking the question here.
I have seen similar posts, but nothing that solves the problem.
I have multiple drives: USB keys and hard drives. Two of the hard drives have exactly the same content.
A lot of the files on these drives are identical, such as a KeePass file, administrative files, software, and setup files.
I would like to be able to automatically synchronize certain folders on these drives without manually starting the sync every time I plug in a drive.
I imagine software that keeps a list of the folders/drives being synchronized, with optional settings for each synchronization entry (such as keeping x versions of each file, or keeping only the newest),
and that automatically manages the synchronization when you plug in or unplug the known drives.
Above all, if the software were portable, that would be really interesting.
There is FreeFileSync or AutoVer, but they are far from what I'm looking for (besides having a freemium license, which I don't want).
So does such software exist, or do I just have to open a GitHub repository?
We need to be able to supply big files to our users. The files can easily grow to 2 or 3 GB. These files are not movies or anything similar; they are software needed to control and develop robots in an educational capacity.
We have some conflict in our project group about how we should approach this challenge. First of all, BitTorrent is not a solution for us (despite the benefits it could bring us). The files will be available through HTTP (not FTP) and via a file stream, so we can control who gets access to the files.
As a former pirate in the early days of the internet, I have often struggled with corrupt files and used file hashes and file sets to minimize the amount of re-downloading required. I advocate a small application that downloads and verifies a file set, then extracts the big install file once it is completely downloaded and verified.
My colleagues don't think this is necessary and point to the TCP/IP protocol's inherent capabilities for avoiding corrupt downloads. They also mention that Microsoft has moved away from a download manager for their MSDN files.
Are corrupt downloads still a widespread issue, or will the time we spend creating a solution to this problem be wasted compared to the number of people who will actually be affected by it?
If a download manager is the way to go, what approach would you suggest we take?
-edit-
Just to clarify: is downloading 3 GB of data in one chunk over HTTP a problem, or should we make our own EXE that downloads the big file in smaller chunks (and verifies them)?
You do not need to go for your own download manager. There are some really smart approaches you can use instead:
1. Split the files into smaller chunks, say 100 MB each. Then, even if a download is corrupted, the user only ends up re-downloading that particular chunk.
2. Most web servers are capable of understanding and serving HTTP range headers. You can recommend that your users use a download manager or browser add-on that takes advantage of this capability (see the sketch after this list); if your users are on Unix/Linux systems, wget is such a utility.
3. It's true that TCP/IP has mechanisms for preventing corruption, but it basically assumes that the network stays up and accessible. #2 above is one possible workaround for the case where the network goes down completely in the middle of a download.
4. And finally, it is always good to provide a file hash to your users. This is not only to verify the download but also to help ensure the security of the software you are distributing.
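If, per your edit, you do decide to ship a small downloader EXE, the core of it is not much code. Here is a rough sketch in Java, using only HttpURLConnection and SHA-256 from the JDK (the URL and file names are placeholders, not anything from your setup): it resumes a partial chunk with a Range header, then hashes the result so it can be checked against the hash you publish.

    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.security.MessageDigest;

    public class ChunkDownloader {

        // Downloads 'url' into 'target', resuming from wherever a previous
        // attempt left off by sending an HTTP Range header.
        static void download(String url, File target) throws Exception {
            long have = target.exists() ? target.length() : 0;
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            if (have > 0) {
                conn.setRequestProperty("Range", "bytes=" + have + "-");
            }
            // 206 = Partial Content: the server honored the range request,
            // so append. Otherwise start over from byte zero.
            boolean resuming = (conn.getResponseCode() == 206);
            try (InputStream in = conn.getInputStream();
                 OutputStream out = new FileOutputStream(target, resuming)) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                }
            }
        }

        // Computes the SHA-256 hash of a file as a hex string, to compare
        // against the hash published alongside the download.
        static String sha256(File f) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = new FileInputStream(f)) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    md.update(buf, 0, n);
                }
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            File target = new File("installer.part1");  // placeholder name
            download("https://example.com/files/installer.part1", target);
            System.out.println("SHA-256: " + sha256(target));
        }
    }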
HTH
I'm going to create a Java program that allows "locking" a USB drive by making its files accessible only with a password. USB Safeguard is similar software that does this.
Here is what I am thinking of doing:
Store all the files in a single archive on the USB drive.
Encrypt the archive using AES or Blowfish (sketched below).
Hide the archive.
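For the encryption step, this is roughly what I have in mind, using only javax.crypto from the JDK. It's just a sketch; the PBKDF2 parameters, class name, and file handling are my current assumptions, not final code:

    import java.io.*;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.CipherOutputStream;
    import javax.crypto.SecretKey;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public class ArchiveEncryptor {

        // Encrypts the archive with AES-128 in CBC mode. The salt and IV are
        // written at the start of the output file so decryption can read them back.
        static void encrypt(File archive, File out, char[] password) throws Exception {
            SecureRandom rnd = new SecureRandom();
            byte[] salt = new byte[16];
            byte[] iv = new byte[16];
            rnd.nextBytes(salt);
            rnd.nextBytes(iv);

            // Derive the AES key from the user's password with PBKDF2.
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            byte[] keyBytes = factory.generateSecret(
                    new PBEKeySpec(password, salt, 65536, 128)).getEncoded();
            SecretKey key = new SecretKeySpec(keyBytes, "AES");

            Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

            try (InputStream in = new FileInputStream(archive);
                 OutputStream raw = new FileOutputStream(out)) {
                raw.write(salt);
                raw.write(iv);
                try (OutputStream enc = new CipherOutputStream(raw, cipher)) {
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        enc.write(buf, 0, n);
                    }
                }
            }
        }
    }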
The problem is, how can I "unlock" the USB? What approach can I take here? Here is what I have thought of:
Ramdisk: It is very hard, if not impossible, to load a RAM disk from an encrypted archive. While it may be plausible in C++, I think it would be much harder in Java and might involve messing with the system classes, which would kill the compatibility of the software and defeat the whole purpose of using Java.
Loading the unencrypted archive onto the USB drive - Nobody likes waiting 10 minutes just to view a file on a USB drive. Copying all the files might take some time. Also, what about free space on the drive?
Loading the unencrypted archive onto the hard drive - While very insecure and error-prone, this looks like the only feasible way to get it done.
Creating a custom file browser that lets the user browse the archive - Do you use WinRAR to browse your files? Would you like to? No. A custom file browser would take a lot of time to create and, again, is an error-prone and user-unfriendly approach.
I can't think of any other way of doing this. Can anyone think of a better way? Note that this is going to be free and open-source software.
TrueCrypt is free, open-source software for storing encrypted files on a storage device (e.g., a USB drive). It runs on Windows, Linux, and Mac OS X. TrueCrypt even supports hidden volumes. I would start with their source code and proceed from there.
I would like some idea of how rsync compares to SyncML/Funambol, especially when it comes to bandwidth, syncing over an unstable network, and multiple clients to one server.
This is to sync several mobile devices with a directory structure of growing text files. (So we essentially want as much as possible on the server; inconsistent files are not really a problem, and we know where changes originate.)
So far, it seems Funambol doesn't compress, doesn't handle partial updates, and has difficulty handling interruptions in a file transfer.
I know rsync doesn't go through the server, but I don't quite see how that is a disadvantage.
Olav,
rsync can:
Compress the data (as you said), thus getting better performance over the network.
Synchronize only the changed data within each file, thus, once again, saving time.
Be run by multiple users at the same time; that's very basic backup-software behavior.
And, one of my favorites: work over a secure shell (an example invocation is sketched below).
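For example, here is a typical invocation combining compression and ssh, wrapped in Java in case you want to drive it from code on the devices. The host, user, and paths are placeholders, and this assumes rsync and ssh are installed and on the PATH:

    import java.io.IOException;

    public class RsyncRunner {
        public static void main(String[] args) throws IOException, InterruptedException {
            // -a: archive mode (recursive, preserves times/permissions)
            // -z: compress data during transfer
            // -e ssh: tunnel the transfer over a secure shell
            ProcessBuilder pb = new ProcessBuilder(
                    "rsync", "-az", "-e", "ssh",
                    "/home/me/textfiles/",                      // local source (placeholder)
                    "backup@server.example.com:/srv/sync/me/"); // remote target (placeholder)
            pb.inheritIO();  // show rsync's own output and errors in our console
            int exit = pb.start().waitFor();
            System.out.println("rsync exited with " + exit);
        }
    }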
You might want to check Rsyncrypto, for compressing and encrypting at the same time.
Dotan
Our company is using some software that ONLY accepts input from an "imaging device," i.e., a TWAIN device (e.g., a scanner).
The problem is that we are receiving our files digitally, so using an actual scanner would require us to print, scan, and shred documents that we already have on the computer, but not in the software.
I was curious if anybody has any idea of how we might be able to work around this problem in the meantime. My first thought was to find some way to trick the program into thinking we're using a scanner, via some new 'imaging device' that would just read in the file, and spit it out to the software, but I don't even know where to begin with that.
We put in a feature request, seeing as how this problem should obviously be addressed in the software itself, but the company is notorious for lagging pretty hard when it comes to updates.
The system used by scanners is called TWAIN, so you'd be looking for some sort of virtual TWAIN driver.
A quick Google search will produce several hits; I don't have any experience with the software myself, so I can't advise any further.
Two such providers I found via Experts Exchange:
http://www.twaintools.de
http://www.scanpoint-usa.com
OK, months late... but in case you are interested, I have a TWAIN driver framework/toolkit that might let you build this fairly easily, depending on just what your scanning app expects, and how hard it is to read images from your digital documents. It's a Microsoft Visual C++ project. No charge but you'd need our permission to redistribute a driver based on it: GenDS
The TWAIN Working Group also has a sample/skeleton driver. I think it's straight C, and it used to have some rather bad bugs (which is why I wrote mine ;-), but it might have gotten better.
Look for the "sample data source and application" on their download page.
And of course I have a 'commercial' version of GenDS that I use to write TWAIN drivers on contract.
I'm planning on doing more coding from home but in order to do so, I need to be able to edit files on a Samba drive on our dev server. The problem I've run into with several editors is that the network latency causes the editor to lock up for long periods of time (Eclipse, TextMate). Some editors cope with this a lot better than others, but are there any file system or other tweaks I can make to minimize the impact of lag?
A few additional points:
There's a policy against having company data on personal machines, so I'd like to avoid checking out the code locally.
The mount is over a PPTP VPN connection.
I'm mounting from a Linux or OS X client.
Use a source control system — Subversion, Perforce, Git, Mercurial, Bazaar, etc. — so you're never editing code on a shared server. Instead you should be editing a local work area and committing changes to a repository located on the network.
Also, convince your company to adapt their policy such that company code is allowed on personal machines if it's on an encrypted volume. Encrypted disk images that you can use for this are trivial to create using Disk Utility, and can use strong cryptography. You can get even more security by not storing your encryption passphrase in your keychain, and instead typing it every time you mount the encrypted volume; this means that even if your local user account is compromised, as long as you don't have the volume mounted, nobody else will be able to mount it.
I did this all the time when I was consulting and none of my clients — some of whom had similar rules about company code — ever had a problem with it once I explained how things worked. (I think some of them even started using encrypted disk images even within their offices.)
The Remate plugin simply disables TextMate's dreadful refresh-on-focus feature.
Download, unpack, double-click, and choose "Disable Refresh on Regaining Focus" from the "Window" menu (you can refresh manually by right-clicking the project in the drawer). Voilà!
If you are accessing the data from your personal computer, it is in your RAM, so we will assume that you just can't store it on your hard drive, floppy, USB stick, etc.
Your solution is a RAM drive. Copy the files you need to edit there using whatever method you prefer (I would suggest source control), and then you can edit them without lag. When you are done, commit them back to the server.
As was pointed out, your editor may be caching changes to your temp directory, or maybe even your swap file (if it is in memory, it can get swapped out). The solution to that is to get a much larger RAM drive and run a virtual machine inside it. I'm not sure what OS you are running, but you can get a pretty slim install of most OSes if all you are doing is editing source code.
If you don't have enough RAM, get a Gigabyte i-RAM solid-state drive and remove the battery; that way it will lose everything when you power down.
Set VMware so that it does not let the host OS swap any of the virtual machine's memory. Keep a baseline VM on your hard drive and copy it to your RAM drive before booting it up. Then you can use the disk inside the VM like a normal hard drive, even though it is actually RAM.
It might be a good idea to run a secure erase on your RAM drive before powering down. Also keep in mind that researchers have found that if you super-cool a RAM chip before removing it from a running computer and place it in a new computer quickly enough, the data may still be intact (a cold-boot attack).
I guess it all comes down to how detailed that policy is, and how it is interpreted.
Good luck!
Short answer: there is no trick. CIFS is really geared toward LANs with reasonably calm traffic, so you have zero chance of avoiding intermittent lag when accessing a share through a VPN. At some point the editor needs to access the file with blocking I/O, because it makes no real sense to do otherwise.
You could switch editors and use Emacs + TRAMP, which is designed for working on remote files (you open them with paths like /ssh:user@devserver:/path/to/file).