How to add a file to a dvc-tracked folder without pulling the whole folder's content? - dvc

Let's say I am working inside a git/dvc repo. There is a folder data containing 100k small files. I track it with DVC as a single element, as recommended by the docs (and because, in my experience, DVC is quite slow when tracking that many files one by one):
dvc add data
I clone the repo onto another workspace, and now I have the data.dvc file locally but none of the actual files inside yet. I want to add a file named newfile.txt to the data folder and track it with DVC. Is there a way to do this without pulling the whole content of data locally?
What I have tried so far:
Adding the data folder again:
mkdir data
mv path/to/newfile.txt data/newfile.txt
dvc add data
The data.dvc file is rebuilt from the local state of data, which only contains newfile.txt, so this doesn't work.
Adding the file as a single element in data folder:
dvc add data/newfile.txt
I get:
Cannot add 'data/newfile.txt', because it is overlapping with other DVC tracked output: 'data'.
To include 'data/newfile.txt' in 'data', run 'dvc commit data.dvc'
Using dvc commit as suggested:
mkdir data
mv path/to/newfile.txt data/newfile.txt
dvc commit data.dvc
As in 1., data.dvc is rebuilt from the local state of data.

I clone the repo onto another workspace, and now I have the data.dvc file locally but none of the actual files inside yet (I haven't dvc pulled). I want to add a file to the data folder and track it with DVC. Is there a way to do this without pulling the whole content of data locally?
Interesting question. I think there is no easy way to do this right now, because if you dvc add data again on this other machine with only one file in there, DVC will think you deleted all the other files, create a new cached version of the data dir (containing only the new file), and update the .dvc file accordingly (as you discovered).
You could open a feature request in https://github.com/iterative/dvc.org/issues.
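In the meantime, the workflow that does work, heavy as it is, is to pull the directory first. A rough sketch, assuming a default remote is configured:
dvc pull data.dvc
cp path/to/newfile.txt data/
dvc add data
git add data.dvc
git commit -m "add newfile.txt to data"
dvc push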

Related

Delete cached data from DVC

I would like to be able to delete individual files or folders from the DVC cache, after they have been pulled with dvc pull, so they don't occupy space on local disk.
Let me make things more concrete and summarize the solutions I found so far. Imagine you have downloaded a data folder using something like:
dvc pull <my_data_folder.dvc>
This will place the downloaded data into .dvc/cache, and it will create a set of soft links in my_data_folder (if you have configured DVC to use soft links). If you list the folder:
ls -l my_data_folder
you will see something like:
my_data_file_1.pk --> .dvc/cache/4f/7bc7702897bec7e0fae679e968d792
my_data_file_2.pk --> .dvc/cache/4f/7bc7702897bec7e0fae679e968d792
...
Imagine you don't need this data for a while and want to free the disk space it occupies. I know of two manual approaches for doing that, although I am not sure about the second one:
Preliminary step (optional)
This is not needed if you have symlinks (which I believe is the case, at least on unix-like OSes):
dvc unprotect my_data_folder
Approach 1 (verified):
Delete all the cached data. From the repo's root folder:
rm -r my_data_folder
rm -rf .dvc/cache
This seems to work properly and will completely free the disk space previously used by the downloaded data. Once we need the data again, we can pull it with dvc pull as before. The drawback is that we are removing all the data downloaded with DVC so far, not only the data corresponding to my_data_folder, so we would need to dvc pull all of that data again.
Approach 2 (NOT verified):
Delete only specific files (this should be thoroughly tested to make sure it does not corrupt DVC in any way):
First, take note of the path indicated in the soft link:
ls -l my_data_folder
You will see something like:
my_data_file_1.pk --> .dvc/cache/4f/7bc7702897bec7e0fae679e968d792
my_data_file_2.pk --> .dvc/cache/4f/7bc7702897bec7e0fae679e968d792
If you want to delete my_data_file_1.pk, from the repo's root folder run:
rm .dvc/cache/4f/7bc7702897bec7e0fae679e968d792
Note on dvc gc
For some reason, running dvc gc does not seem to delete the files from the cache, at least in my case.
I would appreciate it if someone could suggest a better way, or comment on whether the second approach is actually appropriate. Also, if I want to delete the whole folder and not go file by file, is there any way to do that automatically?
Thank you!
It's not possible at the moment to granularly specify a directory / file to be removed from the cache. Here are the tickets to vote on and ask to prioritize:
dvc gc remove
Reconsider gc implementation
For some reason, running dvc gc does not seem to delete the files from the cache, at least in my case.
This is a bit concerning. If you run it with the -w option, it keeps only the files/dirs that are referenced in the current versions of the .dvc and dvc.lock files, and removes everything else.
So, let's say you are building a model:
my_model_file.pk
You created it once, its hash is 4f7bc7702897bec7e0fae679e968d792, and that hash is written in dvc.lock or in my_model_file.dvc.
Then you do another iteration and now the hash is different: 5a8cc7702897bec7e0faf679e968d363. That new hash is now written in the .dvc or lock file, which means the model corresponding to the previous hash 4f7bc7702897bec7e0fae679e968d792 is not referenced anymore. In this case dvc gc -w should definitely collect it. If that is not happening, please create a ticket and we'll try to reproduce and take a look.
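For reference, a minimal sketch of the garbage-collection commands discussed above (flags as I understand them; check dvc gc --help for your version):
dvc gc -w            # keep only cache entries referenced in the current workspace
dvc gc -w --cloud    # additionally remove unreferenced data from remote storage (use with care)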

Different local and remote organisation R Project and GitHub

I want to version control my R scripts, so I've created an R project and a GitHub repo. My scripts are scattered through several directories within the same directory where the R project is.
I would like my GitHub repository to contain only the scripts, independently of the folders they are locally stored in. However, when I run the commands below:
git add folder/file.R
git commit -m "my_message"
git push -u origin master
A directory named folder is created containing file.R, but I'd like to just see file.R without the folder. Do you know how I can do this? Also, would it be good practice? My local folders are organized so that each directory contains its own scripts and results; that's the reason the scripts are separated.
Thank you very much
Is there a way to add file.R without specifying the path?
Not using git add, no. The design constraint for git add is that it should store the file's name exactly as it appears, including the forward slashes, so if the file's name is folder/file.R, that's the file's name.
You have some options here though:
You can make a parallel directory where you put the files with the names you want them to have. Run git init in that directory, copy the folder/file.R file to file.R in that directory, then cd into it (cd ../gitdir or whatever is appropriate), and git add file.R.
This method is probably the best because it's the simplest.
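A rough sketch of this approach, with made-up directory names:
mkdir ../flat-repo && cd ../flat-repo
git init
cp ../my-r-project/folder/file.R file.R
git add file.R
git commit -m "add file.R at the top level"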
You can write your own programs using git hash-object -w and git update-index, which are two of Git's plumbing commands. A plumbing command, in Git, is basically a command that exists so that you can build user-facing commands: they're not meant to be run by humans but rather by other programs. So you write a program (in whatever language you like) that uses these plumbing commands to achieve whatever you want.
In particular, you can create or find a Git blob object holding the contents of file.R as read from anywhere you like, then use git update-index to create an index entry holding whatever path you like and referring to the blob object you created (or found) with git hash-object with the -w flag.
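For illustration, a hedged sketch of the plumbing route (the target path file.R and the 100644 mode are assumptions for a regular, non-executable file):
blob=$(git hash-object -w folder/file.R)                  # write the blob into the object database and print its hash
git update-index --add --cacheinfo 100644 "$blob" file.R  # create an index entry for that blob at the path you want
git commit -m "add file.R at the top level via plumbing"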
Since Git is a suite of tools, not a solution, you can come up with your own method. The tools in Git are made with particular approaches in mind, but they are flexible enough to be repurposed.

Revert a dvc remove -p command

I have just removed a DVC tracking file by mistake using the command dvc remove training_data.dvc -p, which caused my entire training dataset to disappear. I know that in Git we can easily restore a deleted branch based on its hash. Does anyone know how to recover all my lost data in DVC?
Most likely you are safe (at least the data is not gone). From the dvc remove docs:
Note that it does not remove files from the DVC cache or remote storage (see dvc gc). However, remember to run dvc push to save the files you actually want to use or share in the future.
So, if you created training_data.dvc with dvc add and/or dvc run, and dvc remove -p didn't ask/warn you about anything, it means that the data is cached (similarly to Git) in .dvc/cache.
There are ways to retrieve it, but I would need to know a little bit more detail: how exactly did you add your dataset? Did you commit training_data.dvc, or is it completely gone? Was it the only data you have added so far? (Happy to help you in the comments.)
Recovering a directory
First of all, here is the document that describes briefly how DVC stores directories in the cache.
What we can do is to find all .dir files in the .dvc/cache:
find .dvc/cache -type f -name "*.dir"
outputs something like:
.dvc/cache/20/b786b6e6f80e2b3fcf17827ad18597.dir
.dvc/cache/00/db872eebe1c914dd13617616bb8586.dir
.dvc/cache/2d/1764cb0fc973f68f31f5ff90ee0883.dir
(If the local cache is lost and we are restoring data from remote storage, the same logic applies; only the commands, e.g. to find files with the .dir extension on S3, look different.)
Each .dir file is a JSON description of one version of a directory (file names, hashes, etc.). It has all the information needed to restore it. The next thing we need to do is figure out which one we need. There is no single rule for that; here is what I would recommend checking (pick depending on your use case):
Check the date modified (if you remember when this data was added).
Check the content of those files: if you remember a specific file name that was present only in the directory you are looking for, just grep for it (see the sketch after this list).
Try to restore them one by one and check the directory content.
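For example, a hypothetical grep that locates the .dir files mentioning a file called train.tsv (the file name and the two-character cache layout match the examples in this answer):
grep -l '"relpath": "train.tsv"' .dvc/cache/*/*.dir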
Okay, now let's imagine we decided that we want to restore .dvc/cache/20/b786b6e6f80e2b3fcf17827ad18597.dir (e.g. because its content looks like:
[
  {"md5": "6f597d341ceb7d8fbbe88859a892ef81", "relpath": "test.tsv"},
  {"md5": "32b715ef0d71ff4c9e61f55b09c15e75", "relpath": "train.tsv"}
]
and we want to get back a directory containing train.tsv).
The only thing we need to do is to create a .dvc file that references this directory:
outs:
- md5: 20b786b6e6f80e2b3fcf17827ad18597.dir
  path: my-directory
(note that the cache path 20/b786b6e6f80e2b3fcf17827ad18597.dir becomes the hash value 20b786b6e6f80e2b3fcf17827ad18597.dir: the first two characters of the hash are used as the cache subdirectory name)
And run dvc pull on this file.
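Concretely, assuming the snippet above was saved as my-directory.dvc:
dvc pull my-directory.dvc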
That should be it.

Updating tracked dir in DVC

According to this tutorial, when I update a file I should remove it from under DVC control first (i.e. execute dvc unprotect <myfile>.dvc or dvc remove <myfile>.dvc) and then add it again via dvc add <myfile>. However, it's not clear if I should apply the same workflow to directories.
I have the directory under DVC control with the following structure:
data/
1.jpg
2.jpg
Should I run dvc unprotect data every time the directory content is updated?
More specifically I'm interested if I should run dvc unprotect data in the following use cases:
A new file is added. For example, I put a 3.jpg image in the data dir.
A file is deleted. For example, I delete the 2.jpg image in the data dir.
A file is updated. For example, I edit the 1.jpg image in a graphic editor.
A combination of the previous use cases (i.e. some files are updated, others deleted, and new files added).
Only when a file is updated (i.e. you edit 1.jpg with your editor), AND only if the hardlink or symlink cache type is enabled.
Please, check this link:
updating tracked files has to be carried out with caution to avoid data corruption when the DVC config option cache.type is set to hardlink and/or symlink
I would strongly recommend reading this document: Performance Optimization for Large Files. It explains the benefits of using hardlinks/symlinks.
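For reference, a minimal sketch of switching the cache type (the value list and its order are a matter of preference; --relink is available in reasonably recent DVC versions):
dvc config cache.type "reflink,symlink,hardlink,copy"
dvc checkout --relink    # re-link files already in the workspace using the new cache type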
The links above do not work anymore; here is the up-to-date link, and I am also pasting the instructions here:
Modifying content
Unlink the file with dvc unprotect. This will make train.tsv safe to edit:
dvc unprotect train.tsv
Then edit the content of the file, for example with:
echo "new data item" >> train.tsv
Add the new version of the file back with DVC:
dvc add train.tsv
git add train.tsv.dvc
git commit -m "modify train data"
If you have remote storage and/or an upstream repo:
dvc push
git push
Replacing files
If you want to replace the file altogether, you can take the following steps.
First, stop tracking the file by using dvc remove on the .dvc file. This will remove train.tsv from the workspace (and unlink it from the cache):
dvc remove train.tsv.dvc
Next, replace the file with new content:
echo new > train.tsv
And start tracking it again:
dvc add train.tsv
git add train.tsv.dvc .gitignore
git commit -m "new train data"
If you have remote storage and/or an upstream repo:
dvc push
git push

Git Archive but first put all the files inside a folder then start archiving

As the title suggests, I want to know if there is a single git command that puts my whole project in one folder first (not including .gitignored files) and then archives that folder, leaving the ignored files out of the archive, which is nice.
This would be beneficial for me as I am working on a WordPress plugin with multiple releases. Some references.
I want all the files (minus the .gitignored files) moved to a folder first, then to archive that folder
It is possible in one command provided you define an alias, but this isn't git-related:
you can:
clone your repo elsewhere (that way you don't get any ignored or private file)
move your files as you see fit in that local clone
archive (tar cpvf yourArchive.tar yourFolder)
But git archive alone won't help you move those files, which is why I would recommend a script with custom bash commands (not git commands).
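A minimal sketch of those three steps as a script (the repository URL and the folder/archive names are placeholders):
git clone --depth 1 https://example.com/you/your-plugin.git yourFolder
rm -rf yourFolder/.git    # drop the Git metadata so it doesn't end up in the archive
tar cpvf yourArchive.tar yourFolder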
You don't really need to copy / clone the repo anywhere.
Make sure you committed all your changes.
Process the files any way you want.
Run tar -cvjf dist/archive-name.tbz2 --transform='s,^,archive-name/,' $(git ls-tree --full-tree -r --name-only --full-name HEAD)
Run git reset --hard to restore the working tree without any of the changes you made in step #2.
Hints:
The --transform='s,^,archive-name/,' is so your files will be extracted to archive-name/...; you can remove it if you don't need that.
