When changing directories and traversing file paths, I noticed that sometimes users type cd foo/bar/ and sometimes cd foo/bar.
I was wondering what the difference was, if any? I presume there's no difference in the context of simply changing directories, but are there consequences of using each method elsewhere?
You're right, there is no difference when changing directories on the command line. You're also right in presuming there is no difference between:
file/path/example/ and file/path/example in most other contexts (where example represents a folder). A few tools do give the trailing slash a meaning, though; rsync source paths are the best-known case, as shown below.
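A minimal illustration (directory names here are hypothetical):
cd /tmp/example/      # identical to...
cd /tmp/example       # ...this: cd resolves both to the same directory
rsync -a src/ dest/   # copies the contents of src into dest
rsync -a src dest/    # copies src itself, producing dest/src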
Under many, most, or maybe all Unix file systems, if you iterate over the entries in a directory, there will usually/always be at least two: one pointing to the directory itself (".") and one back-pointing to the parent directory (".."). Except maybe for the root, which might behave differently (on most systems its ".." simply points back to the root itself).
But it might be that this is not true under some other file systems that purport to comport with most Unix conventions (but don't quite).
Is there a directory somewhere in a Unix file system guaranteed to always be an empty directory and whose link count can always be read using, e.g., stat() or equivalent?
If so, a program could check the link count, expect it to be 2 (or perhaps some other known value), and adjust its behavior accordingly if it finds something else.
There is no standard directory which is always empty, but you could create one if you needed to. One easy way to do this would be the mkdtemp() function.
However, there is no guarantee that all directories live on the same file system. For instance, if a FAT file system is mounted, directories on it may report link counts differently from native Unix file systems, since FAT has no real hard links.
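A minimal shell sketch of that idea (mktemp -d is the command-line counterpart of mkdtemp(); the -c %h format is GNU stat, while BSD/macOS would use stat -f %l instead):
d=$(mktemp -d) || exit 1
stat -c %h "$d"   # link count; 2 for an empty directory on typical Unix file systems
rmdir "$d"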
I have several TB of photos, spread throughout subfolders. Each photo has an original, a watermarked resized version, and a thumbnail.
Named as such:
img1001.jpg
img1001_w.jpg
img1001_t.jpg
DSC9876.jpg
DSC9876_w.jpg
DSC9876_t.jpg
etc etc.
What I need to do, is move all of the originals to a different server. Presumably rsync is the best tool for this?
Is it possible to rsync a directory, while excluding any files that end in _t.jpg or _w.jpg? I'm not concerned about possible edge cases where the original file ends with either of those, as there are no such cases in my data.
Or am I better off just rsync'ing the whole lot, and then selectively deleting the _t & _w files from the destination?
Thanks
Yes, rsync is a good choice, not least because it works incrementally, so you can stop and restart it when needed.
By default rsync does not delete anything on the destination; that only happens if you ask for it with --delete.
Yes, you can sync whole directory structures.
It is possible to exclude files or folders from syncing, either with --exclude=PATTERN on the command line or with --exclude-from=FILE to read patterns from a file:
rsync -av [--exclude-from=<excludes-file>] <source> <destination>
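For this specific case, something along these lines should work (host and paths are placeholders, and --dry-run lets you verify the selection before anything is copied):
rsync -av --dry-run --exclude='*_w.jpg' --exclude='*_t.jpg' /data/photos/ user@newserver:/data/photos/
rsync -av --exclude='*_w.jpg' --exclude='*_t.jpg' /data/photos/ user@newserver:/data/photos/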
When working with CMake, is it better to work with one large CMakeLists.txt in the root of the project, or, as some projects do, to have one in each subdirectory too?
I would assume that for large projects, having one in each directory is better.
If so, where should the threshold be?
I would certainly go for using multiple CMakeLists.txt files.
As a rule of thumb I think you should go for one CMakeLists.txt (and thus subdirectory) per target. So, each library or executable has its own CMakeLists.txt.
You can then create one "master" CMakeLists.txt that includes all the others via add_subdirectory calls. If you take care to order these calls correctly, you can easily reference previously defined targets in the other CMakeLists.txt files, as in the sketch below.
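A minimal sketch of that layout (project and target names are hypothetical):
# Top-level CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(Example C)
add_subdirectory(mylib)   # defines target "mylib" first
add_subdirectory(app)     # so "app" can link against it below

# mylib/CMakeLists.txt
add_library(mylib mylib.c)
target_include_directories(mylib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})

# app/CMakeLists.txt
add_executable(app main.c)
target_link_libraries(app PRIVATE mylib)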
I have two dynamic views in ClearCase which, as far as I know, are supposed to be "equal".
One is supposed to look at the "Main branch" and one at some other branch (let's call it A).
I did a merge from A to Main (in the Main view) and for some reason the code at the A view compiles while Main does not.
Is there a way to compare the views for differences?
The simplest way is to use an external diff tool on those two views (like WinMerge or BeyondCompare on Windows, KDiff3 on Unix or Windows, ...).
I would actually create two new views (with the same config spec as the two initial views), to remove any "cache" effect, and start the comparison there.
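For instance, on Unix dynamic views are typically mounted under /view, so a recursive diff between the two view roots is a reasonable starting point (the view tags and VOB name here are hypothetical):
diff -r /view/main_view/vobs/myvob /view/A_view/vobs/myvob | less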
Once that initial examination is done, I would start the compilation in those two views, and see if one of them still doesn't compile.
Don't forget that merging A to Main will not always result in the same set of files after the merge.
It would be the same only if nothing has changed in Main since A started (or since the last merge from A to Main).
The setcs -current you mention will:
-current
Causes the view_server to flush its caches and reevaluate the current config spec, which is stored in the file config_spec in the view storage directory. This includes:
Evaluating time rules with nonabsolute specifications (for example, now, Tuesday)
Reevaluating -config rules, possibly selecting different derived objects than previously
Re-reading files named in include rules
If your config spec depends on an include file which was itself at the wrong version, the first setcs would select the right version of that include file, and the second one would read its new content and select the right versions for the rest.
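In command form, a quick sketch of that double invocation (run from inside the view):
cleartool setcs -current   # flush caches and reevaluate the config spec
cleartool setcs -current   # again, in case an include file changed version on the first pass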
Is it good practice for links to always point to an absolute path, rather than to a path relative to the current directory?
I am asking this with reference to a case where I need to maintain software, and links for all its previous versions should always point to the latest version.
Define "good practice". Whether a link points to an absolute path or not depends on the relationship between the files.
If the files are always in the same relative positions, but could be moved around (eg aliases in bin/), they should be relative. If the actual file is in a known location (eg, you want to link a default config file to ~/.config), then use an absolute path.
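A minimal sketch of both flavors (the myapp paths are hypothetical):
ln -s ../share/myapp/myapp.conf bin/myapp.conf     # relative: survives moving the tree as a whole
ln -s /etc/myapp/myapp.conf ~/.config/myapp.conf   # absolute: pins one fixed location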
Depends on what you want to do. My Linux system has both kinds of symlinks.
# List every symlink under /usr, /bin and /lib along with its target:
find /usr /bin /lib -type l -print0 | xargs -0 ls -ld | less
I don't think there's a "good practice" for that. There are different use cases.
For example, if I store a tree with a program installed locally in /home/.../application, I may want to symlink "default config" to some specific config without an absolute path. That way, when I move the whole tree to /home/.../application-other-instance, the link to the config file stays correct.
On the other hand, if I want to reference some global file in /etc/... in a local dir, I will do it with an absolute path symlink. That guarantees that I'm pointing to the same file anywhere I move.
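A quick demonstration of the relative case (all names hypothetical):
mkdir -p application/config
echo 'key=value' > application/config/specific.conf
ln -s config/specific.conf application/default.conf   # relative target
mv application application-other-instance
cat application-other-instance/default.conf           # the link still resolves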
Just think about what you want to achieve, and the relative/absolute decision will be either obvious or irrelevant. The only "never do that" rule is probably: never link to anything under the root directory (/xxx) via a long climb like ../../../../../../../xxx.
I don't think I can agree with that as a generalization.
There are definitely cases where a relative link makes more sense: within the directory tree of a project, for example. If the project is backed up, or moved (even duplicated) into another place, absolute links could be confusing and/or disastrous.
Even if you are talking about system-wide tools, there will be cases where relative links make sense, and other times not. Compare having a link to a very generic tool, like grep, versus something like multiple versions and target flavors of the GNU compiler tools living on the same host. In the latter case, absolute links to the specific tool versions will probably be required.
It all comes back to what you really want to do in each case. The general answer is that there is no generalized answer.