Executing two commands in one line - unix

I have a particular problem that requires me to run a cd command followed by rm. But I'm constrained by the fact that I have to execute both commands in a single line.
Let's say I have two folders in my current directory, "A" and "B". Within the "A" folder, I have two more folders, "Fol1" and "Fol2", plus a text file called "File1". Lastly, "Fol1" and "Fol2" each contain a single text file (it doesn't matter what they're called).
To give an illustration:
(Current dir)
|-- A
|   |-- Fol1
|   |   `-- FileA
|   |-- Fol2
|   |   `-- FileB
|   `-- File1
`-- B
I want to go into "A", then remove everything except what's in "Fol1" and "Fol2".
I've found that to remove everything except certain directories you can run:
rm -r !(Fol1|Fol2)
And I saw in another post that you can use & to combine two commands. So from the current directory, I decided to run:
cd A & rm -r !(Fol1|Fol2)
But when I ran those commands, I got:
[1] 17854
[1]+ Done cd A
And it ended up deleting "A" and "B" and everything else in it.
Is there something that I'm missing within the commands? Anything would be appreciated!

The ampersand just ran the cd command in the background, which started a subshell, changed directories, then exited. Then the rm -r ran in the current directory, which had no Fol1 or Fol2 to ignore. You want a semicolon to separate commands like this:
cd A; rm -r !(Fol1|Fol2)
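If you want the rm to run only when the cd actually succeeds, && is a slightly safer separator. Also note that the !(...) pattern relies on bash's extglob option, so if it isn't already enabled in your shell, turn it on first:
shopt -s extglob    # only needed if extended globbing isn't already enabled
cd A && rm -r !(Fol1|Fol2)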

Related

Getting the previous working directory value

Here is my simple command.
ls -lrth ../ | grep file | awk -F" " -v orig=`cd .. | pwd ` -v sort=`pwd` '{print $NF "," $7"/"$8"/"$9","orig"," sort }'
I'm trying to get the value of the path just above my current working directory (its parent).
current working directory = /home/PC1/Environment/Test1
What I want pwd to give me is /home/PC1/Environment, and I don't want to hardcode it.
I tried to use cd .. | pwd but it still displays my current working directory, not the parent directory.
Can anyone help? Some suggestions would be nice.
Use $(cd .. && pwd). You can also use $(cd - && pwd) to get your previous working directory even if it wasn't the parent of your current one. (In general, you should use $(...) instead of `...` to get command output; the latter interferes with quoting and doesn't nest, so can cause surprising results).
Your cd .. | pwd runs the cd and the pwd at the same time in separate subshells connected by a pipe, so the cd never affects the shell that runs pwd, which is not what you want.
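Applied to the original one-liner, that would look something like this (an untested sketch, keeping the rest of the pipeline unchanged):
ls -lrth ../ | grep file | awk -F" " -v orig="$(cd .. && pwd)" -v sort="$(pwd)" '{print $NF "," $7"/"$8"/"$9 "," orig "," sort}'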

terminal command to act on filenames that don't contain text

I have a directory full of files with names such as:
file_name_is_001
file_name_001
file_name_is_002
file_name_002
file_name_is_003
file_name_003
I want to copy only the files whose names don't contain 'is'. I'm not sure how to do this. I have tried to search for it, but can't seem to google the right phrase to find the results.
Details depend on the operating system, shell, etc.
For a unix system, a quite verbose but easy-to-understand approach could look like this (please note that I didn't test it):
mkdir some_temporary_directory
mv *_is_* some_temporary_directory
cp * where_ever_you_want_to_copy_it
mv some_temporary_directory/* .
rmdir some_temporary_directory
You can do this using bash. First, here's a command to get you a list of files that don't contain the text _is_:
ls | grep -v "_is_"
This takes the output of ls and matches all values that DO NOT contain _is_ using grep -v.
In order to then copy these files, we need to turn the lines output by grep into arguments of cp. We can do this using xargs:
ls | grep -v "_is_" | xargs -J % cp % new_folder
From the xargs man page, it is a tool to "build and execute command lines from standard input".
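Note that -J is the BSD xargs spelling (macOS and the BSDs); GNU xargs on most Linux systems doesn't have -J, but a close equivalent uses the -I replace-string option, which runs one cp per file:
ls | grep -v "_is_" | xargs -I {} cp {} new_folder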

Run a command multiple times with arguments given from standard input

I remember seeing a unix command that would take lines from standard input and execute another command multiple times, with each line of input as the arguments. For the life of me I can't remember what the command was, but the syntax was something like this:
ls | multirun -r% rm %
In this case rm % was the command to run multiple times, and -r% was an option that means replace % with the input line (I don't remember what the real option was either, I'm just using -r as an example). The complete command would remove all files in the current directory by passing the name of each file in turn to rm (assuming, of course, that there are no directories in the current directory). What is the real name of multirun?
The command is called 'xargs' :-) and you can run it as follows:
ls | xargs echo I would love to rm -f the files
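If you specifically want the per-line replacement syntax you remember, xargs has a replace-string option, -I, which plays the role of your hypothetical -r%:
ls | xargs -I % rm %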

Efficient way of getting listing of files in large filesystem

What is the most efficient way to get a "ls"-like output of the most recently created files in a very large unix file system (100 thousand files +)?
Have tried ls -a and some other variants.
You can also use less to search and scroll it easily.
ls -la | less
If I'm understanding your question correctly, try
ls -a | tail
If the files are in a single directory, then you can use:
ls -lt | less
the -t option to ls will sort the files by modification time and less will let you scroll through them
If you want recent files across an entire file system, i.e., in different directories, then you can use the find command:
find dir -mtime -1 -print | xargs ls -ld
Substitute the directory where you want to start the search for "dir". The find command will print the names of all of the files that have been modified in the last day (-mtime -1 means modified less than one day ago) and the xargs command will take that list of files and feed it to ls, giving you the ls-like output you want.
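If you have GNU find, another approach (a sketch; -printf is a GNU extension) is to print each file's modification timestamp and sort on it, so the most recent files end up at the bottom:
find dir -type f -printf '%T@ %p\n' | sort -n | tail -20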

Diff files present in two different directories

I have two directories with the same list of files. I need to compare all the files present in both the directories using the diff command. Is there a simple command line option to do it, or do I have to write a shell script to get the file listing and then iterate through them?
You can use the diff command for that:
diff -bur folder1/ folder2/
This will output a recursive diff that ignores whitespace changes, with a unified context:
b flag means ignore changes in the amount of whitespace
u flag means a unified context (3 lines before and after)
r flag means recursive
If you are only interested to see the files that differ, you may use:
diff -qr dir_one dir_two | sort
Option "q" will only show the files that differ but not the content that differ, and "sort" will arrange the output alphabetically.
Diff has an option -r which is meant to do just that.
diff -r dir1 dir2
diff can not only compare two files, it can, by using the -r option, walk entire directory trees, recursively checking differences between subdirectories and files that occur at comparable points in each tree.
$ man diff
...
-r --recursive
Recursively compare any subdirectories found.
...
Another nice option is the über-diff-tool diffoscope:
$ diffoscope a b
It can also emit diffs as JSON, html, markdown, ...
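For example, to write an HTML report (option names may vary between diffoscope versions, so check diffoscope --help):
diffoscope --html report.html a b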
If you specifically don't want to compare the contents of files and only want to check which ones are not present in both directories, you can compare lists of files generated by another command.
diff <(find DIR1 -printf '%P\n' | sort) <(find DIR2 -printf '%P\n' | sort) | grep '^[<>]'
-printf '%P\n' tells find to not prefix output paths with the root directory.
I've also added sort to make sure the order of files will be the same in both calls of find.
The grep at the end keeps only the < and > lines, i.e. the names that appear in one directory but not the other, and drops diff's position headers.
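To illustrate with made-up names: if onlyA.txt exists only in DIR1 and onlyB.txt exists only in DIR2, the filtered output would look like
< onlyA.txt
> onlyB.txt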
If it's GNU diff then you should just be able to point it at the two directories and use the -r option.
Otherwise, try using
for i in $(\ls -d ./dir1/*); do diff ${i} dir2; done
N.B. As pointed out by Dennis in the comments section, you don't actually need to do the command substitution on the ls. I've been doing this for so long that I'm pretty much doing this on autopilot and substituting the command I need to get my list of files for comparison.
Also I forgot to add that I do '\ls' to temporarily disable my alias of ls to GNU ls so that I lose the colour formatting info from the listing returned by GNU ls.
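Taking Dennis's comment into account, the loop can simply be written as follows (when given one plain file and one directory, diff compares the file against the file of the same name inside the directory):
for i in ./dir1/*; do diff "$i" dir2; done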
When working with git/svn, or with multiple git/svn checkouts on disk, this has been one of the most useful things for me over the past 5-10 years; somebody else might find it useful too:
diff -burN /path/to/directory1 /path/to/directory2 | grep +++
or:
git diff /path/to/directory1 | grep +++
It gives you a snapshot of the different files that were touched without having to "less" or "more" the output. Then you just diff on the individual files.
In practice the question often arises together with some constraints, such as only comparing certain file types. In that case the following solution template may come in handy.
cd dir1
find . \( -name '*.txt' -o -iname '*.md' \) | xargs -i diff -u '{}' 'dir2/{}'
Here is a script to show differences between files in two folders. It works recursively. Change dir1 and dir2.
(search() { for i in $1/*; do [ -f "$i" ] && (diff "$1/${i##*/}" "$2/${i##*/}" || echo "files: $1/${i##*/} $2/${i##*/}"); [ -d "$i" ] && search "$1/${i##*/}" "$2/${i##*/}"; done }; search "dir1" "dir2" )
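The same logic, unrolled into a (hypothetical) multi-line form that is easier to read:
(
  search() {
    for i in "$1"/*; do
      name=${i##*/}
      # compare regular files and report the pair if they differ
      [ -f "$i" ] && (diff "$1/$name" "$2/$name" || echo "files: $1/$name $2/$name")
      # recurse into subdirectories
      [ -d "$i" ] && search "$1/$name" "$2/$name"
    done
  }
  search "dir1" "dir2"
)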
Try this:
diff -rq /path/to/folder1 /path/to/folder2
