Renaming files in a directory with various endings on a Mac?

I'm trying to rename a set of files of various filetypes in a directory on a MacBook Pro, changing one common word, say 'foo', to another word, say 'bar'.
E.g.:
foo.txt
form_foo.plist
home_foo.png
images_foo.zip
->
bar.txt
form_bar.plist
home_bar.png
images_bar.zip
Any ideas?

Use with care:
ls | grep foo | while read -r name; do echo mv "$name" "${name//foo/bar}"; done
As written, that only reports the commands it would run. Inspect the output, then rerun with the "echo" removed to actually perform the renames. This makes no attempt to cope with newlines in filenames, nor does it recurse into subdirectories. If you want to include files whose names begin with ., add -a to the invocation of ls. For safety's sake, you may want to add -i to the invocation of mv. Certainly make a backup first.

I don't have access to a Mac, but under Ubuntu you can use the rename command for this, if it's available on your system (see its man page).
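With the Perl-flavoured rename that Ubuntu ships (on a Mac you would have to install it yourself, e.g. from Homebrew, so treat its availability as an assumption), the substitution would be:
rename 's/foo/bar/' *foo*
Some distributions instead ship the util-linux flavour, which takes plain strings rather than a regex: rename foo bar *foo*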

Related

Piping the results of *nix commands into Vim's set of open files

I have a folder resembling this structure:
nietzsche.txt
kant.org
buddha.txt
kierkegaard.org
aristotle.txt
plato.org
I wish to read the text files that have the *.org extension, so I use the command:
ls | grep .org
The above command neatly sends the following to stdout:
> kant.org
> kierkegaard.org
> plato.org
I would like to open the files listed above in vim all at once - with the above given example, this is trivial; it would just mean typing out the list of files prefixed with "vim", for example:
vim kant.org kierkegaard.org plato.org
...but in my actual folder of articles there are several hundred plain text files with the *.org and *.txt extensions. It isn't a matter of converting the org files to true plain text; it's about getting vim to use the output of other commands through pipes. In reality, the conditions for generating the "books-to-read" list are far more complicated (i.e. using date last read, author, date written, etc.), so a simple find-and-replace of org-to-txt wouldn't work; I currently have a bash script that generates the list and spits it to stdout.
How would I get vim to accept the output of a command like grep as a list of files to open immediately?
In this specific example, ls | grep .org is pointless since you can simply do:
$ vim *.org
As for the general case, you would use xargs (see man xargs) on Unix-like systems:
$ <command that generates a list of files> | xargs -o vim
or:
$ <command that generates a list of files> | xargs vim --not-a-term
Note that xargs' -o and Vim's --not-a-term are more or less the opposite of each other. The former ensures that xargs passes a proper tty to Vim, while the latter ensures that Vim doesn't complain if there is no attached tty.
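As a concrete sketch using the listing from the question (this assumes none of the filenames contain spaces or newlines, since xargs splits on whitespace by default):
ls | grep '\.org$' | xargs -o vim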
You can use command-line completion inside vim (:e accepts only one file name, so use :args for a list):
:args *.org<C-a>
Read more at :h c_CTRL-A

Trim a file name in Unix

I have a file with name
ROCKET_25_08:00.csv
I want to trim the name of the file to
ROCKET_25_.csv
I tried mv, but mv alone isn't what I need, because there will be cases where there is more than one such file.
I want the name up to the second _.
How do I get that in Unix?
Please advise.
There are utilities that provide more flexible renaming, but one solution that uses nothing other than standard UNIX tools (like sed) would be:
ls -d * | sed -re 's/^([^_]*_[^_]*_)(.*)(\....)$/mv -v \1\2\3 \1\3/' | bash
This will only work in one directory; it won't process subdirectories. (Note that -r is GNU sed syntax; BSD/macOS sed spells it -E.)
It's not at all clear what you are actually trying to do, but if you just want to remove text between the last underscore and the period, you can do:
f=ROCKET_25_08:00.csv
echo ${f%_*}_.csv
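Since you say there may be more than one file, the same expansion drops straight into a loop. A sketch, assuming every file you want to trim matches *_*_*.csv:
# trims everything between the last _ and the .csv suffix, for each matching file
for f in *_*_*.csv; do mv -- "$f" "${f%_*}_.csv"; done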

Grep: Recursive option produces unexpected behavior when fed pipe-input

I've been using this utility successfully for many years, in many environments. But I'm noticing that in one particular environment, it produces very unexpected results.
grep -r 'search-term1' . | grep 'search-term2'
The above code greps recursively for all instances of search-term1, in the current-dir. The results are then piped to another grep, which selects only those lines that also contain search-term2. This works exactly as I would expect.
grep -r 'search-term1' . | grep -r 'search-term2'
The only difference in the above code is that the -r recursive flag is specified in both grep commands. I would expect the behavior not to change for this particular case. After all, the input to the 2nd grep is pipe input, and there's nothing further to be found recursively.
I have been using the command successfully, for many years, in many different environments (both unix and mac-os). However, the most recent environment that I started working in (unix), breaks the above behavior. The second piped grep searches for all instances of search-term2, not only in the piped-input, but also all files in my current directory. Because of this, instead of getting only results that contain both search-terms, I get all results in current-dir that contain the 2nd search term.
Is there any reason why this one particular environment produces this odd behavior? Is there any way I can avoid this, while still preserving the -r flag?
FAQ:
Q: Why am I using the -r flag on a piped input?
Ans: I actually have grep saved as an alias, with many different options and flags that I always want to use as a default. The recursive flag is one of them. I would like to always use this alias, instead of having to type out all the flags every time.
Q: If you want to search for all instances matching both search terms, why not do (insert-superior-method-here) instead?
Ans: You're probably right. I'm sure there are things I can change in my usual habits that would work around this issue. However, out of intellectual curiosity, I would like to find out why recursive greps on pipes work as intended in most environments but not all, and whether that can somehow be resolved.
The -r flag to grep changed in grep version 2.11 (see the release notes) to implicitly use the working directory as the input if no file arguments are given:
If no file operand is given, and a command-line -r or equivalent
option is given, grep now searches the working directory.
You aren't giving the second grep any file arguments so it defaults to the current directory despite there being pipe input.
Try grep -r 'search-term1' . | grep -r 'search-term2' - as a workaround; the trailing - names stdin explicitly, giving the second grep a file operand.
grep -r 'search-term1' . | grep -r -d skip 'search-term2' may also work around the problem.
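You can see the changed behaviour directly with GNU grep 2.11 or newer; the second command forces the pipe to be read by naming stdin:
printf 'search-term2\n' | grep -r 'search-term2'     # no file operand: scans . and ignores the pipe
printf 'search-term2\n' | grep -r 'search-term2' -   # '-' names stdin, so the pipe is searched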

Complex command execution in Makefile

I have a query regarding the execution of a complex command in the makefile of the current system.
I am currently using the shell function in the makefile to execute the command. However, my command fails because it is a combination of many commands and its execution collects a huge amount of data. The makefile content is something like this:
variable=$(shell ls -lart | grep name | cut -d/ -f2- )
However, make fails with an execvp error, since the file listing is huge and I need to parse all of it.
Please suggest ways to overcome this issue. Basically, I would like to execute a complex command, assign its output to a makefile variable, and use that variable later in the program.
(This may take a few iterations.)
This looks like a limitation of the architecture, not a Make limitation. There are several ways to address it, but you must show us how you use variable, otherwise even if you succeed in constructing it, you might not be able to use it as you intend. Please show us the exact operations you intend to perform on variable.
For now I suggest you do a couple of experiments and tell us the results. First, try the assignment with a short list of files (e.g. three) to verify that the assignment does what you intend. Second, in the directory with many files, try:
variable=$(shell ls -lart | grep name)
to see whether the problem is in grep or cut.
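If the variable itself turns out to be the problem, one workaround is to keep the list in a file and consume it at recipe time instead of at parse time. A sketch only; the target and file names here are made up for illustration, and recipe lines must be indented with a tab:
filelist.txt:
	ls -lart | grep name | cut -d/ -f2- > $@

process: filelist.txt
	while read -r entry; do echo "$$entry"; done < filelist.txt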
Rather than store the list of files in a variable, you can easily use shell functionality to get the same result. It's a bit odd that you're flattening a recursive ls to get only the leaves and then running mkdir -p, which is really only useful if the parent directory doesn't exist, but if you know which depths you want (for example, the current directory and all subdirectories one level down) you can do something like this (note the doubled $$, which make needs in order to pass a literal $ through to the shell):
directories:
	for path in ./*name* ./*/*name*; do \
		mkdir "/some/path/$$(basename "$$path")" || exit 1; \
	done
or even (wrapping the basename call in sh -c so it runs once per found path, rather than being expanded by the invoking shell before find starts):
find . -name '*name*' -exec sh -c 'mkdir "/some/path/$(basename "$1")"' _ {} \;

Unix [Homework]: Get a list of /home/user/ directories in /etc/passwd

I'm very new to Unix, and currently taking a class learning the basics of the system and its commands.
I'm looking for a single command line to list off all of the user home directories in alphabetical order from the /etc/passwd directory. This applies only to the home directories, and not the contents within them. There should be no duplicate entries. I've tried many permutations of commands such as the following:
sort -d | find /etc/passwd /home/* -type -d | uniq | less
I've tried using -path, -name, removing -type, and using -prune, and changing the search pattern to things like /home/*/$, but haven't gotten good results even once. At best I can get a listing of my own directory (complete with every directory inside it, which is bad), and the directories of the other students on the server (without the contained directories, which is good). I just can't get it to display the /home/user directories and nothing else for my own account.
Many thanks in advance.
/etc/passwd is a file. The home directory is usually field/column 6, with ":" as the delimiter. When you are dealing with a file whose structure has a distinct delimiter character, you should use a tool that can break your data down into smaller chunks for easier manipulation using fields and field delimiters. awk, cut, etc., or even the shell with the IFS variable set, can do the job, e.g.:
awk -F":" '{print $6}' /etc/passwd | sort
cut -d":" -f6 /etc/passwd | sort
(Use sort -u instead of sort if you need to drop duplicate entries.)
using the shell to read the file
while IFS=":" read -r a b c d e home_dir g
do
    echo "$home_dir"
done < /etc/passwd | sort
I think the tools you want are grep, tr and awk. Grep will give you the lines from the file that actually contain home directories. tr will let you turn the delimiters into spaces, which makes each line easier to parse.
Awk is just one program that would help you display the results that you want.
Good luck :)
Another hint: try ls --color=auto /etc; passwd isn't the kind of file that you think it is. Directories show up in blue.
In Unix, find is a command for finding files under one or more directories. I think you are looking for a command for finding lines within a file that match a pattern? Look into the command grep.
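For example, to pull out just the lines of the file that mention a directory under /home:
grep ':/home/' /etc/passwd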
sed 's|\(.[^:]*\):\(.[^:]*\):\(.*\):\(.[^:]*\):\(.[^:]*\)|\4|' /etc/passwd | sort
I think all this processing could be avoided. There is a utility to list directory contents.
ls -1 /home
If you'd like the order of the sorting reversed
ls -1r /home
Granted, this lists out just the directory names and doesn't include the '/home/' prefix, but that can be added back easily enough if desired, with something like this:
ls -1 /home | (while read -r line; do echo "/home/$line"; done)
I used something like:
ls -l -d $(cut -d':' -f6 /etc/passwd) 2>/dev/null | sort -u
The only thing I didn't do is sort alphabetically; I haven't figured that out yet.
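If it helps, a sketch that keeps the existence check but sorts on the paths themselves (assuming no home directory path contains spaces):
cut -d':' -f6 /etc/passwd | sort -u | while read -r dir; do [ -d "$dir" ] && echo "$dir"; done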
