Unix command to capitalize first letter of file name

I have a folder "activity" which contains files like getEmployee.java, getTable.java, etc. I was wondering if there is a Unix command that could rename the files to give me GetEmployee.java, GetTable.java, and so on.
I've tried mv getEmployee.java GetEmployee.java
However, this is pretty cumbersome, as you can imagine, since I have almost 70 files. Is there a way in Unix I can do this? I usually use sed to replace text, but I don't think that works for filenames. Can someone suggest an easier way?

This is a shell script that will find *.java files under the current directory and rename them:
#!/bin/sh
find . -name "*.java" -print | gawk -F "/" '
{
    # build the new name: uppercase the first character of the basename
    new = sprintf( "%s%s", toupper( substr( $NF, 1, 1 ) ), substr( $NF, 2 ) )
    # quote the names so the command survives spaces in filenames
    cmd = sprintf( "mv \"%s\" \"%s\"", $NF, new )
    # comment out the next two lines and uncomment the printf() to see the commands
    cmd | getline ret_val
    close( cmd )
    #printf( "%s => ret_val = %s\n", cmd, ret_val"" )
} '
I saved it to a script named "alterjava", ran "chmod +x alterjava", then ran it on a directory of zero-sized files I made up for testing. You can check the commands before running them by commenting out the two cmd lines and uncommenting the printf() line.
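With those two lines commented out and the printf() uncommented, a dry run over the example files would print something like this (ret_val stays empty because nothing was executed):
mv "getEmployee.java" "GetEmployee.java" => ret_val =
mv "getTable.java" "GetTable.java" => ret_val =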

Bash can perform case conversion in parameter expansion, using ${var^} to capitalise the first character:
#!/bin/bash
for i in *.java; do mv -v "$i" "${i^}"; done
Note that this is not standard POSIX; other shells need not have this feature (bash gained it in version 4.0).
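If you are restricted to a POSIX shell, a rough equivalent (a sketch using cut and tr; assumes filenames without newlines) would be:
#!/bin/sh
for i in *.java; do
    # split the name into its first character and the remainder
    first=$(printf '%s' "$i" | cut -c1 | tr '[:lower:]' '[:upper:]')
    rest=$(printf '%s' "$i" | cut -c2-)
    mv -v "$i" "$first$rest"
done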

rename -vn 's/([a-z])(\w+\.java)/\u$1$2/' *.java
Remove -n to actually execute the command (-n only shows what would be renamed). Note that this relies on the Perl-based rename (sometimes packaged as prename or file-rename); the util-linux rename takes different arguments.

Related

Multiple find exec options: synology index command line

I want to index some files on my NAS after moving them into the correct folder.
My command would be something like:
find *.$ext -exec mv "{}" $path \; -exec synoindex -a $(echo $path)$(basename "{}") \;
The first part works: all files with the $ext extension are moved to the destination $path.
But the second part, which is supposed to index these files in their new $path folder, does not work.
This is strange because:
{} contains the right value => the complete old path of each file processed. To make sure of that, I added a third part which only does: -exec echo {} \;
Executing $(echo $path)$(basename "{}") separately, after replacing {} with one real value as an example, works => it gives the complete new path => the syntax is correct
Executing synoindex -a $(echo $path)$(basename "{}") separately, after replacing {} with one real value as an example, works => the command is correct
Thanks for any ideas.
Regards,
Your command substitutions $(echo $path) and $(basename "{}") are executed by your shell before find is executed.
And you don't need to echo the $path variable. You could execute a small shell script instead:
find . -type f -name "*.$ext" -exec sh -c '
    targetpath=$1; shift   # get the value of $path
    for file; do
        mv -i "$file" "$targetpath"
        synoindex -a "$targetpath/${file##*/}"
    done
' sh "$path" {} +
This starts find in the current directory (.), searching for regular files (-type f) whose names end with the file extension $ext (-name "*.$ext"), and executes a small shell script, passing the $path variable as the first argument to the script. The following arguments are the file paths found by find.
The parameter expansion ${file##*/} removes the longest prefix */ from the file and the result is the basename.
If your $path variable already contains a trailing slash /, then omit the / after $targetpath.
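As a quick illustration of that expansion (the path here is hypothetical):
file=/volume1/photos/IMG_0001.jpg   # a value the script might receive
echo "${file##*/}"                  # prints: IMG_0001.jpg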

Append "/" to end of directory

Completely noob question but: using ls piped to grep, I need to find files or directories that have all-capital names, and directories need to have "/" appended to indicate that they are directories. Trying to append the "/" is the only part I am stuck on. Again, I apologize for the amateur question. I currently have ls | grep [A-Z] and the expected output would be: BIRD, DOG, DOGDIR/
It's an interesting question because it's a somewhat difficult thing to accomplish with a bash one-liner.
Here's what I came up with. It doesn't seem very elegant, but I'm not sure how to improve it.
find /animals -type d -or -type f \
| grep '/[A-Z]*$' \
| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' \
| sed -e 's/, *$//'; echo
I'll break that down for you:
find /animals -type d -or -type f writes out, once per line, the directories and files it found in /animals (see below for my test-environment Dockerfile; I created /animals to match your desired output). find can't do a regex match on the name, as far as I know, so...
grep '/[A-Z]*$' filters find's output so that only paths are shown where the last part of the file or directory name, after the final /, is all uppercase.
xargs -I + bash -c '...' when you're in a shell and you want to use a "for" loop, chances are what you should be using is xargs. Learn it, know it, love it. xargs takes its input, separated by default by blanks and newlines, and runs the command you give it for each piece of input. So this is going to run a bash shell for each path that passed the grep filter. In my case, -I + will make xargs replace the literal '+' character with its current input filename. -I also makes xargs pass inputs through one at a time. For more information, see the xargs manual page.
'echo -n $(basename +)$( test -d + && echo -n /),\\ ' this is the inner bash script that will be run by xargs for each path that got through grep.
basename + cuts the directory component off the path; from your example output you don't want e.g. /animals/DOGDIR/, you want DOGDIR/. basename is the program that trims the directories for us.
test -d + && echo -n / checks whether + (remember xargs will replace it with the filename) is a directory, and if so, runs echo -n /. The -n argument to echo suppresses the newline, which is important to get the output in the CSV format you specified.
Now we can put it all together: we echo -n the output of basename +, with / appended if it's a directory, and then , appended to that. All the echos run with -n to suppress newlines and keep the output CSV-looking.
| sed -e 's/, *$//'; echo is purely for formatting. Adding , to each individual output was an easy way to get the CSV, but it leaves a final , at the end of the list. The sed invocation removes , followed by any number of spaces at the end of the output so far, i.e. the entire output from all the xargs invocations. And since we never output a newline at the end of that output, the final echo adds one.
Usually in Unix shells you probably wouldn't want CSV-style output. You'd probably instead want newline-separated output in most cases, one matching file per line, and that would be somewhat simpler to do because you wouldn't need all that faffing with -n and , to make it CSV-style. But it's a valid requirement if the need is there.
FROM debian
RUN mkdir -p /animals
WORKDIR /animals
RUN mkdir -p DOGDIR lowerdir && touch DOGDIR/DOG DOGDIR/lowerDOG2 lowerdir/BIRD
ENTRYPOINT [ "/bin/bash" ]
CMD [ "-c" , "find /animals -type d -or -type f | grep '/[A-Z]*$'| xargs -I + bash -c 'echo -n $(basename +)$( test -d + && echo -n /),\\ ' | sed -e 's/, *$//'; echo"]
$ docker run --rm test
BIRD, DOGDIR/, DOG
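For the newline-separated output mentioned above, a simpler sketch of the same pipeline (dropping the CSV bookkeeping) might be:
find /animals -type d -or -type f \
    | grep '/[A-Z]*$' \
    | xargs -I + bash -c 'echo $(basename +)$(test -d + && echo /)'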
You can start by looking at
ls -F | grep -v "[[:lower:]]"
I did not add anything for a comma-separated line, because this is the wrong method: parsing ls should be avoided! It will go wrong for filenames like
I am a terrible filename,
with newlines inside me,
and the ls command combined with grep
will only show the last line
BECAUSE THIS LINE HAS NO LOWERCASE CHARACTERS
To get the files without a pipe, you can use
shopt -s extglob
ls -dp +([[:upper:]])
shopt -u extglob
An explanation of the extglob and uppercase can be found at https://unix.stackexchange.com/a/389071/57293
When you want the output on one line, you can get into trouble with filenames that have newlines or commas in their names. You might want something like
# parsing ls, yes wrong and failing for some files
ls -dp +([[:upper:]]) | tr "\n" "," | sed 's/,$/\n/'
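If you want to avoid parsing ls entirely, here is a sketch of a find-based alternative (GNU find assumed; it matches names containing an uppercase letter and no lowercase ones, which approximates "all capitals"):
find . -maxdepth 1 -name '*[[:upper:]]*' ! -name '*[[:lower:]]*' -exec sh -c '
    for p; do
        name=${p#./}                 # strip the leading ./
        if [ -d "$p" ]; then
            printf "%s/\n" "$name"   # append / for directories
        else
            printf "%s\n" "$name"
        fi
    done
' sh {} +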

How to give output location of file in shell script?

I have a shell script that calls a Perl script and an R script.
My shell script, R.sh:
#!/bin/bash
./R.pl #calling Perl script
`perl -lane 'print $F[0]' /media/data/abc.cnv > /media/data/abc1.txt`;
#Shell script
Rscript R.r #calling R script
This is my R.pl (head):
`export path=$PATH:/media/exe_folder/bin`;
print "Enter the path to your input file:";
$base_dir ="/media/exe_folder";
chomp($CEL_dir = <STDIN>);
opendir (DIR, "$CEL_dir") or die "Couldn't open directory $CEL_dir";
$cel_files = "$CEL_dir"."/cel_files.txt";
open(CEL,">$cel_files")|| die "cannot open $file to write";
print CEL "cel_files\n";
for ( grep { /^[\w\d]/ } readdir DIR ){
print CEL "$CEL_dir"."/$_\n";
}close (CEL);
The output of the Perl script is the input for the shell script, and the shell's output is the input for the R script.
I want to run the shell script by providing the input file name and output file name, like:
./R.sh home/folder/inputfile.txt home/folder2/output.txt
If the folder contains many files, it should take only the user-defined file and process it.
Is there a way to do this?
I guess this is what you want:
#!/bin/bash

# command line parameters
_input_file=$1
_output_file=$2

# #TODO: not sure if this path is the one you intended...
_script_path=$(dirname "$0")

# sanity checks
if [[ -z "${_input_file}" ]] ||
   [[ -z "${_output_file}" ]]; then
    echo 1>&2 "usage: $0 <input file> <output file>"
    exit 1
fi

if [[ ! -r "${_input_file}" ]]; then
    echo 1>&2 "ERROR: can't find input file '${_input_file}'!"
    exit 1
fi

# process input file
# 1. with Perl script (writes to STDOUT)
# 2. post-process with Perl filter
# 3. run R script (reads from STDIN, writes to STDOUT)
perl "${_script_path}/R.pl" <"${_input_file}" | \
    perl -lane 'print $F[0]' | \
    Rscript "${_script_path}/R.r" >"${_output_file}"

exit 0
Please see the notes below on how the called scripts should behave.
NOTE: I don't quite understand why you need to post-process the output of the Perl script with a Perl filter. Why not integrate it directly into the Perl script itself?
BONUS CODE: this is how you would write the main loop in R.pl to act as a proper filter, i.e. reading lines from STDIN and writing the result to STDOUT. You can use the same approach in other languages, too, e.g. R.
#!/usr/bin/perl
use strict;
use warnings;

# read lines from STDIN
while (<STDIN>) {
    chomp;
    # add your processing code here that does something with $_, i.e. the line
    # EXAMPLE: upper case the first letter of every word on the line
    s/\b([[:lower:]])/\u$1/g;
    # write result to STDOUT
    print "$_\n";
}
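For example, assuming that filter is saved as R.pl, a quick sanity check could look like:
$ echo "hello world" | perl R.pl
Hello World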

Shell script to test delimiter characters in file names

Thanks for reading and for your suggestions. I'm moving files into their respective directories; only some of the files use _ and some use - (underscore and hyphen) as delimiters. Is there a way to test for the different delimiters?
E.g.:
ParentDir
1897/
1898/
1994summer/
file-1897-001.txt
file-1897-002.txt
file-1898-001.txt
file-1898-002.txt
file_1994summer_001.txt
file_1994summer_002.txt
I've been processing with the following (verbose so I can understand it) shell script:
#!/bin/sh
for f in *.txt
do
    base=${f%.txt}
    echo "base fileName is $base"
    fileName=`echo "$base" | cut -f 2 -d _`
    echo "truncated fileName is $fileName"
    dir=$fileName
    echo "Directory is $dir"
    mv -v "$f" "$dir"
    sleep 1
done
When using the cut command, I'd like to be able to differentiate on the delimiter. Is that possible? Thanks in advance for your time & suggestions.
Cheers!
case "${fName}" in
*_* ) underscore_funnyFace_processing "${fName}" ;;
*-* ) hyphen_funnyFace_process "${fName}" ;;
* ) all_other_processing "${fName}" ;;
esac
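For instance, a minimal sketch that plugs this into the loop from the question (the funnyFace names above are placeholders) and picks the cut delimiter per file:
#!/bin/sh
for f in *.txt
do
    base=${f%.txt}
    case "$base" in
        *_* ) dir=$(echo "$base" | cut -f 2 -d '_') ;;
        *-* ) dir=$(echo "$base" | cut -f 2 -d '-') ;;
        *   ) echo "no delimiter in $f, skipping"; continue ;;
    esac
    mv -v "$f" "$dir"
done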
I'm almost sure the Bourne shell supported case processing; I don't have access to one to test with.
The difference between Bourne/bash/ksh/zsh would be in the shell wildcard patterns that each shell has as an extension to the basic patterns the Bourne shell supported.
I hope this helps.

Unix - Nested Loop. One loop to untar then another to inspect each file in directory

I'm trying to loop round a load of tar files and then move the extracted files into a new folder, inspect them and delete them before moving onto the next tar.
Code is below:
for i in *
do
tar -zxvf $i
mv *TTF* encoded
cd encoded
for j in *
do
echo $j
done
rm -f *TTF*
cd ..
done
When it gets to the nested loop, it asks if I want to display all x possibilities. Clearly something is going wrong. Any ideas?
Did you write this in a text editor, then try to paste it into a terminal, by any chance? Did you use tabs to indent the lines? If so, try changing tabs to spaces, or just save the file as a shell script and then run it.
(The tab key invokes completion, which displays the "display all x possibilities" message if there are lots of completions that match.)
Run the 'cd' command and its following actions in a sub-shell, which means you don't have to do 'cd ..'. Also, it would probably be better to extract each tar file directly in the sub-directory.
for i in *.tar.gz
do
    mkdir encoded
    (
        cd encoded
        tar -zxvf "../$i"
        for j in *
        do
            echo "$j"
        done
    )
    rm -fr encoded
done
This assumes only that the tar file doesn't contain any names with '..' in the paths of the files, which is very uncommon.
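As an aside, if the inner loop only needs to print the file names, tar can list an archive's contents without extracting anything:
for i in *.tar.gz
do
    tar -tzf "$i"   # -t lists the archive instead of extracting
done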
echo "Enter no of terms" read count for i in $(seq 1 $count) do t=expr $i - 1 for j in $(seq $t -1 0) do echo -n " " done j=expr $count + 1 x=expr $j - $i for k in $(seq 1 $x) do echo -n "* " done echo ""
