Get last field using awk substr

I am trying to use awk to get the name of a file given the absolute path to the file.
For example, when given the input path /home/parent/child/filename I would like to get filename
I have tried:
awk -F "/" '{print $5}' input
which works perfectly.
However, I am hard coding $5 which would be incorrect if my input has the following structure:
/home/parent/child1/child2/filename
So a generic solution requires always taking the last field (which will be the filename).
Is there a simple way to do this with the awk substr function?

Use the fact that awk splits each line into fields based on a field separator that you can define. Setting the field separator to /, you can say:
awk -F "/" '{print $NF}' input
Since NF holds the number of fields in the current record, printing $NF prints the last one.
So given a file like this:
/home/parent/child1/child2/child3/filename
/home/parent/child1/child2/filename
/home/parent/child1/filename
This would be the output:
$ awk -F"/" '{print $NF}' file
filename
filename
filename
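
Not part of the original answer, but worth noting: if a path can end with a trailing slash, $NF would be empty. A small variant (an assumption on my part, not from the question) falls back to the previous field:
$ echo /home/parent/child/ | awk -F "/" '{print ($NF == "" ? $(NF-1) : $NF)}'
child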

In this case it is better to use basename instead of awk:
$ basename /home/parent/child1/child2/filename
filename
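
basename can also strip a known suffix, which awk alone won't do as tidily; for example:
$ basename /home/parent/child/filename.txt .txt
filename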

If you're open to a Perl solution, here is one similar to fedorqui's awk solution:
perl -F/ -lane 'print $F[-1]' input
-F/ specifies / as the field separator
$F[-1] is the last element of the @F autosplit array

Another option is to use bash parameter substitution.
$ foo="/home/parent/child/filename"
$ echo ${foo##*/}
filename
$ foo="/home/parent/child/child2/filename"
$ echo ${foo##*/}
filename
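
The mirror-image expansion gives the directory part, should you need both halves:
$ echo ${foo%/*}
/home/parent/child/child2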

Like 5 years late, I know. Thanks for all the proposals; I used to do this the following way:
$ echo /home/parent/child1/child2/filename | rev | cut -d '/' -f1 | rev
filename
Glad to see there are better ways.

This should be a comment on the basename answer, but I don't have enough points.
If you do not use double quotes, basename will not work on paths that contain space characters:
$ basename /home/foo/bar foo/bar.png
bar
OK with quotes:
$ basename "/home/foo/bar foo/bar.png"
bar.png
File example:
$ cat a
/home/parent/child 1/child 2/child 3/filename1
/home/parent/child 1/child2/filename2
/home/parent/child1/filename3
$ while read b ; do basename "$b" ; done < a
filename1
filename2
filename3
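
If the paths may also carry leading whitespace or backslashes, a slightly more defensive version of the same loop is the standard IFS= read -r idiom:
$ while IFS= read -r b; do basename "$b"; done < a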

I know I'm like 3 years late on this, but...
You should consider parameter expansion; it's built in and faster.
If your input is in a variable, say $var1, just do ${var1##*/}. Look below:
$ var1='/home/parent/child1/filename'
$ echo ${var1##*/}
filename
$ var1='/home/parent/child1/child2/filename'
$ echo ${var1##*/}
filename
$ var1='/home/parent/child1/child2/child3/filename'
$ echo ${var1##*/}
filename

You can skip all of that complex regex:
echo '/home/parent/child1/child2/filename' |
mawk '$!_=$-_=$NF' FS='[/]'
filename
2nd-to-last:
mawk '$!--NF=$NF' FS='/'
child2
3rd-last field:
echo '/home/parent/child1/child2/filename' |
mawk '$!--NF=$--NF' FS='[/]'
child1
4th-last:
echo '/home/parent/child000/child00/child0/child1/child2/filename' |
mawk '$!--NF=$(--NF-!-FS)' FS='/'
child0
echo '/home/parent/child1/child2/filename' |
mawk '$!--NF=$(--NF-!-FS)' FS='/'
parent
Major caveat: gawk/nawk have a slight discrepancy with mawk regarding how they track multiple, potentially conflicting decrements to NF, so other than the first solution for the last field, the rest are for now only applicable to mawk-1/2.

Just realized it's much, much cleaner this way in mawk/gawk/nawk:
echo '/home/parent/child1/child2/filename' |
awk ++NF FS='.+/' OFS=    # updated such that root "/" still gets printed
filename
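
A more conventional spelling of the same greedy-FS idea (my rephrasing, not from the original answer), which also prints a bare "/" unchanged:
$ echo '/home/parent/child1/child2/filename' | awk -F '.+/' '{print $NF}'
filename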

You can also use:
sed -n 's/.*\/\([^\/]\{1,\}\)$/\1/p'
or
sed -n 's/.*\/\([^\/]*\)$/\1/p'
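
For example, on a sample path:
$ echo /home/parent/child1/child2/filename | sed -n 's/.*\/\([^\/]*\)$/\1/p'
filename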

Related

Split line with multiple delimiters in Unix

I have the below lines in a file
id=1234,name=abcd,age=76
id=4323,name=asdasd,age=43
except that the real file has many more tag=value fields on each line.
I want the final output to be like
id,name,age
1234,abcd,76
4323,asdasd,43
I want all the values to the left of each = to come out comma-separated as the first row, and the values to the right of each = to follow below, one comma-separated row per input line.
Is there a way to do this with awk or sed, or is a for loop required?
I am working on Solaris 10; the local sed is not GNU sed (so there is no -r option, nor -E).
$ cat tst.awk
BEGIN { FS="[,=]"; OFS="," }
NR==1 {
    for (i=1; i<NF; i+=2) {
        printf "%s%s", $i, (i<(NF-1) ? OFS : ORS)
    }
}
{
    for (i=2; i<=NF; i+=2) {
        printf "%s%s", $i, (i<NF ? OFS : ORS)
    }
}
$ awk -f tst.awk file
id,name,age
1234,abcd,76
4323,asdasd,43
Assuming they don't really exist in your input, I removed the ...s etc. that were cluttering up your example before running the above. If that stuff really does exist in your input, clarify how you want the text "(n number of fields)" to be identified and removed (string match? position on line? something else?).
EDIT: since you like the brevity of the cat|head|sed; cat|sed approach posted in another answer, here's the equivalent in awk:
$ awk 'NR==1{h=$0;gsub(/=[^,]+/,"",h);print h} {gsub(/[^,]+=/,"")} 1' file
id,name,age
1234,abcd,76
4323,asdasd,43
FILE=yourfile.txt
# first line (header)
cat "$FILE" | head -n 1 | sed -r "s/=[^,]+//g"
# other lines (data)
cat "$FILE" | sed -r "s/[^,]+=//g"
sed -r '1 s/^/id,name,age\n/;s/id=|name=|age=//g' my_file
edit: or use
sed '1 s/^/id,name,age\n/;s/id=\|name=\|age=//g'
output
id,name,age
1234,abcd,76 ...(n number of fields)
4323,asdasd,43...
The following simply combines the best of the sed-based answers so far, showing you can have your cake and eat it too. If your sed does not support the -r option, chances are that -E will do the trick; failing that, one can replace R+ with RR*, where R is [^,].
sed -r '1s/=[^,]+//g; s/[^,]+=//g'
(That is, the portable incantation would be:
sed "1s/=[^,][^,]*//g; s/[^,][^,]*=//g"
)
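
Run against the sample input, either form produces:
$ sed "1s/=[^,][^,]*//g; s/[^,][^,]*=//g" file
id,name,age
1234,abcd,76
4323,asdasd,43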

In UNIX Terminal How to get a part of filename in a folder?

I have a list of n files in a folder that have some format.
Eg: ABCD.EXXXX.ZZZZ.ZZZZZ.txt
In the above file, ABCD.E is common to all the files and ZZZZ.ZZZZZ is an arbitrary user string; I need to extract XXXX from all the files and display the distinct XXXX values to the user. Is there any way to do so? Thanks in advance.
Use ls -1 to make a list of the relevant files. Pipe it into sed to strip the beginning 'ABCD.E'. Then pipe it into sed again to remove everything after the first '.'
ls -1 ABCD\.E*\.txt | sed 's/^ABCD\.E//' | sed 's/\..*//'
Alternatively, if you want a little more control of the output you can do the second bit with awk
ls -1 ABCD\.E*\.txt | sed 's/^ABCD\.E//' | awk 'BEGIN{FS="."}{print "value =", $1, "user=", $2"."$3}'
awk -F"."'{print $2}' filename
You can try printing $1, $2 ,$3... to get more understanding of command.
You can use the bash/ksh parameter substitution operators # and % for this from inside the shell.
function get_filename_section {
    typeset f=${1:?}
    typeset r=${f#ABCD.E}
    print ${r%.ZZZZ.ZZZZZ.txt}
}
Testing:
[[ $( get_filename_section ABCD.EXXXX.ZZZZ.ZZZZZ.txt ) == XXXX ]] &&
echo ok || echo no
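
To get the distinct XXXX values the question asks for, one sketch (assuming a ksh-style shell with the function above defined) pipes every matching file name through the function and deduplicates with sort -u:
$ for f in ABCD.E*.txt; do get_filename_section "$f"; done | sort -u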

How to change the field sequence in cut command in unix

I want to print the fields in a specific format.
Input :
col1|col2|col3|col4
I used cat file | cut -d '|' -f 3,1,4
output :
col1|col3|col4
But my expected output is:
col3|col1|col4
Can anyone help me with this?
From man cut:
Selected input is written in the same order that it is read, and is written exactly once
You should do:
$ awk -F'|' -vOFS='|' '{print $3,$1,$4}' <<< "col1|col2|col3|col4"
col3|col1|col4
Even though awk is good, here is a perl solution:
perl -F"\|" -ane 'print join "|",@F[2,0,3]'
tested:
> echo "col1|col2|col3|col4" | perl -F"\|" -ane 'print join "|",@F[2,0,3]'
col3|col1|col4
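
If the desired order changes often, a hypothetical generalization of the awk answer reads the field order from a variable (the order variable and its space-separated format are my invention, not from either answer):
$ order="3 1 4"
$ awk -F'|' -v OFS='|' -v order="$order" '{n=split(order,idx," "); for(i=1;i<=n;i++) printf "%s%s", $idx[i], (i<n?OFS:ORS)}' <<< "col1|col2|col3|col4"
col3|col1|col4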

Is there a way to ignore header lines in a UNIX sort?

I have a fixed-width-field file which I'm trying to sort using the UNIX (Cygwin, in my case) sort utility.
The problem is there is a two-line header at the top of the file which is being sorted to the bottom of the file (as each header line begins with a colon).
Is there a way to tell sort either to "pass the first two lines through unsorted" or to specify an ordering that sorts the colon lines to the top? The remaining lines always start with a 6-digit numeric (which is actually the key I'm sorting on), if that helps.
Example:
:0:12345
:1:6:2:3:8:4:2
010005TSTDOG_FOOD01
500123TSTMY_RADAR00
222334NOTALINEOUT01
477821USASHUTTLES21
325611LVEANOTHERS00
should sort to:
:0:12345
:1:6:2:3:8:4:2
010005TSTDOG_FOOD01
222334NOTALINEOUT01
325611LVEANOTHERS00
477821USASHUTTLES21
500123TSTMY_RADAR00
(head -n 2 <file> && tail -n +3 <file> | sort) > newfile
The parentheses create a subshell, wrapping up the stdout so you can pipe it or redirect it as if it had come from a single command.
If you don't mind using awk, you can take advantage of awk's built-in pipe abilities
eg.
extract_data | awk 'NR<3{print $0;next}{print $0| "sort -r"}'
This prints the first two lines verbatim and pipes the rest through sort.
Note that this has the very specific advantage of being able to selectively sort parts of a piped input. All the other methods suggested will only sort plain files, which can be read multiple times; this works on anything.
In simple cases, sed can do the job elegantly:
your_script | (sed -u 1q; sort)
or equivalently,
cat your_data | (sed -u 1q; sort)
The key is in the 1q -- print first line (header) and quit (leaving the rest of the input to sort).
For the example given, 2q will do the trick.
The -u switch (unbuffered) is required for those seds (notably, GNU's) that would otherwise read the input in chunks, thereby consuming data that you want to go through sort instead.
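
Applied to the question's two-line header, 2q gives exactly the expected result:
$ cat file | (sed -u 2q; sort)
:0:12345
:1:6:2:3:8:4:2
010005TSTDOG_FOOD01
222334NOTALINEOUT01
325611LVEANOTHERS00
477821USASHUTTLES21
500123TSTMY_RADAR00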
Here is a version that works on piped data:
(read -r; printf "%s\n" "$REPLY"; sort)
If your header has multiple lines:
(for i in $(seq $HEADER_ROWS); do read -r; printf "%s\n" "$REPLY"; done; sort)
This solution is from here
You can use tail -n +3 <file> | sort ... (tail will output the file contents from the 3rd line).
head -2 <your_file> && nawk 'NR>2' <your_file> | sort
example:
> cat temp
10
8
1
2
3
4
5
> head -2 temp && nawk 'NR>2' temp | sort -r
10
8
5
4
3
2
1
It only takes 2 lines of code...
head -1 test.txt > a.tmp;
tail -n+2 test.txt | sort -n >> a.tmp;
For numeric data, -n is required; for an alphabetical sort, it is not.
Example file:
$ cat test.txt
header
8
5
100
1
-1
Result:
$ cat a.tmp
header
-1
1
5
8
100
So here's a bash function whose arguments are exactly like sort's, supporting both files and pipes.
function skip_header_sort() {
    if [[ $# -gt 0 ]] && [[ -f ${@: -1} ]]; then
        local file=${@: -1}
        set -- "${@:1:$(($#-1))}"
    fi
    awk -vsargs="$*" 'NR<2{print; next}{print | "sort "sargs}' $file
}
How it works: this line checks whether there is at least one argument and whether the last argument is a file.
if [[ $# -gt 0 ]] && [[ -f ${@: -1} ]]; then
This saves the file to a separate variable, since we're about to erase the last argument.
local file=${@: -1}
Here we remove the last argument, since we don't want to pass it as a sort argument.
set -- "${@:1:$(($#-1))}"
Finally, we do the awk part, passing the arguments (minus the last argument if it was the file) to sort inside awk. This was originally suggested by Dave, and modified to take sort arguments. We rely on the fact that $file will be empty if we're piping, and thus ignored.
awk -vsargs="$*" 'NR<2{print; next}{print | "sort "sargs}' $file
Example usage with a comma separated file.
$ cat /tmp/test
A,B,C
0,1,2
1,2,0
2,0,1
# SORT NUMERICALLY SECOND COLUMN
$ skip_header_sort -t, -nk2 /tmp/test
A,B,C
2,0,1
0,1,2
1,2,0
# SORT REVERSE NUMERICALLY THIRD COLUMN
$ cat /tmp/test | skip_header_sort -t, -nrk3
A,B,C
0,1,2
2,0,1
1,2,0
Here's a bash shell function derived from the other answers. It handles both files and pipes. The first argument is the file name or '-' for stdin; the remaining arguments are passed to sort. A couple of examples:
$ hsort myfile.txt
$ head -n 100 myfile.txt | hsort -
$ hsort myfile.txt -k 2,2 | head -n 20 | hsort - -r
The shell function:
hsort ()
{
    if [ "$1" == "-h" ]; then
        echo "Sort a file or standard input, treating the first line as a header.";
        echo "The first argument is the file or '-' for standard input. Additional";
        echo "arguments to sort follow the first argument, including other files.";
        echo "File syntax : $ hsort file [sort-options] [file...]";
        echo "STDIN syntax: $ hsort - [sort-options] [file...]";
        return 0;
    elif [ -f "$1" ]; then
        local file=$1;
        shift;
        (head -n 1 $file && tail -n +2 $file | sort $*);
    elif [ "$1" == "-" ]; then
        shift;
        (read -r; printf "%s\n" "$REPLY"; sort $*);
    else
        >&2 echo "Error. File not found: $1";
        >&2 echo "Use either 'hsort <file> [sort-options]' or 'hsort - [sort-options]'";
        return 1;
    fi
}
This is the same as Ian Sherbin's answer, but my implementation is:
cut -d'|' -f3,4,7 $arg1 | uniq > filetmp.tc
head -1 filetmp.tc > file.tc;
tail -n+2 filetmp.tc | sort -t"|" -k2,2 >> file.tc;
Another simple variation on all the others, reading the file only once:
HEADER_LINES=2
(head -n $HEADER_LINES; sort) < data-file.dat
This works because both commands share the same seekable input: head consumes the header lines and leaves the offset just past them, and sort reads the remainder. (It is not reliable on a pipe, where head may read ahead.)
With Python:
import sys

HEADER_ROWS = 2
for _ in range(HEADER_ROWS):
    sys.stdout.write(next(sys.stdin))
for row in sorted(sys.stdin):
    sys.stdout.write(row)
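
A hypothetical invocation, assuming the script above is saved as hsort.py:
$ python3 hsort.py < data-file.dat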
cat file_name.txt | sed 1d | sort
Note that sed 1d deletes the header line entirely rather than keeping it at the top, so use this only if you want the header removed.

How to sort characters in a string?

I would like to sort the characters in a string.
E.g.
echo cba | sort-command
abc
Is there a command that will allow me to do this or will I have to write an awk script to iterate over the string and sort it?
echo cba | grep -o . | sort |tr -d "\n"
Please find the following useful methods:
Shell
Sort string based on its characters:
echo cba | grep -o . | sort | tr -d "\n"
String separated by spaces:
echo 'dd aa cc bb' | tr " " "\n" | sort | tr "\n" " "
Perl
print (join "", sort split //,$_)
Ruby
ruby -e 'puts "dd aa cc bb".split(/\s+/).sort'
Bash
With bash you have to enumerate each character of the string, in general something like:
str="dd aa cc bb"
for (( i = 0; i < ${#str}; i++ )); do echo "${str:$i:1}"; done
For sorting an array, please check: How to sort an array in bash?
This is cheating (because it uses Perl), but works. :-P
echo cba | perl -pe 'chomp; $_ = join "", sort split //'
Another perl one-liner:
$ echo cba | perl -F -lane 'print sort @F'
abc
$ # for reverse order
$ echo xyz | perl -F -lane 'print reverse sort @F'
zyx
$ # or
$ echo xyz | perl -F -lane 'print sort {$b cmp $a} @F'
zyx
This adds a newline to the output as well, courtesy of the -l option.
See Command Switches for documentation on all the options.
The input is split character-wise and saved in the @F array; the sorted @F is then printed.
This will also work line-wise for a given input file:
$ cat ip.txt
idea
cold
spare
umbrella
$ perl -F -lane 'print sort @F' ip.txt
adei
cdlo
aeprs
abellmru
This would have been more appropriate as a comment on one of the grep -o . solutions (my reputation's not quite up to that low bar, alas), but I thought it worth mentioning that separating letters can be done more efficiently within the shell. It's always worth avoiding code, but this letsep function is pretty small:
letsep ()
{
    INWORD="$1"
    while [ "$INWORD" ]
    do
        echo ${INWORD:0:1}
        INWORD=${INWORD#?}
    done
}
. . . and outputs one letter per line for an input string of arbitrary length. For example, once letsep is defined, populating an array FLETRS with the letters of a string contained in variable FRED could be done (assuming contemporary bash) as:
readarray -t FLETRS < <(letsep $FRED)
. . . which for word-size strings runs about twice as fast as the equivalent :
readarray -t FLETRS < <(echo $FRED | grep -o .)
Whether this is worth setting up depends on the application. I've only measured this crudely, but the slower procedural code seems to maintain an advantage over the context switch up to ~60 chars (grep is obviously more efficient, but loading it is relatively expensive). If the above operation is taking place in one or more steps of a loop over an indeterminate number of executions, the difference in efficiency can add up (at which point some might argue for switching tools and rewriting regardless, but that's another set of tradeoffs).
