To display student records - unix

Contents of the sample input file (input.txt), starting from the following line:
Name|Class|School Name
Deepu|First|Meridian
Neethu|Second|Meridian
Sethu|First|DAV
Theekshana|Second|DAV
Teju|First|Sangamithra
I need to output the details of the student with the school name Sangamithra
in the format below. I am new to Unix, so I need help.
Desired output:
Sangamithra|First|Teju

I think you are looking for something like this:
awk -F\| '{print $3"|"$2"|"$1}' filename
School Name|Class|Name
Meridian|First|Deepu
Meridian|Second|Neethu
DAV|First|Sethu
DAV|Second|Theekshana
Sangamithra|First|Teju

If you're just interested in finding the matching line, that can be done with grep alone:
grep "Sangamithra" input.txt
If you want the fields reordered to match your desired output (school name first), you can pipe through awk:
grep "Sangamithra" input.txt | awk -F "|" '{print $3"|"$2"|"$1}'

Related

Unix command to parse string

I'm trying to figure out a command to parse the following file content:
Operation=GET
Type=HOME
Counters=CacheHit=0,Exception=1,Validated=0
I need to extract Exception=1 into its own line. I'm fiddling with awk, sed and grep but not making much progress. Does anyone have any tips on using any unix command to perform this?
Thanks
Since your file is close to bash syntax, there is a fun little trick you can do to make bash itself parse the file. First, use some program like tr to transform the input into something bash can parse, and then "source" that, which will create shell variables you can expand later to get the values.
source <(tr , $'\n' < file_name_goes_here)
echo $Exception
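Note that echo $Exception prints just the value (1). If you need the exact Exception=1 line from the question, you can echo the name back in once the file has been sourced as above:
echo "Exception=$Exception"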
There are many ways to do this. Here is one, assuming the file is called "file.txt": grab the line you want, replace everything from the start of the line up to Except with just Except, then pull out the first field using comma as the delimiter.
$ grep Exception file.txt | sed 's/.*Except/Except/g' | cut -d, -f 1
Exception=1
If you wanted to use gawk:
$ grep Exception file.txt | sed 's/.*Except/Except/g' | gawk -F, '{print $1}'
Exception=1
or just using grep and sed:
$ grep Exception file.txt | sed 's/.*\(Exception=[0-9]*\).*/\1/g'
Exception=1
or, as @sheltter reminded me:
$ egrep -o "Exception=[0-9]+" file.txt
Exception=1
No need to use a mix of commands.
awk -F, 'NR==2 {print RS$1}' RS="Exception" file
Exception=1
Here we split the input by the keyword we are looking for, RS="Exception".
If there are two records (which only happens when the keyword is found), we
print the first field of the second record, split on commas, prefixed with the record separator.
PS: this only works if there is a single Exception field.
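If a line could contain more than one field of interest, a small awk loop over the =/, separated fields avoids that single-occurrence limitation; a sketch, assuming the same file.txt layout:
awk -F'[=,]' '{for (i = 1; i < NF; i++) if ($i == "Exception") print $i "=" $(i+1)}' file.txt
Each field named Exception is printed together with the value that follows it.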

delete first and last hyphen character from each column

I am trying to remove the first and last characters from two separate columns before they are saved to a file. The characters I need to remove are hyphens. Because there are also hyphens inside the values, I can't simply remove all of them. Is there a more effective way to use awk for this?
My current attempt is something like this command:
cat file.txt | awk -F '|' '{print $2, $4}' | sed 's/.//;s/.$//' > newfile.txt
file example
1-|-40939-23-|-column-3-|-column-4-|
2-|-9832651-23-|-column-3-|-column-4-|
current output
40939-23- -column-4
9832651-23- -column-4
desired output
40939-23 column-4
9832651-23 column-4
$ awk -F'-[|](-|$)' '{print $2, $4}' file
40939-23 column-4
9832651-23 column-4
Could you please try the following and let me know if it helps.
awk -F"|" '{gsub(/^-|-$/,"",$2);gsub(/^-|-$/,"",$(NF-1));print $2,$(NF-1)}' Input_file
Second solution: using fixed field numbers, assuming your Input_file always has the same layout.
awk 'BEGIN{FS="[-|]";OFS="-"}{print $4 OFS $5 " " $12 OFS $13}' Input_file
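If you would rather keep your original field numbers ($2 and $4) and the output file, the same gsub trimming can be applied to exactly those two fields; a minimal sketch, assuming the sample file.txt above:
awk -F'|' '{gsub(/^-|-$/, "", $2); gsub(/^-|-$/, "", $4); print $2, $4}' file.txt > newfile.txt
This strips only a leading or trailing hyphen from each of the two columns, leaving the hyphens inside the values alone.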

Unix Awk Command

I am new to the awk command. I know it can list lines of a text file based on a condition, but I have no idea how to list the fields when there is a "," between the values: how do I make awk count the text before the "," as $1?
Also, the email column won't show for some reason. I am thinking maybe I should include the ","? I am not sure how to solve the problem, or even what the problem is.
For example, if I want to show customerid and customersname, I will use:
awk '{print $1,$2}'
Customerid, customersname, email
12312322, MIKE, example#gmail.com
51231221, CALVIN, example2#gmail.com
91234232, LISA, example3#gmail.com
12359432, DICK, example4#gmail.com
94123432, ORAN, example5#gmail.com
63242333, KEVIN, example6#gmail.com
You want to use the comma as the separator? Use -F like this:
awk -F, '{print $1,$2}'
If you want comma and spaces as separator you can use a regex:
awk -F',[[:space:]]*' '{print $1,$2}'
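That comma-and-space separator also explains the missing email: once the separator is set, the email is simply the third field; a small sketch, assuming the file is called input.txt:
awk -F',[[:space:]]*' '{print $1, $3}' input.txt
This prints the id and the email column (Customerid email, 12312322 example#gmail.com, and so on).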
I'm not sure whether I understood your question properly. You can specify the input field separator using the -F command line option:
awk -F, '{print $1, $2}' your.csv
Output:
Customerid customersname
12312322 MIKE
51231221 CALVIN
91234232 LISA
12359432 DICK
94123432 ORAN
63242333 KEVIN
simply using FS:
awk 'BEGIN { FS="," } {print $1,$2}'
from man awk:
7. Builtin-variables
The following variables are built-in and initialized before program execution.
...
FS splits records into fields as a regular expression.
...
Here is the code needed
awk -F "," '{print $1,$2}' input.txt
Output:
Customerid customersname
12312322 MIKE
51231221 CALVIN
91234232 LISA
12359432 DICK
94123432 ORAN
63242333 KEVIN
Explanation:
-F = Field separator
"," = using comma because columns are separated by ,
'{print $1,$2}' = display first and second column
input.txt = the file you want to pass
Hope it helps.

Splitting unix output

I'm trying to extract an address from a file.
grep keyword /path/to/file
is how I'm finding the line of code I want. The output is something like
var=http://address
Is there a way I can get only the part directly after the =, i.e. http://address, considering that the keyword I'm grepping for appears in both the var and the http://address parts?
grep keyword /path/to/file | cut -d= -f2-
Just pipe to cut:
grep keyword /path/to/file | cut -d '=' -f 2
You can avoid the needless pipes:
awk -F= '/keyword/{print $2}' /path/to/file
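If you want to avoid external commands entirely, the same split can be done in plain bash with parameter expansion; a minimal sketch, where keyword and /path/to/file are the placeholders from the question:
while IFS= read -r line; do
  case $line in
    *keyword*) printf '%s\n' "${line#*=}" ;;  # strip everything up to the first =
  esac
done < /path/to/file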

Forcing the order of output fields from cut command

I want to do something like this:
cat abcd.txt | cut -f 2,1
and I want the order to be 2 and then 1 in the output. On the machine I am testing on (FreeBSD 6), this is not happening (it's printing in 1,2 order). Can you tell me how to do this?
I know I can always write a shell script to do this reversing, but I am looking for something using the 'cut' command options.
I think I am using version 5.2.1 of coreutils containing cut.
This can't be done using cut. According to the man page:
Selected input is written in the same order that it is read, and is
written exactly once.
Patching cut has been proposed many times, but even complete patches have been rejected.
Instead, you can do it using awk, like this:
awk '{print($2,"\t",$1)}' abcd.txt
Replace the \t with whatever you're using as field separator.
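If you want the output to stay tab-delimited, the way cut would print it, you can set OFS to a tab as well; a minimal sketch, assuming abcd.txt is tab-separated:
awk 'BEGIN { FS = OFS = "\t" } { print $2, $1 }' abcd.txt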
Lars' answer was great, but I found an even better one. The issue with his is that the default field separator treats \t\t as a single separator, so empty columns are lost. To fix this, use the following:
awk -v OFS="  " -F"\t" '{print $2, $1}' abcd.txt
Where:
-F"\t" is what to split on exactly (tabs).
-v OFS="  " is what to separate the output fields with (two spaces).
Example:
printf 'A\tB\t\tD\n' | awk -v OFS="  " -F"\t" '{print $2, $4, $1, $3}'
This outputs:
B  D  A
