BASH: Progress Bar - CentOS 6

I want to write a little progress bar for another script. On my Mac it works perfectly, but on my CentOS 6 server it doesn't. I get the error:
sleep: invalid time interval „1\r“
„sleep --help“ for more information
Here is my script:
echo -ne "[*###############]\r"
sleep 1
echo -ne "[#*##############]\r"
sleep 1
echo -ne "[##*#############]\r"
sleep 1
echo -ne "[###*############]\r"
sleep 1
echo -ne "[####*###########]\r"
sleep 1
echo -ne "[#####*##########]\r"
sleep 1
echo -ne "[######*#########]\r"
sleep 1
echo -ne "[#######*########]\r"
sleep 1
echo -ne "[########*#######]\r"
sleep 1
echo -ne "[#########*######]\r"
sleep 1
echo -ne "[##########*#####]\r"
sleep 1
echo -ne "[###########*####]\r"
sleep 1
echo -ne "[############*###]\r"
sleep 1
echo -ne "[#############*##]\r"
sleep 1
echo -ne "[##############*#]\r"
sleep 1
echo -ne "[###############]\r"

Related

Compare strings from a file and group results using shell/bash

I have a file like below:
h1 a 1
h2 a 1
h1 b 2
h2 b 2
h1 c 3
h2 c 3
h1 c1 3
h2 c1 3
h1 c2 3
h2 c2 3
I need output like:
2 a 1
2 b 2
6 c 3
I have tried with bash, but somehow it's not giving me the expected results.
cat sample.log | awk '{print $2 , $3}' | sort | uniq -c
2
2 a 1
2 b 2
2 c 3
2 c1 3
2 c2 3
With the below I am able to get the c* results, but a and b are missing.
cat sample.log | awk '$2="c" {print $2 , $3}' | sort -n | uniq -c | sort -n | tail -1
6 c 3
You may use this gnu-awk:
awk '{ ch = substr($2, 1, 1); ++freq[ch OFS $3] } END {
  PROCINFO["sorted_in"] = "@ind_str_asc"; for (i in freq) print freq[i], i }' file
2 a 1
2 b 2
6 c 3
1st solution: Could you please try the following.
awk '{sub(/[0-9]+/,"",$2);a[$2 OFS $3]++} END{for(i in a){print a[i],i}}' Input_file
Explanation: A detailed explanation of the above.
awk ' ##Start the awk program here.
{
  sub(/[0-9]+/,"",$2) ##Strip digits from the 2nd field.
  a[$2 OFS $3]++ ##Index array a by the 2nd and 3rd fields and increment its occurrence count.
}
END{
  for(i in a){ ##Loop over all indexes of array a.
    print a[i],i ##Print the count a[i] followed by the index i.
  }
}
' Input_file ##Mention the Input_file name here.
2nd solution: In case the OP needs output in the same sequence as Input_file, then try the following.
awk '
{
  sub(/[0-9]+/,"",$2)
}
!a[$2 OFS $3]++{
  b[++count]=$2 OFS $3
}
{
  ++c[$2 OFS $3]
}
END{
  for(i=1;i<=count;i++){
    print c[b[i]],b[i]
  }
}
' Input_file
Without awk:
$ sed -E 's/[^ ]+ (.).* /\1 /' file | sort | uniq -c
2 a 1
2 b 2
6 c 3
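For completeness, the OP's original sort | uniq -c pipeline also works once the second field is normalized to its first character; a sketch of the same idea as the answers above:
awk '{print substr($2, 1, 1), $3}' sample.log | sort | uniq -c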

Count number of not null rows column wise in a txt file in UNIX

I am trying to count the number of non-null rows for every column in a txt file. I am able to count the non-null rows in each column individually, but I am trying to loop over them all together:
awk - F "|" '$1!=""{N++} print N'
Here is a look at my data
A | B | C | D | E
1 | 2 | 0 | 8 |
5 | 3 | 6 | | 4
| | 8 | |
| 7 | 8 | |
8 | 9 | 2 | | 4
I want the result to be like :
Column A: 3
Column B: 4
Column C: 5
Column D: 1
Column E: 2
Your attempt is not working. Remove the space in - F, and call print N at the end inside an END block:
awk -F "|" '$1!=""{N++} END {print N}' input.txt
Note that this command will also count lines that contain some text but are missing a |.
An alternative would be
grep -cE "[^|]+\|" input.txt
If you want to check all columns of all lines, instead of a particular column:
awk -F'|' '{ for (i = 1; i <= NF; i++) if ($i != "") n++ } END { print n }' input.txt
For each line, loop over every |-delimited field in that line, incrementing a counter if it's not empty. Finally print the count at the end.
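To get the per-column breakdown the question actually asks for, keep one counter per column; a sketch, assuming the header row and | layout shown above:
awk -F'|' '
NR == 1 { nf = NF; for (i = 1; i <= NF; i++) { gsub(/[[:space:]]/, "", $i); name[i] = $i }; next }  # remember column names from the header
{ for (i = 1; i <= nf; i++) { f = $i; gsub(/[[:space:]]/, "", f); if (f != "") n[i]++ } }  # count non-empty cells per column
END { for (i = 1; i <= nf; i++) print "Column " name[i] ": " n[i] + 0 }
' input.txt
On the sample data this prints Column A: 3 through Column E: 2.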

Unix count lines starting with same number

I have a text corpus and have already sorted it by frequency:
tr ' ' '\n' < corpus.txt | sort | uniq -c | sort -nr
Now I want to count up all lines that start with the same number.
For example:
100 the
50 in
50 and
10 cat
10 dog
should return:
100 1
50 2
10 2
Is there a way to do it?
Thanks!
Easy with awk:
$ awk '{count[$1]++} END {for (i in count) print i, count[i]}' file
100 1
10 2
50 2
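Since for (i in count) makes no ordering guarantee, pipe through sort -rn to get the frequency-descending order from the example:
awk '{count[$1]++} END {for (i in count) print i, count[i]}' file | sort -rn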
Just tweak the command you have already written:
cut -d' ' -f1 corpus.txt| sort -rn | uniq -c
The output is:
1 100
2 50
2 10
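That is the requested information with the columns swapped (uniq -c puts its count first); to match the asked-for order exactly, swap them back at the end, e.g.:
cut -d' ' -f1 corpus.txt | sort -rn | uniq -c | awk '{print $2, $1}'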

AWK command for sum 2 files

I am new to awk and I need an awk command to sum values across 2 files when they share the same first column.
file 1
a | 16:00 | 24
b | 16:00 | 12
c | 16:00 | 32
file 2
b | 16:00 | 10
c | 16:00 | 5
d | 16:00 | 14
and the output should be
a | 16:00 | 24
b | 16:00 | 22
c | 16:00 | 37
d | 16:00 | 14
I have read some of the questions here and still haven't found the correct way to do it. I have already tried this command:
awk 'BEGIN { FS = "," } ; FNR=NR{a[$1]=$2 FS $3;next}{print $0,a[$1]}'
Please help me, thank you.
This script also uses sort, but it will work:
awk -F'|' ' { f[$1] += $3 ; g[$1] = $2 } END { for (a in f) { print a , "|", g[a] , "|", f[a] } } ' a.txt b.txt | sort
The results are
a | 16:00 | 24
b | 16:00 | 22
c | 16:00 | 37
d | 16:00 | 14
Without | sort:
awk -F'|' '{O[$1FS$2]+=$3}END{asorti(O,T,"@ind_str_asc");for(t in T)print T[t] FS O[T[t]]}' file[1,2]
Just store all the data in two arrays a[] and b[] and then print them back:
awk 'BEGIN{FS=OFS="|"}
{a[$1]+=$3; b[$1]=$2}
END{for (i in a) print i,b[i],a[i]}' f1 f2
Test
$ awk 'BEGIN{FS=OFS="|"} {a[$1]+=$3; b[$1]=$2} END{for (i in a) print i,b[i],a[i]}' f1 f2
b | 16:00 |22
c | 16:00 |37
d | 16:00 |14
a | 16:00 |24
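The unordered b, c, d, a sequence is expected: for (i in a) returns keys in no particular order. Appending | sort, as in the first answer, restores the a-to-d order of the desired output:
awk 'BEGIN{FS=OFS="|"} {a[$1]+=$3; b[$1]=$2} END{for (i in a) print i,b[i],a[i]}' f1 f2 | sort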

Combine columns from multiple TXT files to a table using sed and awk

I have the files FileA.txt, FileB.txt, FileC.txt, etc., with the following column headers:
ID Value1 Value2 Value3
I want to combine select columns from these files on the ID column, retaining the file names as new column header, so I get the following table
ID Value1fromFileA Value1fromFileB Value1fromFileC
I can successfully, though not optimally, do this in R using the ldply() and cast() functions. However, I'd like to be able to do this with some shell scripting.
Any suggestions?
I'm sure this can be done faster/better, but the script below is simple, if long, and should work. The only command worth mentioning is cat file1.txt file2.txt file3.txt | awk '{print $1}' | sort | uniq -c | grep "^[ \t]*3" | awk '{print $2}', which concatenates the files, takes the first column, counts how many times each value appears, and keeps those IDs that appear 3 times.
#!/bin/bash
shopt -s extglob  # needed for the *( ) patterns used in trim()

trim() {
  t="${1##*( )}"
  t="${t%%*( )}"
  echo "$t"
}

ids=$(cat file1.txt file2.txt file3.txt | awk '{print $1}' | sort | uniq -c | grep "^[ \t]*3" | awk '{print $2}')

for i in $ids; do
  line1=''
  line2=''
  line3=''
  for file in file1.txt file2.txt file3.txt; do
    while read line; do
      index=$(echo "$line" | awk '{print $1}')
      if [[ $(trim $i) == $(trim $index) ]]; then
        if [[ $line1 == '' ]]; then
          line1="$line"
        elif [[ $line2 == '' ]]; then
          line2="$line"
        else
          line3="$line"
        fi
      fi
    done < "$file"
  done
  echo "$line1 $line2 $line3" | awk '{print $1 " " $5 " " $9}'
done
e.g.
$ cat file1.txt
12 F2Value1 F3Value2 F4
35 F2Value1 F3Value2 F42
2 F2Value1 F3Value2 F43
523 F2Value1 F3Value2 F44
123 F2Value1 F3Value2 F45
$ cat file2.txt
1 F2Value1 F3Value2
12 F2Value1 F3Value2
123 F2Value1 F3Value2
523 F2Value1 F3Value2
99 F2Value1 F3Value2
$ cat file3.txt
72 F2Value1 F3Value2
12 F2Value1 F3Value2
100 F2Value1 F3Value2
111 F2Value1 F3Value2
123 F2Value1 F3Value2
$ ./script.sh
12 F2Value1 F3Value2 F4 F2Value1 F3Value2 F2Value1 F3Value2
123 F2Value1 F3Value2 F45 F2Value1 F3Value2 F2Value1 F3Value2
The output above was produced with echo "$line1 $line2 $line3" | awk '{print $1 " " $2 " " $3 " " $4 " " $6 " " $7 " " $9 " " $10 " " $11}' instead of the last line of the script.
You can try:
awk '
{
q[$1]++
a[$1,ARGIND]=$2
}
END {
for (i in q) {
if (q[i]==3) {
print i, a[i,1],a[i,2],a[i,3]
}
}
} ' FileA.txt FileB.txt FileC.txt
Given files: FileA.txt
3 A31 A32 A33
5 A51 A52 A53
9 A91 A92 A93
FileB.txt
2 B21 B22 B23
9 B91 B92 B93
4 B41 B42 B43
5 B51 B52 B53
and FileC.txt
7 C71 C72 C73
9 C91 C92 C93
5 C51 C52 C53
The output is:
5 A51 B51 C51
9 A91 B91 C91
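Note that ARGIND is GNU awk-specific. A portable sketch of the same idea bumps a file counter whenever FNR resets to 1:
awk '
FNR == 1 { fileno++ }               # a new input file begins
{ q[$1]++; a[$1, fileno] = $2 }     # count IDs and remember Value1 per (ID, file)
END { for (i in q) if (q[i] == 3) print i, a[i,1], a[i,2], a[i,3] }
' FileA.txt FileB.txt FileC.txt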
awk '{ getline val2<"file2"  # read one line from file "file2" into var "val2" on each pass
split(val2,a2,FS)            # split val2 into array a2
getline val3<"file3"         # read one line from file "file3" into var "val3" on each pass
split(val3,b3,FS)            # split val3 into array b3
print $1,$2,a2[2],b3[2]
}' file1
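This getline approach assumes all the files list the same IDs in the same order. If that does not hold, join is safer; a sketch using the FileA/FileB/FileC data above, assuming bash for the process substitutions:
join <(sort FileA.txt) <(sort FileB.txt) | join - <(sort FileC.txt) | awk '{print $1, $2, $5, $8}'
join keeps only IDs present in all files, so this prints the 5 and 9 rows, matching the earlier output.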
