Getting output from a shell command in JSON format - jq

I have a command that reports printer status, and I want its output in JSON format.
Here are the command and its output:
qchk -AW
Queue Dev Status Job Files User PP % Blks Cp Rnk
-------------------- -------------- --------- ------ ------------------ ---------- ---- --- ----- --- ---
PRTS0910 hp#PRTS0910 DOWN
QUEUED 10872 test.ksh root 1 1 1
QUEUED 10873 test.ksh root 1 1 2
PRTS09102 hp#PRTS09102 READY
PRTS0913 hp#PRTS0913 READY
PRTC1401 hp#PRTC1401 READY
PRTS1018 hp#PRTS1018 READY
PRTS0TESTQ PRTS0TEST DOWN
QUEUED 10871 print.ksh root 1 1 1
I have a ksh script that uses jq:
#!/usr/bin/ksh
qchk -AW | tr -s ' ' | jq -sR 'split("\n") |
.[2:-1] |
map(split(" ")) |
map({"Printer": .[0],
"device": .[1],
"state": .[2],
"job": .[3],
"something": .[4],
"somth2": .[5]})'
The problem is that I think I do not understand jq's working principles, and I cannot see how jq could process the detail (job) rows that sit under the printer rows. This script gives me a flat structure, one JSON object per row, like:
{
"Printer": "PRTS0910",
"device": "hp#PRTS0910",
"state": "DOWN",
"job": "",
"something": null,
"somth2": null
},
{
"Printer": "",
"device": "QUEUED",
"state": "10872",
"job": "test.ksh",
"something": "root",
"somth2": "1"
},
{
"Printer": "",
"device": "QUEUED",
"state": "10873",
"job": "test.ksh",
"something": "root",
"somth2": "1"
},
{
"Printer": "PRTS09102",
"device": "hp#PRTS09102",
"state": "READY",
"job": "",
"something": null,
"somth2": null
},
but I need to add something to my code to handle the job lines under each printer, with something like this (to process lines such as "QUEUED 10871 print.ksh root 1 1 1"):
map(split(" ")) |
map({"Status": .[0],
"Job number": .[1],
"file": .[2],
"user": .[3],
"something3": .[4],
"somth4": .[5]})'
so that I get a master-detail JSON object. Any help is appreciated.
Here is a sample output from qchk (sorry, I am not very experienced with SO editing):
# qchk -AW
Queue Dev Status Job Files User PP % Blks Cp Rnk
PRTS0408 hp#PRTS0408 READY
PRTS0416 hp#PRTS0416 READY
PRTS0417 hp#PRTS0417 READY
PRTS0702 hp#PRTS0702 RUNNING 110816 /alliance/PRINT/PR root 0 100 140 1 1
QUEUED 110848 /alliance/PRINT/PR root 141 1 2
QUEUED 110849 /alliance/PRINT/PR root 141 1 3
QUEUED 110850 /alliance/PRINT/PR root 141 1 4
QUEUED 110856 /alliance/PRINT/PR root 141 1 5
QUEUED 110857 /alliance/PRINT/PR root 141 1 6
QUEUED 110858 /alliance/PRINT/PR root 141 1 7
QUEUED 110859 /alliance/PRINT/PR root 141 1 8
QUEUED 110860 /alliance/PRINT/PR root 141 1 9
QUEUED 110861 /alliance/PRINT/PR root 140 1 10
QUEUED 110862 /alliance/PRINT/PR root 140 1 11
QUEUED 110884 /alliance/PRINT/PR root 141 1 12
QUEUED 110885 /alliance/PRINT/PR root 141 1 13
QUEUED 110886 /alliance/PRINT/PR root 141 1 14
PRTS0714 hp#PRTS0714 RUNNING 110984 /alliance/PRINT/PR root 0 100 11 1 1
PRTS0723 hp#PRTS0723 READY
PRTS0901 hp#PRTS0901 READY
PRTS0906 hp#PRTS0906 READY
PRTS0907 hp#PRTS0907 READY
PRTS0909 hp#PRTS0909 READY
PRTS0910 hp#PRTS0910 READY
PRTS09102 hp#PRTS09102 RUNNING 111017 /alliance/PRINT/PR root 0 100 141 1 1
QUEUED 111018 /alliance/PRINT/PR root 140 1 2
QUEUED 111019 /alliance/PRINT/PR root 140 1 3
In short: the rows are printers, each printer owns its print jobs, and I need to capture that relation in the JSON.
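For what it's worth, here is one way the master-detail grouping could be sketched in jq itself, folding the rows with reduce and appending each QUEUED row to the jobs array of the most recent printer. This is only a sketch, not a tested answer: the field names (jobs, Status, ...) are my own, and it assumes the two header lines of the first sample and that, after tr -s ' ', job rows keep exactly one leading space, so their first split field is the empty string (as the flat output above already shows):

```shell
#!/usr/bin/ksh
qchk -AW | tr -s ' ' | jq -sR '
  split("\n")
  | .[2:]                                # drop the two header lines
  | map(select(length > 0) | split(" "))
  | reduce .[] as $f ([];
      if $f[0] != "" then                # printer row: start a new master object
        . + [{Printer: $f[0], device: $f[1], state: $f[2], jobs: []}]
      else                               # job row: append to the last printer seen
        .[:-1] + [(.[-1]
          | .jobs += [{Status: $f[1], job: $f[2], file: $f[3], user: $f[4]}])]
      end)'
```

Each printer then carries its own jobs array, which stays empty for queues with nothing queued.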


"Out of memory" in hybrid MPI/OpenMP for GPU acceleration

I have compiled Quantum ESPRESSO (Program PWSCF v.6.7MaX) for GPU acceleration (hybrid MPI/OpenMP) with the following options:
module load compiler/intel/2020.1
module load hpc_sdk/20.9
./configure F90=pgf90 CC=pgcc MPIF90=mpif90 --with-cuda=yes --enable-cuda-env-check=no --with-cuda-runtime=11.0 --with-cuda-cc=70 --enable-openmp BLAS_LIBS='-lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core'
make -j8 pw
Apparently, the compilation ends successfully. Then I execute the program:
export OMP_NUM_THREADS=1
mpirun -n 2 /home/my_user/q-e-gpu-qe-gpu-6.7/bin/pw.x < silverslab32.in > silver4.out
The program starts running and prints the following info:
Parallel version (MPI & OpenMP), running on 8 processor cores
Number of MPI processes: 2
Threads/MPI process: 4
...
GPU acceleration is ACTIVE
...
Estimated max dynamical RAM per process > 13.87 GB
Estimated total dynamical RAM > 27.75 GB
But after 2 minutes of execution the job ends with error:
0: ALLOCATE: 4345479360 bytes requested; status = 2(out of memory)
0: ALLOCATE: 4345482096 bytes requested; status = 2(out of memory)
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[47946,1],1]
Exit code: 127
--------------------------------------------------------------------------
This node has > 180 GB of available RAM. I checked the memory usage with the top command:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
89681 my_user 20 0 30.1g 3.6g 2.1g R 100.0 1.9 1:39.45 pw.x
89682 my_user 20 0 29.8g 3.2g 2.0g R 100.0 1.7 1:39.30 pw.x
I noticed that the process stops when RES memory reaches 4 GB. These are the characteristics of the node:
(base) [my_user#gpu001]$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 28 29 30 31 32 33 34 35 36 37 38 39 40 41
node 0 size: 95313 MB
node 0 free: 41972 MB
node 1 cpus: 14 15 16 17 18 19 20 21 22 23 24 25 26 27 42 43 44 45 46 47 48 49 50 51 52 53 54 55
node 1 size: 96746 MB
node 1 free: 70751 MB
node distances:
node 0 1
0: 10 21
1: 21 10
(base) [my_user#gpu001]$ free -lm
total used free shared buff/cache available
Mem: 192059 2561 112716 260 76781 188505
Low: 192059 79342 112716
High: 0 0 0
Swap: 8191 0 8191
The version of MPI is:
mpirun (Open MPI) 3.1.5
This node is a compute node in a cluster, but the error is the same whether I submit the job with SLURM or run it directly on the node.
Note that I compile on the login node and run on this GPU node; the difference is that the login node has no GPU attached.
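One thing I still plan to rule out (only a guess on my side) is a per-process memory limit on the compute node, since ulimit caps are inherited by every MPI rank and a ~4 GB RSS or address-space limit would match the failure point exactly:

```shell
#!/bin/sh
# Inspect the limits a process on the node actually runs under;
# "max memory size" (RSS) and "virtual memory" are the suspects here.
ulimit -a
# For a batch run, the same check belongs inside the submission script,
# because SLURM can impose different limits than an interactive shell.
```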
I would really appreciate it if you could help me figure out what could be going on.
Thank you in advance!

What happens when you hit ctrl+z on a process?

If I am running a long-running process, and when I stop it with Ctrl+Z, I get the following message in my terminal:
76381 suspended git clone git@bitbucket.org:kevinburke/<large-repo>.git
What actually happens when the process is suspended? Is the state held in memory? Is this functionality implemented at the operating system level? How is the process able to resume execution right where it left off when I restart it with fg?
When you hit Ctrl+Z in a terminal, the line-discipline of the (pseudo-)terminal device driver (the kernel) sends a SIGTSTP signal to all the processes in the foreground process group of the terminal device.
That process group is an attribute of the terminal device. Typically, your shell is the process that defines which process group is the foreground process group of the terminal device.
In shell terminology, a process group is called a "job", and you can put a job in foreground and background with the fg and bg command and find out about the currently running jobs with the jobs command.
The SIGTSTP signal is like the SIGSTOP signal except that contrary to SIGSTOP, SIGTSTP can be handled by a process.
Upon reception of such a signal, the process is suspended. That is, it's paused and still there, only it won't be scheduled for running any more until it's killed or sent a SIGCONT signal to resume execution. The shell that started the job will be waiting for the leader of the process group in it. If it is suspended, the wait() will return indicating that the process was suspended. The shell can then update the state of the job and tell you it is suspended.
$ sleep 100 | sleep 200 & # start job in background: two sleep processes
[1] 18657 18658
$ ps -lj # note the PGID
F S UID PID PPID PGID SID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 S 10031 18657 26500 18657 26500 0 85 5 - 2256 - pts/2 00:00:00 sleep
0 S 10031 18658 26500 18657 26500 0 85 5 - 2256 - pts/2 00:00:00 sleep
0 R 10031 18692 26500 18692 26500 0 80 0 - 2964 - pts/2 00:00:00 ps
0 S 10031 26500 26498 26500 26500 0 80 0 - 10775 - pts/2 00:00:01 zsh
$ jobs -p
[1] + 18657 running sleep 100 |
running sleep 200
$ fg
[1] + running sleep 100 | sleep 200
^Z
zsh: suspended sleep 100 | sleep 200
$ jobs -p
[1] + 18657 suspended sleep 100 |
suspended sleep 200
$ ps -lj # note the "T" under the S column
F S UID PID PPID PGID SID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 T 10031 18657 26500 18657 26500 0 85 5 - 2256 - pts/2 00:00:00 sleep
0 T 10031 18658 26500 18657 26500 0 85 5 - 2256 - pts/2 00:00:00 sleep
0 R 10031 18766 26500 18766 26500 0 80 0 - 2964 - pts/2 00:00:00 ps
0 S 10031 26500 26498 26500 26500 0 80 0 - 10775 - pts/2 00:00:01 zsh
$ bg %1
[1] + continued sleep 100 | sleep 200
$ ps -lj
F S UID PID PPID PGID SID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 S 10031 18657 26500 18657 26500 0 85 5 - 2256 - pts/2 00:00:00 sleep
0 S 10031 18658 26500 18657 26500 0 85 5 - 2256 - pts/2 00:00:00 sleep
0 R 10031 18824 26500 18824 26500 0 80 0 - 2964 - pts/2 00:00:00 ps
0 S 10031 26500 26498 26500 26500 0 80 0 - 10775 - pts/2 00:00:01 zsh

UNIX full outer join creates duplicate entries despite correct order? Could it be the unpaired items creating disorder?

I have two files that I want to join based on their first column.
They are sorted, and not all of the values in the first column of FILE1 are in FILE2, and vice versa.
FILE1.TXT looks something like this, except it is around 15k lines:
snRNA:7SK 1037
snRNA:U11 144
snRNA:U1:21D 348.293
snRNA:U12:73B 16
snRNA:U1:82Eb 2.14286
snRNA:U1:95Ca 348.293
snRNA:U1:95Cb 351.96
snRNA:U1:95Cc 35.5095
snRNA:U2:14B 447.35
snRNA:U2:34ABa 459.75
snRNA:U2:34ABb 513.25
snRNA:U2:34ABc 509
snRNA:U2:38ABa 443.65
snRNA:U4:38AB 155
snRNA:U4:39B 611.833
snRNA:U4atac:82E 152.5
snRNA:U5:14B 1
snRNA:U5:23D 2.5
snRNA:U5:34A 11
snRNA:U5:38ABb 2.5
snRNA:U5:63BC 44
snRNA:U6:96Aa 18
snRNA:U6:96Ab 9.5
snRNA:U6:96Ac 8.5
snRNA:U7 4
snRNA:U8 8
FILE2.TXT looks like this, it is also ~15K lines:
snRNA:7SK 1259
snRNA:U11 33
snRNA:U1:21D 1480.57
snRNA:U12:73B 4
snRNA:U1:82Eb 10.2
snRNA:U1:95Ca 1480.57
snRNA:U1:95Cb 1484.03
snRNA:U1:95Cc 114.633
snRNA:U2:14B 4678.89
snRNA:U2:34ABa 4789.93
snRNA:U2:34ABb 5292.22
snRNA:U2:34ABc 5273.23
snRNA:U2:38ABa 4557.88
snRNA:U2:38ABb 3.75
snRNA:U4:38AB 405
snRNA:U4:39B 1503.5
snRNA:U4atac:82E 548
snRNA:U5:14B 25
snRNA:U5:23D 19
snRNA:U5:34A 32
snRNA:U5:38ABb 4
snRNA:U5:63BC 742
snRNA:U6:96Aa 39.5
snRNA:U6:96Ab 1
snRNA:U6:96Ac 1
snRNA:U7 11
As you can see, an element from FILE2 (snRNA:U5:38ABb) is missing in FILE1, and an element from FILE1 is missing in FILE2. This happens throughout both files, in both directions and multiple times.
I am writing the command as follows:
join -a1 -a2 -e "0" -1 1 -2 1 -o '0,1.2,2.2' -t ' ' FILE1.TXT FILE2.TXT > JOIN_FILE.TXT
If I try the command with ONLY the 20 or so lines that I pasted from each file, it works as it should.
But when I run it on the entire files, the output is terrible, and I don't understand why. Both files were sorted using sort -k1,1, so even though some lines in FILE1 are not in FILE2, and vice versa, they are both in the same order.
What I get is duplicate entries for an item, such as (again, I'm only showing a fraction of the output file):
snRNA:7SK 0 1037
snRNA:U11 0 144
snRNA:U1:21D 0 348.293
snRNA:U12:73B 0 16
snRNA:U1:82Eb 0 2.14286
snRNA:U1:95Ca 0 348.293
snRNA:U1:95Cb 0 351.96
snRNA:U1:95Cc 0 35.5095
snRNA:U2:14B 0 447.35
snRNA:U2:34ABa 0 459.75
snRNA:U2:34ABb 0 513.25
snRNA:U2:34ABc 0 509
snRNA:U2:38ABa 0 443.65
snRNA:U4:38AB 0 155
snRNA:U4:39B 0 611.833
snRNA:U4atac:82E 0 152.5
snRNA:U5:14B 0 1
snRNA:U5:23D 0 2.5
snRNA:U5:34A 0 11
snRNA:U5:38ABb 0 2.5
snRNA:U5:63BC 0 44
snRNA:U6:96Aa 0 18
snRNA:U6:96Ab 0 9.5
snRNA:U6:96Ac 0 8.5
snRNA:U7 0 4
snRNA:7SK 1259 0
snRNA:U11 33 0
snRNA:U1:21D 1480.57 0
snRNA:U12:73B 4 0
snRNA:U1:82Eb 10.2 0
snRNA:U1:95Ca 1480.57 0
snRNA:U1:95Cb 1484.03 0
snRNA:U1:95Cc 114.633 0
snRNA:U2:14B 4678.89 0
snRNA:U2:34ABa 4789.93 0
snRNA:U2:34ABb 5292.22 0
snRNA:U2:34ABc 5273.23 0
snRNA:U2:38ABa 4557.88 0
snRNA:U2:38ABb 3.75 0
snRNA:U4:38AB 405 0
snRNA:U4:39B 1503.5 0
snRNA:U4atac:82E 548 0
snRNA:U5:14B 25 0
snRNA:U5:23D 19 0
snRNA:U5:34A 32 0
snRNA:U5:38ABb 4 0
snRNA:U5:63BC 742 0
snRNA:U6:96Aa 39.5 0
snRNA:U6:96Ab 1 0
snRNA:U6:96Ac 1 0
snRNA:U7 11 0
Essentially everything has been duplicated, with one line for the value from FILE1 and another line for the value from FILE2. Could this be because of the accumulated differences between the files (i.e., all the non-paired entries before these specific ones)?
This scrambling of the output runs all throughout the file.
What am I doing wrong? Am I not specifying that entries in both files don't always match?
Is there any way to solve this?
Thanks a lot!
Carmen
Edit:
Here are the first 15 lines of each file, to show that the order is the same in both, but the files start to diverge because items appear in FILE1 that are not in FILE2, and vice versa. I wonder if this is what causes the mix-up.
==> FILE1 <==
128up 139
140up 170
14-3-3epsilon 4488
14-3-3zeta 24900
18w 885
26-29-p 517
2mit 3085.34
312 64
4EHP 9012.57
5.8SrRNA:CR40454 16.5
5-HT1A 1867
5-HT1B 366
5-HT2 2611.27
5-HT7 1641.67
5PtaseI 462
==> FILE2 <==
128up 80
140up 19
14-3-3epsilon 1718
14-3-3zeta 5554
18w 213
26-29-p 200
2mit 680.786
312 33
4EHP 1838.44
5-HT1A 303
5-HT1B 42
5-HT2 553.65
5-HT7 348.5
5PtaseI 105
5S_DM 46054.4
It is possible that you have spaces instead of tabs in one of your files.
Your join command seems to give duplicated entries when there is a space in one of the lines:
#> bash fjoin.sh
:: join ::
join: s.file1s.txt:2: is not sorted: 128up 139
:: diff ::
1c1,3
< 128up 139 80
---
> 128up 0 80
> 128up 139 0 0
> 128up 139 0
#> grep " " file*txt
file1s.txt:128up 139
#> grep 128up file1s.txt
128up 139
128up 139
fjoin.sh
#!/bin/bash
f1="file1.txt"
f1s="file1s.txt"
f2="file2.txt"
# sort files & remove duplicate
sort -k 1b,1 ${f1} | uniq > s.${f1}
sort -k 1b,1 ${f1s} | uniq > s.${f1s}
sort -k 1b,1 ${f2} | uniq > s.${f2}
echo ":: join ::"
join -a1 -a2 -e "0" -1 1 -2 1 -o '0,1.2,2.2' -t ' ' s.${f1} s.${f2} > joined-1_f1_f2.txt
join -a1 -a2 -e "0" -1 1 -2 1 -o '0,1.2,2.2' -t ' ' s.${f1s} s.${f2} > joined-2_f1_f2.txt
echo " "
echo ":: diff ::"
diff joined-1_f1_f2.txt joined-2_f1_f2.txt
update
Setting LC_ALL=C as Pierre suggested could help.
There are fewer differences after adding export LC_ALL=C to fjoin.sh:
#> bash fjoin.sh
:: join ::
:: diff ::
1a2
> 128up 139 0 0
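Combining both suggestions, here is a sketch of the whole pipeline (filenames as in the question; I drop -t ' ' so the default blank delimiting tolerates runs of spaces, and I sort under LC_ALL=C so sort and join agree on collation):

```shell
#!/bin/sh
export LC_ALL=C            # byte-wise collation for both sort and join
sort -k1,1 FILE1.TXT > f1.sorted
sort -k1,1 FILE2.TXT > f2.sorted
# -a1 -a2 keep unpaired lines from either file (full outer join);
# -e 0 with -o fills the missing side with 0 instead of dropping it.
join -a1 -a2 -e 0 -o '0,1.2,2.2' f1.sorted f2.sorted > JOIN_FILE.TXT
```

Keep -t ' ' only if your files really are single-space separated everywhere; mixing tabs and spaces with an explicit separator is exactly what produces the duplicated lines above.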

Dynamically change what is being awked

I have this input below:
IDNO H1 H2 H3 HT Q1 Q2 Q3 Q4 Q5 M1 M2 EXAM
OUT OF 100 100 150 350 30 30 30 30 30 100 150 400
1434 22 95 135 252 15 20 12 18 14 45 121 245
1546 99 102 140 341 15 17 14 15 23 91 150 325
2352 93 93 145 331 14 17 23 14 10 81 101 260
(et cetera)
H1 H2 H3 HT Q1 Q2 Q3 Q4 Q5 M1 M2 EXAM
OUT OF 100 100 150 350 30 30 30 30 30 100 150 400
I need to write a Unix script that uses awk to find any column that is entered and display it on the screen. I have successfully awked specific columns, but I can't seem to figure out how to make the column change based on the input. My instructor will simply pick a column of the test data, and my program needs to find that column.
What I was trying was something like:
#!/bin/sh
awk {'print $(I dont know what goes here)'} testdata.txt
EDIT: Sorry, I should have been more specific: he is entering the header name as the input, for example "H3". Then it needs to awk that column.
I think you are just looking for:
#!/bin/sh
awk 'NR==1{ for( i = 1; i <= NF; i++ ) if( $i == header ) col=i }
{ print $col }' header=${1?No header entered} testdata.txt
This makes no attempt to deal with a column header that does not appear
in the input. (Left as an exercise for the reader.)
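A quick way to see the header-matching loop in action (the data file here is a made-up stand-in for testdata.txt):

```shell
#!/bin/sh
printf 'IDNO H1 H2 H3\n1434 22 95 135\n1546 99 102 140\n' > testdata.txt
# On line 1, find the field whose header matches; every line (including
# the header line itself) then prints that field.
awk 'NR==1 { for (i = 1; i <= NF; i++) if ($i == header) col = i }
     { print $col }' header=H3 testdata.txt
# prints: H3, 135, 140 (one per line)
```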
Well, your question is quite diffuse, and in principle you want someone else to write your awk script... You should check the man pages for awk; they are quite descriptive.
My 2 cents' worth, as an example (for selecting rows):
myscript.sh:
#!/bin/sh
awk -v a="$2" -v b="$3" '{ if ($(a) == b) print $0 }' "$1"
if you just want a column, well,
#!/bin/sh
awk -v a="$2" '{ print $(a) }' "$1"
You would invoke it as:
myscript.sh file_of_data col_num
Again, reiterating: please study the man pages of awk. Also, when asking a question, please show what you have tried (code) and the errors (logs). This will make people more willing to help you.
Your line format has a lot of variation (in the number of fields). That said, what about something like this:
echo "Which column name?"
read column
case $column in
(H1) N=2;;
(H2) N=3;;
(H3) N=4;;
...
(*) echo "Please try again"; exit 1;;
esac
awk -v N=$N '{print $N}' testdata.txt

cursor loop and continue statement: unexpected behaviour

I might be overlooking something due to deadline stress, but this behaviour amazes me.
It looks as if the cursor caches 100 rows, and the continue statement flushes the cache
and begins with the first record of a new cache fetch.
I narrowed it down to the following script:
drop table test1;
create table test1 (test1_id NUMBER);
begin
for i in 1..300
loop
insert into test1 values (i);
end loop;
end;
/
declare
cursor c_test1 is
select *
from test1;
begin
for c in c_test1
loop
if mod(c.test1_id,10) = 0
then
dbms_output.put_line(c_test1%ROWCOUNT||' '||c.test1_id||' Continue');
continue;
end if;
dbms_output.put_line(c_test1%ROWCOUNT||' '||c.test1_id||' Process');
end loop;
end;
/
1 1 Process
2 2 Process
3 3 Process
4 4 Process
5 5 Process
6 6 Process
7 7 Process
8 8 Process
9 9 Process
10 10 Continue **Where are test1_id's 11 to 100?**
11 101 Process
12 102 Process
13 103 Process
14 104 Process
15 105 Process
16 106 Process
17 107 Process
18 108 Process
19 109 Process
20 110 Continue **Where are test1_id's 111 to 200?**
21 201 Process
22 202 Process
23 203 Process
24 204 Process
25 205 Process
26 206 Process
27 207 Process
28 208 Process
29 209 Process
30 210 Continue **Where are test1_id's 211 to 300?**
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
redhat release 5
2 node RAC
It's a bug: 7306422
Pawel Barut wrote:
http://pbarut.blogspot.com/2009/04/caution-for-loop-and-continue-in-oracle.html
Workaround :
SQL> ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 1;
Regards,
Rob
