List only hex-value-named files

Using QNX, I'm creating a script that will list only the files whose names are hex values under 1F.
/path# ls
. 05 09 0B pubsub09
.. 07 09_sub 0E
04 08 0A 81
/path#
I have code that should list only hex values, but it still lists the whole directory.
ls /path/ |
while read fname
do
if [ "ibase=16; $fname" ]
then
echo "$fname"
fi
done
return 0

Try this instead. Your test [ "ibase=16; $fname" ] never invokes bc; it just checks that a non-empty string is non-empty, which is true for every name. Test the name against a hex-digit pattern:
if [[ $fname =~ ^[[:xdigit:]]+$ ]]
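For completeness, here is a minimal sketch of the whole loop, assuming "under 1F" means numerically below 0x1F (31 decimal) and that printf accepts C-style 0x constants, which ksh on QNX and most other shells do:
#!/bin/sh
# Sketch: list only names that are pure hex digits and numerically below 0x1F
for f in /path/*; do
    name=${f##*/}                       # basename only
    case $name in
        *[!0-9A-Fa-f]*|'') continue ;;  # skip anything that is not pure hex
    esac
    # printf interprets the 0x prefix as a C hex constant
    if [ "$(printf '%d' "0x$name")" -lt 31 ]; then
        echo "$name"
    fi
done
Globbing /path/* instead of parsing ls output also sidesteps word-splitting surprises.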

Related

Is there Zlib for R? Raw inflate function - how to decompress hexadecimal values

I need to decompress hex values and convert them to a string.
The actual problem is that I'm not able to figure out how to decompress the hex values.
The hex does not contain any headers.
If I copy the hex codes into CyberChef, I'm able to decompress them and get the original string.
In CyberChef only the Raw Inflate operation is needed.
So I'm hoping for help on how to do a raw inflate in R.
I have tried memDecompress using all of its options (i.e. gzip etc.) without success.
UPDATE:
Here is a sample of the hex:
e3 0e 71 0d 0e f1 54 c8 cb 2f 52 30 02 00
which I'm able to convert using CyberChef to the string
".TESTI nor 2"
RLdata<- sqlQuery(connection, ..... AS Varbinary(max) AS NOTEShort ......
> RLdata$NOTEshort[4268]
[[1]]
[1] e3 0e 71 0d 0e f1 54 c8 cb 2f 52 30 02 00
> unlist(RLdata$NOTEshort[4268])
[1] e3 0e 71 0d 0e f1 54 c8 cb 2f 52 30 02 00
> memDecompress(unlist(RLdata$NOTEshort[4268]),type = "gzip", asChar = TRUE)
Error in memDecompress(unlist(RLdata$NOTEshort[4268]), type = "gzip", :
internal error -3 in memDecompress(2)
> memDecompress(unlist(RLdata$NOTEshort[4268]),type = "unknown", asChar = TRUE)
[1] "ã\016q\r\016ñTÈË/R0\002"
Warning message:
In memDecompress(unlist(RLdata$NOTEshort[4268]), type = "unknown", :
unknown compression, assuming none
If you convert it into Base64 and then decode it back to hex, I think it decompresses to the original, but that may have been changed by a bug fix. It used to do this a couple of years back, but I haven't used CyberChef in a while, sorry.
Had to do this using python3; zlib.decompress() did the trick.
Link to the python solution: Read Dynamics NAV Table Metadata with SQL
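For reference, a minimal sketch of that python3 approach, runnable from the shell; passing -15 as the wbits argument tells zlib.decompress() to expect a raw, headerless DEFLATE stream (the sample hex is the one from the question):
echo 'e3 0e 71 0d 0e f1 54 c8 cb 2f 52 30 02 00' | tr -d ' \n' |
python3 -c '
import sys, zlib, binascii
data = binascii.unhexlify(sys.stdin.read())  # hex text -> raw bytes
# wbits = -15: raw inflate, no zlib/gzip header and no checksum
print(zlib.decompress(data, -15).decode())
'
If the stream really is raw DEFLATE, this should print the original string, ".TESTI nor 2".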

Drop the first 4 columns

I have a command that can drop the first 4 columns, but unfortunately, if the 2nd column name and the 4th column name are similar, it truncates at the 2nd column; only when the 2nd and 4th column names are not the same does it truncate at the 4th column. Is there anything wrong with my command?
awk -F"|" 'NR==1 {h=substr($0, index($0,$5)); next}
{file= path ""$1""$2"_"$3"_"$4"_03042017.csv"; print (a[file]++?"": "DETAILS 03042017" ORS h ORS) substr($0, index($0,$5)) > file}
END{for(file in a) print "EOF " a[file] > file}' filename
Input:
Account Num | Name | Card_Holder_Premium | Card_Holder| Type_Card | Balance | Date_Register
01 | 02 | 03 | 04 | 05 | 06 | 07
Output
_Premium | Card_Holder| Type_Card | Balance | Date_Register
04 | 05 | 06 | 07
My desired output:
Card_Holder| Type_Card | Balance | Date_Register
05 | 06 | 07
Is this all you're trying to do?
$ sed -E 's/([^|]+\| ){4}//' file
April | May | June
05 | 06 | 07
$ awk '{sub(/([^|]+\| ){4}/,"")}1' file
April | May | June
05 | 06 | 07
The method you use to remove columns, based on index, is not correct. As you have figured out, index can be confused and match in the previous field when the previous field contains the same words as the next field.
The correct way is the one advised by Ed Morton.
In this online test, the code below, based on Ed Morton's suggestion, gives you the output you expect:
awk -F"|" 'NR==1 {sub(/([^|]+\|){3}/,"");h=$0;next} \
{file=$1$2"_"$3"_"$4"_03042017.csv"; sub(/([^|]+\|){3}/,""); \
print (a[file]++?"": "DETAILS 03042017" ORS h ORS) $0 > file} \
END{for(file in a) print "EOF " a[file] > file}' file1.csv
#Output
DETAILS 03042017
Card_Holder| Type_Card | Balance | Date_Register
04 | 05 | 06 | 07
EOF 1
Due to the whitespace that you have included in your fields, the filename of the generated file appears as 01 02 _ 03 _ 04 _03042017.csv. With your real data this filename should come out correctly.
In any case, I just adapted Ed Morton's answer to your code. If you are happy with this solution you should accept Ed Morton's answer.
PS: I removed a space from Ed Morton's answer since it seems to work a bit better with your not-so-clear data.
Ed Suggested:
awk '{sub(/([^|]+\| ){4}/,"")}1' file
#Mind this space ^
With this space, the pattern might fail to match your data if there is no space after a field (e.g. April|May).
On the other hand, by removing this space, Ed's solution seems to match fields correctly in either format, April | May or April|May.
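If you need to cover both cases in one pass, a small variant (a sketch, not tested against the real data) makes the whitespace after the delimiter optional:
# [[:space:]]* matches zero or more blanks after the |,
# so both "April | May" and "April|May" formats are stripped correctly
awk '{sub(/([^|]+\|[[:space:]]*){4}/,"")}1' file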

How to assign variables in a csh script and use them as arguments for that same script?

Good day,
Hoping for the kind help of anyone here, thanks in advance.
I have T.csh which looks like this:
#! /bin/csh
set a="01 02 03 04 05 06 07 08 09 10 11 12 13"
set b="14 15 16 17 18 19 20 21 22 23 24 25"
set c="01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25"
set X = `grep $1 EOL.txt | head -n1 | cut -d- -f1`
printf "$X\n$2\n$3\nYYYY\n1\nN\n"
The variables a, b and c are optionally used as the 3rd argument in the printf line. The problem is that whenever I try to run the script, it shows an undefined-variable error. These set command lines work whenever I assign them interactively, but inside the script they seem not to work. Perhaps I need to initialize them but I could not figure out how. I'm just new to this programming thing; I hope someone here can help me. Thanks a lot in advance.
Here are the sample execution and error for your reference:
CAT-46{bc2}40>set a="01 02 03 04 05 06 07 08 09 10 11 12 13"
CAT-46{bc2}41>./T.csh 4773 XXXX.XX "$a"
62
XXXX.XX
01 02 03 04 05 06 07 08 09 10 11 12 13
82869
1
N
CAT-46{bc2}42>unset a
CAT-46{bc2}43>./T.csh 4773 XXXX.XX "$a"
a: Undefined variable
CAT-46{bc2}44>
If I set the variables manually it's OK, but when I rely on the script to set them, it flags an undefined-variable error.
Mike
I'm posting another answer because a comment is too short. Look at the following.
I have a script named /tmp/T.csh:
#!/bin/csh
set a="blah"
echo $a
My shell is bash; I type /tmp/T.csh: result is blah (csh executed the script).
Still in bash; I type unset a; /tmp/T.csh $a: result is the same.
Still in bash; I type . /tmp/T.csh: no result (bash executed the script).
I type csh; now I am in csh.
I type /tmp/T.csh: result is blah (of course).
I type /tmp/T.csh $a: "a: Undefined variable"
set a = something
/tmp/T.csh $a: blah
echo $a: something
unset a
echo $a: "a: Undefined variable"
I replicated all you did; I hope this helps.
You get an error for what you wrote on the command line, not for the content of your script. Even a simple echo, as you can see above, gives an error if on the command line you refer to a variable which does not exist.
prompt> unset a
prompt> ./T.csh 4773 XXXX.XX "$a"
The first command, "unset a", deletes the variable. In the second command you try to read the variable (on the command line!). That is why csh complains.
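If the script must tolerate a possibly-unset variable, csh can test for existence with $?a before expanding it; a sketch:
# Sketch: only expand $a if it is actually set, otherwise pass an empty argument
if ( $?a ) then
    ./T.csh 4773 XXXX.XX "$a"
else
    ./T.csh 4773 XXXX.XX ""
endif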

Unix grep line above and concatenate

here is my sample input from a log file.
#2014 03 06 11:21:44:028#+1300#
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[UserID= testUser]
What I am trying to do is go through all the log entries, grep for "UserID=", and then get the line two lines above it (the timestamp). I then want my output file, tempLog.txt, to be a concatenation of the two:
#2014 03 06 11:21:44:028#+1300# [UserID= testUser]
Can anyone help me with this? Still kinda new to Unix.... :)
Thanks
Chris
UPDATED DUMMY DATA
#2.#2014 03 06 11:21:29:163#+1300#Info#/System/Security/Audit/Logon#
#xxxxxx (Has white spaces)
Logon failed | LOGIN.ERROR | null | | Login Method=[default], IP Address=[xx.xx.xxxx], UserID=[testUser], Reason=[Authentication did not succeed.]#
Give this line a try:
grep --group-separator="" -B2 'UserID=' file|awk -v RS="" -F '\n' '{$2=""}7'
test:
kent$ cat f
fooba
#2014 03 06 11:21:44:028#+1300#
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[UserID= testUser]
foo
bar
#2014 03 06 11:21:44:028#+1400#
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[UserID= testUser2]
kent$ grep --group-separator="" -B2 'UserID=' f|awk -v RS="" -F '\n' '{$2=""}7'
#2014 03 06 11:21:44:028#+1300# [UserID= testUser]
#2014 03 06 11:21:44:028#+1400# [UserID= testUser2]
This awk should do:
awk '/#20/ {f=$0} /\[UserID/ {print f,$0}' file
#2014 03 06 11:21:44:028#+1300# [UserID= testUser]
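Note that in the updated dummy data the user appears inline as UserID=[testUser] rather than [UserID= testUser], so the patterns need adjusting. A sketch for that format (ts is just a variable name chosen here):
# Remember the latest timestamp header line, then print it together
# with the UserID=[...] token whenever one appears
awk '/^#[0-9]+\.#20/ {ts=$0}
     match($0, /UserID=\[[^]]*\]/) {print ts, substr($0, RSTART, RLENGTH)}' file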

extract a string after a pattern

I want to extract the numbers following client_id and id, and pair the client_id with each id on the same line.
For example, given the following lines of log,
User(client_id:03)) results:[RelatedUser(id:204, weight:10),_RelatedUser(id:491,_weight:10),_RelatedUser(id:29, weight: 20)
User(client_id:04)) results:[RelatedUser(id:209, weight:10),_RelatedUser(id:301,_weight:10)
User(client_id:05)) results:[RelatedUser(id:20, weight: 10)
I want to output
03 204
03 491
03 29
04 209
04 301
05 20
I know I need to use sed or awk, but I do not know exactly how.
Thanks
This may work for you:
awk -F "[):,]" '{ for (i=2; i<=NF; i++) if ($i ~ /id/) print $2, $(i+1) }' file
Results:
03 204
03 491
03 29
04 209
04 301
05 20
Here's an awk script that works (I put it on multiple lines and made it a bit more verbose so you can see what's going on):
#!/bin/bash
awk 'BEGIN{FS="[\(\):,]"}
/client_id/ {
cid="no_client_id"
for (i=1; i<NF; i++) {
if ($i == "client_id") {
cid = $(i+1)
} else if ($i == "id") {
id = $(i+1);
print cid OFS id;
}
}
}' input_file_name
Output:
03 204
03 491
03 29
04 209
04 301
05 20
Explanation:
awk 'BEGIN{FS="[\(\):,]"}: invoke awk, use ( ) : and , as delimiters to separate your fields
/client_id/ {: Only do the following for the lines that contain client_id:
for (i=1; i<NF; i++) {: iterate through the fields on each line one field at a time
if ($i == "client_id") { cid = $(i+1) }: if the field we are currently on is client_id, then its value is the next field in order.
else if ($i == "id") { id = $(i+1); print cid OFS id;}: otherwise if the field we are currently on is id, then print the client_id : id pair onto stdout
input_file_name: supply the name of your input file as the first argument to the awk script.
This might work for you (GNU sed):
sed -r '/.*(\(client_id:([0-9]+))[^(]*\(id:([0-9]+)/!d;s//\2 \3\n\1/;P;D' file
/.*(\(client_id:([0-9]+))[^(]*\(id:([0-9]+)/!d if the line doesn't contain the intended strings, delete it.
s//\2 \3\n\1/ re-arrange the line by copying the client_id and moving the first id ahead, thus reducing the line for successive iterations.
P print up to the introduced newline.
D delete up to the introduced newline.
I would prefer awk for this, but if you were wondering how to do this with sed, here's one way that works with GNU sed.
parse.sed
/client_id/ {
:a
s/(client_id:([0-9]+))[^(]+\(id:([0-9]+)([^\n]+)(.*)/\1 \4\5\n\2 \3/
ta
s/^[^\n]+\n//
}
Run it like this:
sed -rf parse.sed infile
Or as a one-liner:
<infile sed '/client_id/ { :a; s/(client_id:([0-9]+))[^(]+\(id:([0-9]+)([^\n]+)(.*)/\1 \4\5\n\2 \3/; ta; s/^[^\n]+\n//; }'
Output:
03 204
03 491
03 29
04 209
04 301
05 20
Explanation:
The idea is to repeatedly match client_id:([0-9]+) and id:([0-9]+) pairs and move them to the end of the pattern space. On each pass the matched id:([0-9]+) is removed.
The final replace removes the leftovers from the loop.
