I am trying to write a script that outputs lines which fulfill certain criteria into a new .txt file, combining Unix commands and awk.
I have been googling, but I keep getting this error: syntax error near unexpected token `done'
Filename="bishan"
file="659.A"
while IFS= read line
do
cat $Filename.txt | awk '{ otherSubNo = substr($0,73,100); gsub(/ /,"",otherSubNo); if (length(otherSubNo)>8) { print "Subscriber Number is ",": ",substr($0,1,20)," Other Subscriber Number is "," : ",substr($0,73,100) }}' | wc -l >> $Filename.txt
done <"$file"
An example of 659.A is as follows:
This is the first line of the 659.a file:
6581264562 201611050021000000002239442239460000000019010000010081866368
00C0525016104677451 100C 0 0000
0111000 000000000000000000006598540021 01010000000000659619778001010000
000000659854000300000000000000000000 004700001
Please help; I have been googling about this but to no avail.
I was able to reproduce the specified error (albeit only approximately) by typing the script in Notepad on Windows and testing it in Cygwin.
script.sh:
while read myline
do
echo $myline
done
In ksh:
~> /usr/bin/ksh ./script.sh
: not found
./script.sh[7]: syntax error: 'done' unexpected
In bash:
~> /usr/bin/bash ./script.sh
./script.sh: line 2: $'\r': command not found
./script.sh: line 6: syntax error near unexpected token `done'
./script.sh: line 6: `done'
The said error (at least in my case) is caused by the CRLF line endings. When I copy-paste the code into Cygwin, the CRLF turns into LF (the invisible control characters get lost along the way), which makes the error disappear.
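A quick way to confirm and fix this (a sketch; dos2unix may need to be installed as a separate Cygwin package):
cat -v script.sh              # stray carriage returns show up as ^M at the end of each line
dos2unix script.sh            # converts CRLF to LF in place
sed -i 's/\r$//' script.sh    # alternative if dos2unix is not available (GNU sed)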
I am having a problem validating a JSON string.
I am using the code below:
if jq -e . >/dev/null 2>&1 <<<"$json_string"; then
echo "Parsed JSON successfully and got something other than false/null"
else
echo "Failed to parse JSON, or got false/null"
fi
This does not work for json_string='{"fruit":{"name":"app'; it still shows "Parsed JSON successfully and got something other than false/null", even though the JSON string is incomplete.
Apparently this is one of the issues in jq-1.5: unterminated objects/arrays, without a corresponding closing character, are treated as valid objects and are accepted by the parser. I can reproduce it in jq-1.5, but it is fixed in jq-1.6.
On jq-1.6
jq -e . <<< '{"fruit":{"name":"app'
parse error: Unfinished string at EOF at line 2, column 0
echo $?
4
A minimal reproducible example is below; again, this is handled well in 1.6 but doesn't throw an error in 1.5:
jq -e . <<< '{'
parse error: Unfinished JSON term at EOF at line 2, column 0
jq -e . <<< '['
parse error: Unfinished JSON term at EOF at line 2, column 0
I suggest upgrading to jq-1.6 to make this work!
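On jq-1.6, the check from the question can also be wrapped in a small helper; a minimal sketch (is_valid_json is just an illustrative name):
is_valid_json() {
    jq -e . >/dev/null 2>&1 <<<"$1"
}
is_valid_json '{"fruit":{"name":"apple"}}' && echo "valid json"    # prints: valid json
is_valid_json '{"fruit":{"name":"app' || echo "invalid json"       # prints: invalid json on jq-1.6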
My file has multiple messages in it, each with a time stamp. I need to pull out just one message from a file based on its timestamp. Sometimes a message will have a blank line within the contents of the message. I prefer to do this at the unix prompt on an AIX operating system.
My file (er96aa.example) contains the following information. I want to pull out the second message with a time stamp of 15:56:10.097 (it should be a total of 4 lines of data).
07/05/19 15:56:10.091 SOCKETSND MESSAGE LENGTH=338 MESSAGE:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
07/05/19 15:56:10.097 SOCKETSND MESSAGE LENGTH=338 MESSAGE:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
07/05/19 15:56:10.099 SOCKETSND MESSAGE LENGTH=338 MESSAGE:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
I tried
grep -p '15:56:10.097' er96aa.example
but that only returns the first two lines.
I tried
grep -p'07/05/19' '15:56:10.097' er96aa.example
but that returns nothing.
I also tried
grep -p'07/05/19'+ '15:56:10.097' er96aa.example
and
grep -p'07/05/19+' '15:56:10.097' er96aa.example
but those return the whole file.
I modified my file and put 07/05/19 on a separate line and "grep -p'07/05/19' '15:56:10.097' er96aa.example" did work, but unfortunately I don't have the ability to modify the format of the file I am usually working with.
Expected Output:
07/05/19 15:56:10.097 SOCKETSND MESSAGE LENGTH=338 MESSAGE:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
I don't have access to an AIX box to test this but try:
$ awk '/MESSAGE:/{f=0} /15:56:10.097/{f=1} f' file
07/05/19 15:56:10.097 SOCKETSND MESSAGE LENGTH=338 MESSAGE:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
How it works
By default, awk reads through a file one line at a time. Our script uses a single variable f to determine if the current line should be printed.
/MESSAGE:/{f=0}
This sets variable f to false (0) if the regex MESSAGE: appears on the current line.
/15:56:10.097/{f=1}
This sets variable f to true (1) if the regex 15:56:10.097 appears on the current line.
f
If f is true, print the line.
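If it is easier to read or maintain, the same flag logic can live in a small awk file (a sketch; extract.awk and the hard-coded timestamp are only for illustration):
# extract.awk: print from the wanted header line up to the next message header
/MESSAGE:/      { f = 0 }    # any message header switches printing off...
/15:56:10.097/  { f = 1 }    # ...unless it carries the wanted timestamp
f                            # when the flag is set, print the current line
which would be run as awk -f extract.awk er96aa.example.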
A variation on John's post:
awk '/^[0-9]{2}\/[0-9]{2}\/[0-9]{2}/{f=0} /^07\/05\/19 15:56:10.097/{f=1} f'
07/05/19 15:56:10.097 SOCKETSND MESSAGE LENGTH=338 MESSAGE:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
This uses the exact date and time as the trigger and prints all lines up to the next line that starts with the date format.
I use jq (1.5) on Windows 10 to format different JSON files. Today I tried to move the filters into a filter file to cut down the length of my cmd commands.
I copied the filter directly from the command, with all the quotation marks, but I received a syntax error. I tried removing the quotation marks or changing them to ' but I still get the syntax error:
jq: error: syntax error, unexpected IDENT, expecting $end (Windows cmd shell quoting issues?) at <top-level>, line 1:
[.cruises[] | { nid: .cruise_nid, shipcategory: .ship_category, ship: .ship_title, company: .company_title, includeflight: .includes_flight, nights, waypoints: .waypoint_cities, title: .route_title}] C:\import\dreamlines_cruises.json > C:\Import\import_cruises.json
Any tips?
Regards Timo
Your jq filter as given (i.e. without quotation marks) looks fine, so let's assume you have successfully placed the text (hopefully formatted for readability :-) in a file, say format.jq
Then you would run something like this:
jq -f format.jq dreamlines_cruises.json
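For example, format.jq would then contain only the filter itself; the input path and the output redirection stay on the cmd command line (the paths below are the ones from your command):
[.cruises[] | { nid: .cruise_nid, shipcategory: .ship_category, ship: .ship_title, company: .company_title, includeflight: .includes_flight, nights, waypoints: .waypoint_cities, title: .route_title}]
and the call from cmd would be:
jq -f format.jq C:\import\dreamlines_cruises.json > C:\Import\import_cruises.json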
I have been trying all day to find a way to run this line (which works in bash) in R, and I keep getting errors about the round brackets... I understand that the paste command gets confused when dealing with brackets, but I have tried escaping the brackets and putting them in double quotes like this "')'", and nothing works, so I am out of ideas. Does anybody have any idea how this could work in R?
system(paste("sortBed -i <(awk -v a=1 -v b=2 -v c=3 -v d=4 '{OFS=FS=\"\t\"} {if ($d < 0.5) print \"value\"$a, $b-\"'$d'\", $c+\"'$d'\"}' file.in > file.out", sep=""))
sh: -c: line 0: syntax error near unexpected token `('
The reason seems to be that the R system() command calls the Bourne shell (sh) instead of the Bourne-again shell (bash). For example, the command
> system("paste <(echo 'Hi')")
will fail, mentioning the bourne shell in the process:
sh: -c: line 0: syntax error near unexpected token `('
One solution is to have the Bourne shell print the command and pipe it into bash:
> system("echo \"paste <(echo 'Hi')\" | bash")
Hi
I get the same error as you when running the line from R. As far as I can see, a final parenthesis is missing for the output process substitution in the bash script, but adding it doesn't prevent the error. Also, the tab character should be double-escaped (\\t) to make sure the backslash is passed on to the awk script.
One solution that we found works in this case is to pipe the output from awk directly into sortBed:
system(paste("awk -v a=1 -v b=2 -v c=3 -v d=4 '{OFS=FS=\"\\t\"} {if ($d < 0.5) print \"value\"$a, $b-\"'$d'\", $c+\"'$d'\"}' file.in | sortBed -i", sep=""))
We didn't really get the output process substitution to work, so if anyone has any suggestions for that it would be nice to hear.
I am writing a simple unix script as follows:
#!/bin/bash
mkdir tmp/temp1
cd tmp/temp1
echo "ab bc cj nn mm" > output.txt
grep 'ab' output.txt > newoutput.txt
I got the following error message:
grep : No such file or directory found output.txt
but when I looked in the directory, output.txt had been created, though the file type shown was TXT. I am not sure what is going on; any help?
You probably have a stray '\r' (carriage return) on the line with the echo command. You're creating a file called "output.txt\r", and then trying to read a file called "output.txt" without the carriage return.
Fix the script so it uses Unix-style line endings (\n rather than \r\n). You can use the dos2unix command for this. (Note that dos2unix, unlike most filters, overwrites its input file.)
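If dos2unix isn't available, a plain tr pipeline does the same job (writing to a temporary file first, since tr can't edit in place; the file names are just placeholders):
tr -d '\r' < script.sh > script.fixed && mv script.fixed script.sh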