Get variables from HTTP and process them with AppleScript - http

I get this result from a website:
Value2: 16
Value4: 34
There may be multiple lines or just one. The key and value are always separated by a ":". The values should be used in AppleScript like this:
set Value2 to 16
set Value4 to 34
...
This is what I have so far:
set someText to do shell script "curl http://asdress | textutil -stdin -stdout -format html -convert txt -encoding UTF-8"
set AppleScript's text item delimiters to {":"}
set delimitedList to every text item of someText
How can I set the variables individually?
Thanks for your help!

If you simulate the output of your curl command with echo like this:
echo -e "Value2: 16\n Value4: 34"
you can test out some filtering with grep. I would go for the following:
echo -e "Value2: 16\n Value4: 34" | grep -Ewo "\d+"
which uses extended regular expressions (-E), allowing me to look for one or more digits with \d+. The -w says that what I find must have a word boundary on either side (so that I don't match the digits inside Value2 or Value4), and the -o says to output only the part that matches.
So, to answer your question, I would change your curl to
curl .... | grep -Ewo "\d+"
and you will just get the output
16
34
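One caveat worth testing for yourself: \d is a PCRE-style class that the BSD grep on macOS happens to accept, but it isn't part of POSIX extended regular expressions, so other grep implementations may not treat it as "digit"; [0-9] (or [[:digit:]]) is portable. A quick sketch, simulating the page with printf (the sed step is a hypothetical extra, not something taken from your real page):

```shell
# Simulate the curl output and pull out just the numbers.
# [0-9]+ instead of \d+ keeps the pattern portable across grep implementations.
printf 'Value2: 16\nValue4: 34\n' | grep -Ewo '[0-9]+'

# A possible extra step: turn each "Key: value" line into an
# AppleScript-style "set" statement, which you could feed to "run script".
printf 'Value2: 16\nValue4: 34\n' | sed -E 's/^ *([A-Za-z0-9]+) *: *(.*)$/set \1 to \2/'
```

The first pipeline prints 16 and 34 on separate lines; the second prints "set Value2 to 16" and "set Value4 to 34".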

You don't. Either use a map/hash/dictionary/associative list/table structure to store arbitrary key-value pairs, or, if you have a limited number of known keys and really must use separate variables for some rare reason, use if theKey = "Value1" then set Value1 to theValue, else if theKey = "Value2" then ... and so on.
Frustratingly, AppleScript doesn't include a key-value data type - it only has arrays (lists) and structs (records) - but you can easily roll your own quick-n-dirty associative list routines for looping over a list of {theKey:"...",theValue:"..."} records (naive performance should be fine up to several dozen items; larger sets will require more efficient algorithms). Or, if you're on 10.10+ and don't mind getting Cocoa in your AppleScript, you might consider using NSDictionary, which isn't totally ideal either but scales efficiently and saves you the hassle of writing your own code.

Related

How to encrypt every name in a list ZSH scripting using a for loop

I'm new to zsh scripting and I was wondering if it's possible to use the sha256sum function to encrypt every value in a list.
Here is what I have tried so far:
#!/bin/zsh
filenames=`cat filenames.txt`
output='shaNames.txt'
for name in $filenames
do
echo -n $name | sha256sum >> $output
done
What I'm trying to accomplish is to encrypt every name in the list and append it to a new text file.
Any suggestions on what I'm doing wrong are appreciated.
You are assigning the output of cat filenames.txt to a multiline string variable, so the for loop only iterates once, over the entire content.
What you want to do instead is e.g.:
for name in $(cat filenames.txt)
do
echo -n "$name" | sha256sum >> "$output"
done
Note that while you can still use them, backticks are deprecated in favor of $(somecommand).
Also note that you should always put variables in double quotes, as they could contain spaces.
Even so, this approach still fails if a line of your text file contains a space.
You could use the following instead:
while read name
do
echo -n "$name" | sha256sum >> "$output"
done < filenames.txt
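As a runnable sketch of that whitespace-safe loop (the file names and the sha256sum utility are assumed, as in the question):

```shell
# Hash every line of filenames.txt, appending one hash per line to shaNames.txt.
# `while IFS= read -r` keeps lines containing spaces intact, which the
# word-splitting $(cat ...) approach would break apart.
printf 'alpha\nmy file name\n' > filenames.txt
output='shaNames.txt'
while IFS= read -r name; do
    printf '%s' "$name" | sha256sum >> "$output"
done < filenames.txt
```

Afterwards shaNames.txt contains two lines, one 64-hex-digit hash per input line, including one for the name with spaces.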
For anyone who might need the same:
To correct that one must use:
filenames=(`cat filenames.txt`)
The parentheses indicate that an array (list) is stored in the filenames variable.

Passing variables to grep command in Tcl Script

I'm facing a problem while trying to pass a variable value to a grep command.
In essence, I want to grep out the lines which match my pattern, where the pattern is stored in a variable. I take input from the user and parse through myfile to see if the pattern exists (no problem here).
If it exists I want to display the lines which have the pattern i.e grep it out.
My code:
if {$a==1} {
puts "serial number exists"
exec grep $sn myfile } else {
puts "serial number does not exist"}
My input: SN02
My result when I run grep in Shell terminal( grep "SN02" myfile):
serial number exists
SN02 xyz rtw 345
SN02 gfs rew 786
My result when I try to execute grep in Tcl script:
serial number exists
The lines which match the pattern are not displayed.
Your (horrible IMO) indentation is not actually the problem. The problem is that exec does not automatically print the output of the exec'ed command*.
You want puts [exec grep $sn myfile]
This is because the exec command is designed to allow the output to be captured in a variable (like set output [exec some command])
* in an interactive tclsh session, as a convenience, the result of commands is printed. Not so in a non-interactive script.
To follow up on the "horrible" comment, your original code has no visual cues about where the "true" block ends and where the "else" block begins. Due to Tcl's word-oriented nature, it pretty well mandates the one true brace style indentation style.

Field separator to be used only when not escaped, using awk

I have a question. Suppose I am using "=" as the field separator. If my string contains, for example,
abc=def\=jkl
then splitting on "=" gives three fields:
abc def\ jkl
but since I escaped the second "=", the output I want is two fields:
abc def\=jkl
Can anyone suggest how I can achieve this?
Thanks in advance
I find it simplest to just convert the offending string to some other string or character that doesn't appear in your input records, then process as normal and convert back within each field when necessary. (I tend to use RS if it's not a regexp*, since that cannot appear within a record, or the awk builtin SUBSEP otherwise, since if that appears in your input you have other problems.) For example:
$ cat file
abc=def\=jkl
$ awk -F= '{
gsub(/\\=/,RS)
for (i=1; i<=NF; i++) {
gsub(RS,"\\=",$i)
print i":"$i
}
}' file
1:abc
2:def\=jkl
* The issue with using RS if it is an RE (i.e. multiple characters) is that the gsub(RS...) within the loop could match a string that didn't get resolved to a record separator initially, e.g.
$ echo "aa" | gawk -v RS='a$' '{gsub(RS,"foo",$1); print "$1=<"$1">"}'
$1=<afoo>
When the RS is a single character, e.g. the default newline, that cannot happen so it's safe to use.
If your data is like the example in your question, it can be done.
awk doesn't support look-around regexes, so it would be a bit difficult to get what you want by setting FS alone.
If I were you, I would do some preprocessing to make the data easier for awk to handle. Or you could read the line and use awk's other functions, e.g. gensub() to remove those =s you don't want in the result, and then split... But since I guess you want to achieve the goal by playing with the field separator, I won't go into those solutions.
However, it can be done with the FPAT variable (GNU awk):
awk -vFPAT='\\w*(\\\\=)?\\w*' '...' file
this will work for your example. I am not sure if it will work for your real data.
Let's make an example and split this string: "abc=def\=jkl=foo\=bar=baz"
kent$ echo "abc=def\=jkl=foo\=bar=baz"|awk -vFPAT='\\w*(\\\\=)?\\w*' '{for(i=1;i<=NF;i++)print $i}'
abc
def\=jkl
foo\=bar
baz
I think that's the result you want, isn't it?
My awk version:
kent$ awk --version|head -1
GNU Awk 4.0.2
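FPAT is specific to GNU awk; if this needs to run on any POSIX awk, the placeholder swap from the first answer can be applied the same way. A sketch, assuming the control byte \001 never occurs in your data:

```shell
printf 'abc=def\\=jkl=foo\\=bar=baz\n' |
awk -F= '{
    gsub(/\\=/, "\001")            # hide every escaped "=" behind a byte assumed absent from the data
    for (i = 1; i <= NF; i++) {    # modifying $0 re-split it on the remaining, unescaped "=" signs
        gsub("\001", "\\\\=", $i)  # restore the literal "\=" inside each field
        print i ":" $i
    }
}'
```

This prints the same four fields as the FPAT version: abc, def\=jkl, foo\=bar, baz.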

grep: how to show the next lines after the matched one until a blank line [not possible!]

I have a dictionary (not python dict) consisting of many text files like this:
##Berlin
-capital of Germany
-3.5 million inhabitants
##Earth
-planet
How can I show one entry of the dictionary with the facts?
Thank you!
You can't. grep doesn't have a way of showing a variable amount of context. You can use -A to show a set number of lines after the match, such as -A3 to show three lines after a match, but it can't be a variable number of lines.
You could write a quick Perl program to read from the file in "paragraph mode" and then print blocks that match a regular expression.
As andy lester pointed out, you can't have grep show a variable amount of context, but a short awk statement might do what you're hoping for.
if your example file were named file.dict:
awk -v term="earth" 'BEGIN{IGNORECASE=1}{if($0 ~ "##"term){loop=1} if($0 ~ /^$/){loop=0} if(loop == 1){print $0}}' *.dict
returns:
##Earth
-planet
Just change the variable term to the entry you're looking for.
This assumes two things:
dictionary files have same extension (.dict for example purposes)
dictionary files are all in same directory (where command is called)
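As a reproducible check (sample file built inline; IGNORECASE is dropped so this runs on any awk, not just GNU awk, at the cost of a case-sensitive match):

```shell
# Build a two-entry sample dictionary, then print from the matching
# heading down to the first blank line.
printf '##Berlin\n-capital of Germany\n-3.5 million inhabitants\n\n##Earth\n-planet\n' > sample.dict
awk -v term="Earth" '{
    if ($0 ~ "##" term) { loop = 1 }   # start printing at the matching heading
    if ($0 ~ /^$/)      { loop = 0 }   # stop at the first blank line
    if (loop == 1)      { print }
}' sample.dict
```

This prints the ##Earth heading and its single -planet fact.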
If your grep supports perl regular expressions, you can do it like this:
grep -iPzo '(?s)##Berlin.*?\n(\n|$)'
See this answer for more on this pattern.
You could also do it with GNU sed like this:
query=berlin
sed -n "/$query/I"'{ :a; $p; N; /\n$/!ba; p; }'
That is, when case-insensitive $query is found, print until an empty line is found (/\n$/) or the end of file ($p).
Output in both cases (minor difference in whitespace):
##Berlin
-capital of Germany
-3.5 million inhabitants

Unix sort on a key combining alphanumeric characters with ':' and '/'

I am trying to sort a text file with the UNIX sort command (GNU coreutils 5.97 or 7.4) according to ASCII code. The lines in the file have a single column, which is used as the sort key.
chr1:110170896:NM_004037:0:1:0/1
chr1:110170897:NM_004037:0:1:0/1
chr11:10325325:chr11:0:1:0/1
chr11::0325325:chr11:0:1:0/1
The ASCII code of ':' is 58, and that of '1' is 49. However, when I sort the file with sort -k 1,1 temp.txt, the output is like this:
chr11::0325325:chr11:0:1:0/1
chr1:110170896:NM_004037:0:1:0/1
chr1:110170897:NM_004037:0:1:0/1
chr11:10325325:chr11:0:1:0/1
From the result, I have no idea how sort determines the order between '1' and ':'. If there were any fixed order, the first and the fourth lines should be placed together.
Ideally, I hope to sort the key from the left character to the right character according to the ASCII code.
How about
sort -t : -k 1 filename
using the ':' as a field delimiter?
From the man page for GNU sort:
* WARNING * The locale specified by the environment affects sort order. Set LC_ALL=C to get the traditional sort order that uses native byte values.
Using LC_ALL=C sort text (where text is a file where I copied your sample data) on my machine gives the sort order you want.
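Here is a minimal reproduction with the sample lines written inline:

```shell
# Under LC_ALL=C, sort compares raw bytes, so '1' (49) sorts before ':' (58)
# and all the chr11 lines group together ahead of the chr1 lines.
printf 'chr1:110170896:NM_004037:0:1:0/1\nchr1:110170897:NM_004037:0:1:0/1\nchr11:10325325:chr11:0:1:0/1\nchr11::0325325:chr11:0:1:0/1\n' > temp.txt
LC_ALL=C sort -k 1,1 temp.txt
```

Output:
chr11:10325325:chr11:0:1:0/1
chr11::0325325:chr11:0:1:0/1
chr1:110170896:NM_004037:0:1:0/1
chr1:110170897:NM_004037:0:1:0/1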
Still no explanation for why chr11 doesn't sort together in the original example though...
sort is locale-sensitive; it is affected by your locale settings.
You should try setting the locale to C to get back to ASCII order.
Run it as LANG=C sort -k 1,1 temp.txt, or set the environment variable globally.
If you need an explanation of the odd ordering, it would help to post your locale / LANG environment so the reason can be dug out.