I am relatively new to Tcl dictionaries and can't find good documentation on how to initialize an empty dictionary, loop over a log file, and save data into it. Finally, I want to print a table that looks like this:
- Table:
HEAD1
Step 1 Start Time End Time
Step 2 Start Time End Time
- Log:
HEAD1
Step1
Start Time : 10am
.
.
.
End Time: 11am
Step2
Start Time : 11am
.
.
End time : 12pm
HEAD2
Step3
Start Time : 12pm
.
.
.
End Time: 1pm
Step4
Start Time : 1pm
.
.
End time : 2pm
You really don't have to initialise an empty dictionary in Tcl - you can simply start using it and it will get populated as you go along. As mentioned already, the dict man page is the best place to start.
Additionally, I would suggest you check the regexp man page as you can use it nicely to parse your text file.
Not having anything better to do atm, I cobbled together a short sample code that should get you started. Use it as a starting tip, adjust it to your particular log layout and add some defensive measures to prevent errors from unexpected input.
# The following line is not strictly necessary as Tcl does not
# require you to first create an empty dictionary.
# You can simply start using 'dict set' commands below and the first
# one will create a dictionary for you.
# However, declaring something as a dict does add to code clarity.
set tableDict [dict create]
# Depending on your log sanity, you may want to declare some defaults
# so as to avoid errors in case the log file misses one of the expected
# lines (e.g. 'HEADx' is missing).
set headNumber {UNKNOWN}
set stepNumber {UNKNOWN}
set start {UNKNOWN}
set stop {UNKNOWN}
# Now read the file line by line and extract the interesting info.
# If the file indeed contains all of the required lines and exactly
# formatted as in your example, this should work.
# If there are discrepancies, adjust regex accordingly.
set log [open log.txt]
while {[gets $log line] != -1} {
    if {[regexp {HEAD([0-9]+)} $line all value]} {
        set headNumber $value
    }
    if {[regexp {Step([0-9]+)} $line all value]} {
        set stepNumber $value
    }
    if {[regexp {Start Time : ([0-9]+(?:am|pm))} $line all value]} {
        set start $value
    }
    # NOTE: I am assuming that your example was typed by hand and all
    # inconsistencies stem from there. Otherwise, you have to adjust
    # the regular expressions, as 'End Time' is written with varying
    # capitalization and with inconsistent white space around ':'.
    if {[regexp {End Time : ([0-9]+(?:am|pm))} $line all value]} {
        set stop $value
        # NOTE: This short example relies heavily on the log file
        # being formatted exactly as described. Therefore, as soon
        # as we find the 'End Time' line, we assume that we already
        # have everything necessary for the next dictionary entry.
        dict set tableDict HEAD$headNumber Step$stepNumber StartTime $start
        dict set tableDict HEAD$headNumber Step$stepNumber EndTime $stop
    }
}
close $log
# You can now get your data from the dictionary and output your table
foreach head [dict keys $tableDict] {
    puts $head
    foreach step [dict keys [dict get $tableDict $head]] {
        set start [dict get $tableDict $head $step StartTime]
        set stop [dict get $tableDict $head $step EndTime]
        puts "$step $start $stop"
    }
}
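As a side note, the same output stage can be written with dict for, which hands you each key/value pair directly instead of re-fetching values with nested dict get calls; a small equivalent sketch:
# Equivalent output loop using 'dict for'; each value of tableDict is
# itself a dictionary, so it can be iterated the same way.
dict for {head steps} $tableDict {
    puts $head
    dict for {step times} $steps {
        puts "$step [dict get $times StartTime] [dict get $times EndTime]"
    }
}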
Related
I want to know how the command below works.
awk '/Conditional jump or move depends on uninitialised value/ {block=1} block {str=str sep $0; sep=RS} /^==.*== $/ {block=0; if (str!~/oracle/ && str!~/OCI/ && str!~/tuxedo1222/ && str!~/vprintf/ && str!~/vfprintf/ && str!~/vtrace/) { if (str!~/^$/){print str}} str=sep=""}' file_name.txt >> CondJump_val.txt
I'd also like to know how to check the texts Oracle, OCI, and so on from the second line only.
The first step is to write it so it's easier to read
awk '
/Conditional jump or move depends on uninitialised value/ {block=1}
block {
    str=str sep $0
    sep=RS
}
/^==.*== $/ {
    block=0
    if (str!~/oracle/ && str!~/OCI/ && str!~/tuxedo1222/ && str!~/vprintf/ && str!~/vfprintf/ && str!~/vtrace/) {
        if (str!~/^$/) {
            print str
        }
    }
    str=sep=""
}
' file_name.txt >> CondJump_val.txt
It accumulates the lines from one matching "Conditional jump ..." up to one matching "==...== " into a variable str.
If the accumulated string matches none of the several patterns, the string is printed.
I'd also like to know how to check the texts Oracle, OCI, and so on from the second line only.
What does that mean? I assume you don't want to see the "Conditional jump..." line in the output. If that's the case then use the next command to jump to the next line of input.
/Conditional jump or move depends on uninitialised value/ {
    block=1
    next
}
Perhaps consolidate those regexes into a single alternation?
if (str !~ "oracle|OCI|tuxedo1222|v[f]?printf|vtrace") {
    print str
}
There are two idiomatic awkisms to understand.
The first can be simplified to this:
$ seq 100 | awk '/^22$/{flag=1}
/^31$/{flag=0}
flag'
22
23
...
30
Why does this work? In awk, flag can be tested even if it has not been defined yet, which is what the stand-alone flag pattern is doing - the input line is only printed while flag is true, and flag=1 is only executed after the regex /^22$/ matches. flag stops being true once the regex /^31$/ matches in this simple example.
This is an awk idiom for executing code between two regex matches on different lines.
In your case, the two regex's are:
/Conditional jump or move depends on uninitialised value/ # start
# in-between, block is true and collect the input into str separated by RS
/^==.*== $/ # end
The other 'awkism' is this:
block {str=str sep $0; sep=RS}
When block is true, $0 is appended to str; sep is empty the first time through, so no separator is added before the first line, and from then on RS is inserted between lines. The result is:
str="first lineRSsecond lineRSthird lineRS..."
Both idioms depend on awk being able to use an undefined variable without error.
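You can see that behaviour in isolation: an unset awk variable tests as false and concatenates as an empty string, so neither idiom needs any initialisation:
$ awk 'BEGIN { if (flag) print "true"; else print "false"; print "[" str "]" }'
false
[]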
How can I show the results one below the other? At the moment everything is on one line.
bind pub "-|-" !sql pub:test:sql
proc pub:test:sql { nick host handle channel arg } {
    set name "%test%"
    sqlite3 pre test.db
    set result [pre eval {SELECT * FROM pre WHERE rlsname LIKE $name}]
    if {$result == ""} {
        putnow "PRIVMSG $channel :empty"
    } else {
        putnow "PRIVMSG #test :result $result"
        set id [lindex [split $result] 0]
        set outname [lindex [split $result] 1]
        set time [lindex [split $result] 2]
        putnow "PRIVMSG #test :$outname $time"
    }
}
At the moment the result looks like this:
[09.02.20/21:00:43:243] <testbot> result 4 1.test.1 1581256802 160 2.test.2 1581262727
[09.02.20/21:00:43:243] <testbot> output 1.test.1 1581256802
and this is how it should look, with each result on its own line:
[09.02.20/21:00:43:243] <testbot> result 4 1.test.1 1581256802
[09.02.20/21:00:43:243] <testbot> result 160 2.test.2 1581262727
Thank you very much for your help
Regards
If you wish to process the selected rows one row at a time, one option is to use optional arguments to the eval command to specify a variable name and script. In that form of the eval command, each row is assigned to the variable as an array and the script is executed. In your case:
per eval {select * from PER ...} per_result {
    puts "$per_result(somecolumn1) $per_result(somecolumn2)"
}
or something like that where the array indices are column names.
See the eval manual page for more details and examples.
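Adapted to the code in the question, that might look roughly like the sketch below. Note that the column names id and ctime are only guesses, so substitute whatever your pre table actually uses:
set name "%test%"
sqlite3 pre test.db
# Hedged sketch: the body runs once per matching row, so every result
# goes out as its own PRIVMSG line. 'id' and 'ctime' are assumed column names.
pre eval {SELECT * FROM pre WHERE rlsname LIKE $name} row {
    putnow "PRIVMSG $channel :result $row(id) $row(rlsname) $row(ctime)"
}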
I have a problem where I have a list of fields from a table (not static, can be modified by user), and I need to generate a report using these user selected fields. The report can show all the rows, no need for aggregation or filtering.
I thought I could create a report layout then using a filemaker script to populate it but can't seem to find the right commands, can someone let me know how I could achieve this?
I'm using filemaker pro 18 advanced
Thanks in advance!
EDIT: Since you want a dynamic report, I recommend you look up a technique called "Virtual List" for rendering the data.
Here's an example script that iterates over a found set of records and builds the virtual list data in a variable (it doesn't show how to render it though):
# Field names and delimiter
Set Variable [ $delim ; Value: Char(9) // tab character ]
# Set these dynamically with a script parameter
Set Variable [ $fields ; Value: List ( "Contacts::nameFirst" ; "Contacts::nameCompany" ; "Contacts::nameLast" ) ]
Set Variable [ $fieldCount ; Value: ValueCount ( $fields ) ]
Go to Layout [ “Contacts” (Contacts) ; Animation: None ]
Show All Records
Go to Record/Request/Page [ First ]
# Loop over all the records and append a row in the $data variable for each
Set Variable [ $data ; Value: "" ]
Loop
    # Get the delimited field values
    Set Variable [ $i ; Value: 0 ]
    Set Variable [ $row ; Value: "" ]
    Loop
        Exit Loop If [ Let ( $i = $i + 1 ; $i > $fieldCount ) ]
        Set Variable [ $value ; Value: GetField ( GetValue ( $fields ; $i ) ) ]
        Insert Calculated Result [ Target: $row ; If ( $i > 1 ; $delim ) & $value ]
    End Loop
    # Append the new row of data to the list variable
    Insert Calculated Result [ Target: $data ; If ( Get ( RecordNumber ) > 1 ; ¶ ) & $row ]
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
# Save to a global variable to show in a virtual list layout
Set Variable [ $$DATA ; Value: $data ]
Exit Script [ Text Result: ]
Please note this code is just one of many possible formats the virtual list can take. A lot of people, myself included, prefer to use JSON objects or arrays for each row of the list since it automatically handles field values with carriage returns. This is sort of the old-fashioned way. Kevin Frank at FileMaker Hacks has some good recent articles about virtual list techniques if you're interested.
PS, another great technique for rendering table data dynamically is to collect the data in a JSON array and render it in a webviewer with https://datatables.net/
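As a rough sketch of that JSON flavour (assuming FileMaker 16 or later for the built-in JSON functions, and reusing the placeholder field names from the script above), the inner loop could build each row as a small JSON array instead of a tab-delimited string:
# Hypothetical replacement for the inner row-building loop
Set Variable [ $row ; Value: JSONSetElement ( "[]" ;
    [ 0 ; Contacts::nameFirst ; JSONString ] ;
    [ 1 ; Contacts::nameCompany ; JSONString ] ;
    [ 2 ; Contacts::nameLast ; JSONString ] ) ]
# Append $row to $data with ¶ exactly as before; because each value is now
# JSON-encoded, embedded carriage returns in field data can no longer break a row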
I did something like this for the oncology department of UM in 1980 or so using 4th Dimension and a new plug-in that used one line of code to create a web browser with all the functions that a doctor might want. The data was placed inside a variable as it was sent/returned, and 4D could use a variable in the report to display the data.
FileMaker does not have this ability built in as 4D did, so you will have to do it yourself. JSON is the most likely tool, and the one I am familiar with. YouTube has many videos on JSON.
You have two classes of variables for your report: column headers and column data to display. Fortunately, FileMaker layouts are quite easy to design. Just make a typical report layout and replace the header text and field names with variables populated from your JSON, e.g. $ColumnName = a value pulled from the JSON.
Create a JSON calculated field in the database. Set the JSON data in that calculated field, and that one field can then be used to feed all of the columns.
This is the essence of the idea with the final result to be determined by you. What you are asking for is not easy and would require serious work by a skilled JSON scripter.
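As a concrete, purely hypothetical illustration of that idea, a script could fill layout variables from one global JSON variable before the report is previewed, with <<$$HEADER1>> and <<$$HEADER2>> placed on the layout as merge variables in the header part:
# $$REPORT is assumed to hold something like {"header1":"First Name","header2":"Company"}
Set Variable [ $$HEADER1 ; Value: JSONGetElement ( $$REPORT ; "header1" ) ]
Set Variable [ $$HEADER2 ; Value: JSONGetElement ( $$REPORT ; "header2" ) ]
Enter Preview Mode [ Pause: On ]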
I am creating scripts which will store the contents of a pipe-delimited file. Each column is stored in a separate array. I then read the information from the arrays and process it. There are 20 pipe-delimited files and I need to write 20 scripts. The processing that happens in each script after the information is stored in the arrays is different. The number of columns in each pipe-delimited file is different (but in no case more than 9 columns). I need to do this activity of storing the information in the arrays at the beginning of each script. The way I am doing it at present is given below. I would like help understanding how I can write a function to do this.
cat > example_file.txt <<End-of-message
some text first row|other text first row|some other text first row
some text nth row|other text nth row|some other text nth row
End-of-message
# Note that example_file.txt will be available. I have created it inside
# the script just to let you know the format of the file.
OIFS=$IFS
IFS='|'
i=0
while read -r first second third ignore
do
    first_arr[$i]=$first
    second_arr[$i]=$second
    third_arr[$i]=$third
    (( i=i+1 ))
done < example_file.txt
IFS=$OIFS
Here is a sort-of minimal change to your script that should get you further...
...
...
while read -r first second third ignore
do
    arr0[$i]=$first
    arr1[$i]=$second
    arr2[$i]=$third
    (( i=i+1 ))
done < example_file.txt
IFS=$OIFS
proc0 () {
    for j in "$@"; do
        echo proc0 : "$j"
    done
}
proc1 () {
    echo proc1
}
proc2 () {
    echo proc2
}
for i in 0 1 2; do
    t=arr$i'[@]'
    proc$i "${!t}"
done
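If you specifically want the reading step wrapped in a function that each of the 20 scripts can call once, here is one hedged sketch. It assumes bash 4.3 or newer (for namerefs), and the helper name read_columns is made up for the example:
# read_columns FILE ARR1 [ARR2 ...]
# Appends column i of every row of the pipe-delimited FILE to array ARRi.
read_columns () {
    local file=$1; shift
    local -a names=("$@") fields
    local i
    while IFS='|' read -r -a fields; do
        for i in "${!names[@]}"; do
            local -n _col=${names[i]}      # nameref to the caller's array
            _col+=("${fields[i]:-}")       # empty string if a column is missing
        done
    done < "$file"
}

# Usage: one call per script, listing as many arrays as the file has columns
read_columns example_file.txt first_arr second_arr third_arr
echo "${third_arr[0]}"    # -> some other text first row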
Hi, I want to delete lines from a file which match a particular pattern.
The code I am using is:
BEGIN {
    FS = "!";
    stopDate = "date +%Y%m%d%H%M%S";
    deletedLineCtr = 0; #diagnostics counter, unused at this time
}
{
    if( $7 < stopDate )
    {
        deletedLineCtr++;
    }
    else
        print $0
}
The file has "!"-separated fields and the 7th field is a date in yyyymmddhhmmss format. The script should delete any line whose date is less than the system date, but this doesn't work. Can anyone tell me the reason?
Is the awk(1) assignment due Tuesday? Really, awk?? :-)
Ok, I wasn't sure exactly what you were after so I made some guesses. This awk program gets the current time of day and then removes every line in the file whose date is less than that. I left one debug print in.
BEGIN {
    FS = "!"
    stopDate = strftime("%Y%m%d%H%M%S")
    print "now: ", stopDate
}
{ if ($7 >= stopDate) print $0 }
$ cat t2.data
!!!!!!20080914233848
!!!!!!20090914233848
!!!!!!20100914233848
$ awk -f t2.awk < t2.data
now: 20090914234342
!!!!!!20100914233848
$
call date first to pass the formatted date as a parameter:
awk -F'!' -v stopdate=$( date +%Y%m%d%H%M%S ) '
$7 < stopdate { deletedLineCtr++; next }
{print}
END {do something with deletedLineCtr...}
'
You would probably need to run the date command - maybe with backticks - to get the date into stopDate. If you printed stopDate with the code as written, it would contain "date +...", not a string of digits. That is the root cause of your problem.
Unfortunately...
I cannot find any evidence that backticks work in any version of awk (old awk, new awk, GNU awk). So, you either need to migrate the code to Perl (Perl was originally designed as an 'awk-killer' - and still includes a2p to convert awk scripts to Perl), or you need to reconsider how the date is set.
Seeing #DigitalRoss's answer, the strftime() function in gawk provides you with the formatting you want (check 'info gawk' as I did).
With that fixed, you should be getting the right lines deleted.
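For completeness, any POSIX awk (nawk, gawk) can also run the command itself from the BEGIN block with the cmd | getline idiom, which avoids both backticks and the gawk-only strftime(); a minimal sketch:
BEGIN {
    FS = "!"
    cmd = "date +%Y%m%d%H%M%S"
    cmd | getline stopDate    # read the command's output into stopDate
    close(cmd)
}
$7 >= stopDate    # keep only lines whose date is not earlier than now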