Why is cert.pl complaining about ttags - acl2

Often I get the following error message when certifying ACL2 books:
| ACL2 Error in ( INCLUDE-BOOK "something" ...): The ttag :FAST-CAT
| associated with file /elided/acl2/books/std/strings/fast-cat.lisp
| is not among the set of ttags permitted in the current context,
| specified as follows:
| NIL.
| See :DOC defttag.
What's wrong?

You need to add a <something>.acl2 file (typically cert.acl2 works just fine) to the directory that contains the book you're trying to certify. This <something>.acl2 file tells cert.pl that ttags are permitted, e.g., with the following code:
(in-package "ACL2")
; cert-flags: ? t :ttags :all
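For reference, cert.pl passes those flags through to certify-book, so the equivalent manual certification inside an ACL2 session (with a placeholder book name my-book.lisp) would be:
(certify-book "my-book" ? t :ttags :all)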

Fitnesse Ignore result of method

I am trying to run this fixture code:
|script |Browser Test |
|open |https://jsfiddle.net/ygjL7hnm/3/show|
|click |Run this fiddle |
|click if available|id=test |
|click if available|id=test2 |
|click if available|id=test3 |
|click if available|id=test4 |
|wait |2 |seconds |
The problem is I want to ignore the result of "click if available", meaning the test should be all green. I could do that with "reject", and that would work perfectly fine, but the problem is I don't know which buttons are missing or present, so I would have to write "reject" before every "click if available", which results in an error if the button is actually available.
But what I want is that, no matter whether the button is there or not, it should just try to click it and ignore the result of the method. It should not check whether it found the button or not. I hope it's clear what I want; if not, feel free to ask. Thanks.
Edit: Worth noting I am using this FitNesse version with the HSAC plugin.
For Slim tests (which you are using in this case), prepend the script table row with the show keyword. It will just output the function's return value (true/false in the case of 'click if available').
More info: http://fitnesse.org/FitNesse.UserGuide.WritingAcceptanceTests.SliM.ScriptTable
From that link:
If the word show is in the first cell, then it should be followed by a function. A new cell will be added when the test is run, and it will contain the return value of the function.
The script below should not fail on the execution of click if available:
|script |Browser Test |
|open |https://jsfiddle.net/ygjL7hnm/3/show |
|click |Run this fiddle |
|show |click if available |this does not exist |

Why can't I open a *.w file in the appBuilder?

I have a *.w file referring to two include files ({incl\include_file.i}, {incl\do_something_file.i}). The first include file contains the definition of a RECID variable, "recordid":
DEF INPUT-OUTPUT PARAMETER recordid AS RECID.
I am able to compile the *.w file; a fragment of the listing file looks as follows:
Prompt>findstr "recordid do_something" listing.txt
...
1 x DEF INPUT-OUTPUT PARAMETER recordid AS RECID.
...
1 x 1 {incl\do_something_file.i
2 x 1 INPUT-OUTPUT recordid
So the compilation works. On top of that, I've checked the pairs of "&ANALYZE-SUSPEND" and "&ANALYZE-RESUME" clauses and everything is fine.
Nevertheless, I can't open the *.w file, as the mentioned RECID seems not to be known (errors 201 and 196).
Edit after first comments
This is the exact error message I get while opening the *.w file using the AppBuilder (I'm working with a Dutch version of the tool, hence the Dutch words in between):
---------------------------
Fout [Error]
---------------------------
This file cannot be analyzed by the AppBuilder.
Please check these problems in your file or environment:
** Onbekende veld- of variabelenaam - recordid. (201) [Unknown field or variable name - recordid]
** .\incl\<do_something_file>.i Compilatiefout op regel 7. (196) [Compilation error on line 7]
---------------------------
OK
---------------------------
Edit with more information on ANALYZE- clauses
I've launched following findstr command on my code with the following results:
Prompt>findstr /I "ANALYZE-RESUME ANALYZE-SUSPEND" <filename>.w
&ANALYZE-SUSPEND _VERSION-NUMBER ... GUI
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS ...
&ANALYZE-RESUME
...
I confirm that the number of &ANALYZE-SUSPEND clauses equals the number of &ANALYZE-RESUME clauses, they are in the right sequence (first a SUSPEND and then a RESUME) and none of them is commented out.
Does anybody have an idea what's going wrong?
The problem was caused by an include sitting outside of a suspend/resume block. To diagnose such a situation, the following command might be useful:
findstr /I "ANALYZE {incl" <source_file>.w
The result should look like the following:
...
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CONTROL C-Win
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _MAIN-BLOCK C-Win
{incl\something.i}
{incl\something_else.i}
&ANALYZE-RESUME
...
This illustrates the following rules:
The number of suspends and resumes must be equal.
Every suspend is to be closed by a resume.
Not one of those can be commented out.
It is advised to have includes between the suspend and the resume.
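For contrast, the failing layout had an include outside any suspend/resume pair, roughly like this (block and include names are placeholders reused from above):
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _MAIN-BLOCK C-Win
&ANALYZE-RESUME
{incl\something.i}
Moving the include up, between the &ANALYZE-SUSPEND and &ANALYZE-RESUME lines, lets the AppBuilder analyze the file again.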

How do I flatten a large, complex, deeply nested JSON file into multiple CSV files with a linking identifier?

I have a complex JSON file (~8GB) containing publicly available data for businesses. We have decided to split the file up into multiple CSV files (or tabs in a .xlsx) so clients can easily consume the data. These files will be linked by the NZBN column/key.
I'm using R and jsonlite to read a small sample in (before scaling up to the full file). I'm guessing I need some way to specify what keys/columns go in each file (i.e., the first file will have headers australianBusinessNumber, australianCompanyNumber, australianServiceAddress; the second file will have headers annualReturnFilingMonth, annualReturnLastFiled, countryOfOrigin...)
Here's a sample of two businesses/entities (I've bunged some of the data as well so ignore the actual values): test file
I've read almost every post on Stack Overflow asking similar questions, and none seem to be giving me any luck. I've tried variations of purrr, *apply commands, custom flattening functions, and jqr (an R version of jq - looks promising, but I can't seem to get it running).
Here's an attempt at creating my separate files, but I'm unsure how to include the linking identifier (NZBN), and I keep running into further nested lists (I'm unsure how many levels of nesting there are):
library(jsonlite)
library(purrr)
library(dplyr)

bulk <- jsonlite::fromJSON("bd_test.json")
# Core entity table: keep only the atomic (non-list) columns
coreEntity <- data.frame(bulk$companies)
coreEntity <- coreEntity[, sapply(coreEntity, is.list) == FALSE]
# Drill into the nested company records and stack them row-wise
company <- bulk$companies$entity$company
company <- purrr::reduce(company, dplyr::bind_rows)
shareholding <- company$shareholding
shareholding <- purrr::reduce(shareholding, dplyr::bind_rows)
shareAllocation <- shareholding$shareAllocation
shareAllocation <- purrr::reduce(shareAllocation, dplyr::bind_rows)
I'm not sure whether it's easier to split the files up during the flattening/wrangling process, or to completely flatten the whole file so I have one line per business/entity (and then gather columns as needed) - my only concern is that I need to scale this up to ~1.3 million nodes (an 8GB JSON file).
Ideally I would want the csv files split every time there is a new collection, and the values in the collection would become the columns for the new csv/tab.
Any help or tips would be much appreciated.
------- UPDATE ------
Updated, as my question was a little vague. I think all I need is some code to produce one of the CSVs/tabs, and I can replicate it for the other collections.
Say for example, I wanted to create a csv of the following elements:
entityName (unique linking identifier)
nzbn (unique linking identifier)
emailAddress__uniqueIdentifier
emailAddress__emailAddress
emailAddress__emailPurpose
emailAddress__emailPurposeDescription
emailAddress__startDate
How would I go about that?
"I'm unsure how many levels of nesting there are"
This will provide an answer to that quite efficiently:
jq '
  def max(s): reduce s as $s (null;
    if . == null then $s elif $s > . then $s else . end);
  max(paths|length)' input.json
(With the test file, the answer is 14.)
To get an overall view (schema) of the data, you could run:
jq 'include "schema"; schema' input.json
where schema.jq is available at this gist. This will produce a structural schema.
"Say for example, I wanted to create a csv of the following elements:"
Here's a jq solution, apart from the headers:
.companies.entity[]
| [.entityName, .nzbn]
  + (.emailAddress[] | [.uniqueIdentifier, .emailAddress, .emailPurpose, .emailPurposeDescription, .startDate])
| @csv
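Note that @csv produces a JSON string, so you need jq's raw-output flag to get plain CSV text; assuming the filter above is saved as emails.jq (the filename is just a placeholder), you would run:
jq -r -f emails.jq input.json > emailAddress.csv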
shareholding
The shareholding data is complex, so in the following I've used the to_table function defined elsewhere on this page.
The sample data does not include a "company name" field so in the following, I've added a 0-based "company index" field:
.companies.entity[]
| [.entityName, .nzbn] as $ix
| .company
| range(0;length) as $cix
| .[$cix]
| $ix + [$cix] + (.shareholding[] | to_table(false))
jqr
The above solutions use the standalone jq executable, but, all going well, it should be trivial to use the same filters with jqr. To use jq's include, though, it might be simplest to specify the search path explicitly, as for example:
include "schema" {search: "~/.jq"};
If the input JSON is sufficiently regular, you might find the following flattening function helpful, especially as it can emit a header in the form of an array of strings based on the "paths" to the leaf elements of the input, which can be arbitrarily nested:
# to_table produces a flat array.
# If hdr == true, then ONLY emit a header line (in prettified form, i.e. as an array of strings);
# if hdr is an array, it should be the prettified form and is used to check consistency.
def to_table(hdr):
  def prettify: map( map(tostring) | join(":") );
  def composite: type == "object" or type == "array";
  def check:
    select(hdr | type == "array")
    | if prettify == hdr then empty
      else error("expected header is \(hdr) but imputed header is \(.)")
      end;
  . as $in
  | [paths(composite|not)]          # the paths in array-of-array form
  | if hdr == true then prettify
    else check, map(. as $p | $in | getpath($p))
    end;
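To see what to_table produces, consider a small hypothetical input: given {"a": {"b": 1, "c": [2, 3]}}, to_table(true) emits the header ["a:b", "a:c:0", "a:c:1"], while to_table(false) emits the row [1, 2, 3].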
For example, to produce the desired table (without headers) for .emailAddress, one could write:
.companies.entity[]
| [.entityName, .nzbn] as $ix
| $ix + (.emailAddress[] | to_table(false))
| @tsv
(Adding the headers and checking for consistency are left as an exercise for now, but are dealt with below.)
Generating multiple files
More interestingly, you could select the level you want and produce multiple tables automagically. One way to partition the output into separate files efficiently would be to use awk. For example, you could pipe the output obtained using this jq filter:
["entityName", "nzbn"] as $common
| .companies.entity[]
| [.entityName, .nzbn] as $ix
| (to_entries[] | select(.value | type == "array") | .key) as $key
| ($ix + [$key] | join("-")) as $filename
| (.[$key][0]|to_table(true)) as $header
# First emit the line giving all the headers:
| $filename, ($common + $header | #tsv),
# Then emit the rows of the table:
(.[$key][]
| ($filename, ($ix + to_table(false) | #tsv)))
to
awk -F\\t 'fn {print >> fn; fn=0;next} {fn=$1".tsv"}'
This will produce headers in each file; if you want consistency checking, change to_table(false) to to_table($header).
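Putting the pieces together (assuming the filter above, with the to_table definition prepended, is saved as split.jq; the filename is a placeholder), the whole pipeline would look something like:
jq -r -f split.jq input.json | awk -F\\t 'fn {print >> fn; fn=0;next} {fn=$1".tsv"}'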

Dictionary as variable in Robot Framework: code runs ok but the IDE yields error

I'm trying to set up a dictionary as a variable (so I can use it as a Resource and access its values from another file), and something about it is driving me crazy.
Here is the code I have (just for testing purposes):
*** Settings ***
Documentation    Suite description
Library          Collections

*** Variables ***
&{SOME DICT}    key1=value1    key2=value2

*** Test Cases ***
Dict Test    # why $ instead of &?
    ${RANDOM VAR}=    Get From Dictionary    ${SOME DICT}    key1
    Log    ${RANDOM VAR}    WARN
If I run that, I get the expected result ([ WARN ] value1), BUT the IDE (PyCharm) complains that the ${SOME DICT} variable is not defined, and the dictionary declaration is not highlighted the same way as a variable or a list.
If I change that to &{SOME DICT} the IDE won't complain anymore, but the test fails with the following output:
Dict Test | FAIL |
Keyword 'Collections.Get From Dictionary' got positional argument after named arguments.
That is puzzling me to no end: why do I have to use a $ instead of a & to make it work if it's a dictionary? Is there something I am doing wrong, and is it just working by luck?
Thanks for any advice or guidance you may have!
Have a look at the "Get From Dictionary" libdoc; it looks like the example shows the same thing as your working snippet:
Name: Get From Dictionary
Source: Library (Collections)
Arguments: [dictionary, key]
Returns a value from the given ``dictionary`` based on the given ``key``.
If the given ``key`` cannot be found from the ``dictionary``, this
keyword fails.
The given dictionary is never altered by this keyword.
Example:
| ${value} = | Get From Dictionary | ${D3} | b |
=>
| ${value} = 2
Keyword implementation details are as follows:
try:
    return dictionary[key]
except KeyError:
    raise RuntimeError("Dictionary does not contain key '%s'." % key)
So indeed, Robot passes the dictionary object itself, not just its name, which is why the value for the key can be returned. With ${SOME DICT}, the whole dictionary travels as a single positional argument; with &{SOME DICT}, Robot expands it into named arguments (key1=value1, key2=value2), so the positional key1 that follows triggers the "positional argument after named arguments" error.
This is the same as a direct call in Python:
a = {u'key1': u'value1', u'key2': u'value2'}
print(a['key1'])
In the end, the libdoc for that keyword is not straightforward, but your PyCharm plugin for Robot does not work properly in this case.
In RED Robot Editor (Eclipse based), the proper case does not raise any warnings in the editor, and the wrong case produces an error marker about the arguments (better, but still not clear what exactly is wrong; blame the minimalistic libdoc info).
PS: To be clear, I am the lead of the RED project.
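To see the $-versus-& distinction in one place, here is a minimal sketch (Get From Dictionary is from Collections, Create Dictionary from BuiltIn; the dictionary is the one defined above):
*** Test Cases ***
Dollar Versus Ampersand
    # ${...} passes the dictionary as one positional argument:
    ${value}=    Get From Dictionary    ${SOME DICT}    key1
    # &{...} expands the dictionary into named arguments, which suits
    # keywords that accept free named arguments, e.g. Create Dictionary:
    &{copy}=    Create Dictionary    &{SOME DICT}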
Simple example of using a key/value variable in Robot Framework: set values into a dictionary, then get a value from the dictionary:
&{initValues}    Create Dictionary    key1=value1    key2=value2
Set To Dictionary    ${initValues}    key1=newvalue1
Set To Dictionary    ${initValues}    key2=newvalue2
Set To Dictionary    ${initValues}    key3=newvalue3
${value}    Get From Dictionary    ${initValues}    key1

Robot Framework: Start Process with arguments on Windows?

I'm quite new to Robot Framework, and I cannot find a way to run a process with arguments on Windows. I'm quite sure I've misunderstood the documentation, though, and that there is a simple way of doing this...
Ok, let's say I can start my program using this command:
c:\myappdir>MyApp.exe /I ..\params\myAppParams.bin
How do I do that in RF?
Any kind of help would be appreciated.
Thank you very much :)
Edit 1:
Here is a piece of my code:
| *Setting* | *Value*
| Resource | compilationResource.robot
#(Process lib is included in compilationResource)
#I removed the "|" for readability
...
TEST1
...
${REPLAYEXEDIR}= get_replay_exe_dir #from a custom lib included in compilationResource
${EXEFULLPATH}= Join Path ${WORKSPACEDIR} ${REPLAYEXEDIR} SDataProc.exe
Should Exist ${EXEFULLPATH}
${REPLAYLOGPATH}= Join Path ${WORKSPACEDIR} ReplayLog.log
${REPLAYFILEPATH}= Join Path ${WORKSPACEDIR} params params.bin
Should Exist ${REPLAYFILEPATH}
Start Process ${EXEFULLPATH} stderr=${REPLAYLOGPATH} stdout=${REPLAYLOGPATH} alias=replayjob
Process Should Be Running replayjob
Terminate Process replayjob
Process Should Be Stopped replayjob
This works. As soon as I try to include the arguments like this:
Start Process ${EXEFULLPATH} ${/}I ${REPLAYFILEPATH} stderr=${REPLAYLOGPATH} stdout=${REPLAYLOGPATH} alias=replayjob
I get this error:
WindowsError: [Error 2] The system cannot find the file specified
and this error comes from the start process line.
Let me know if I was unclear or if more info is needed.
Thank you all for your help on this.
Edit 2: SOLUTION
Each argument must be separated from the others (when not running in a shell) with a double space. I was not using double spaces, hence the error.
| | Start Process | ${EXEFULLPATH} | /I | ${REPLAYFILEPATH} | stderr=${REPLAYLOGPATH} | stdout=${REPLAYLOGPATH} | alias=replayjob
To launch your program from a Robot Framework Test, use the Process library like:
*** Settings ***
Library    Process

*** Test Cases ***
First test
    Run Process    c:${/}myappdir${/}prog.py    /I    ..${/}params${/}myAppParams.bin
    # and then do some tests....
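If you would rather keep the whole command line as a single string, the Process library can also run it through the shell; a minimal sketch, assuming the same paths as in the question:
Run Process    MyApp.exe /I ..${/}params${/}myAppParams.bin    shell=True    cwd=c:${/}myappdir
With shell=True the command can be given as one string, at the cost of shell-specific escaping rules.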
