Irony: how come some things generate an AST and some don't?

The SQL sample just generates a token tree,
but most samples on how to use Irony all go 'parse', then 'traverse the generated AST'.

The AST is only generated if the LanguageFlags.CreateAst flag is set on the grammar.
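For illustration, here is a minimal sketch of a grammar with that flag set (the terminal names and AST node-type mappings are illustrative, not taken from the SQL sample):

using Irony.Ast;
using Irony.Interpreter.Ast;
using Irony.Parsing;

// Minimal sketch: without LanguageFlags.CreateAst, tree.Root.AstNode stays null.
public class TinyExprGrammar : Grammar
{
    public TinyExprGrammar()
    {
        var number = new NumberLiteral("number");
        number.AstConfig.NodeType = typeof(LiteralValueNode);   // illustrative mapping

        var expr = new NonTerminal("expr", typeof(BinaryOperationNode));
        expr.Rule = number + ToTerm("+") + number;
        Root = expr;

        // This is the flag in question: it tells the parser to run the
        // AST builder after the parse tree is complete.
        LanguageFlags = LanguageFlags.CreateAst;
    }
}

// Usage:
// var parser = new Parser(new LanguageData(new TinyExprGrammar()));
// var tree = parser.Parse("1 + 2");
// var ast = tree.Root.AstNode;  // non-null only because CreateAst is set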

Related

Searching a DICOM server for metadata

I want to search a DICOM server. If, for example, the user enters a patient ID to search for, then my app should populate a table with all the metadata relating to that ID, such as ID, name, accession number, etc., if the study exists on the DICOM server. How can this be done using the dcm4che kit?
You can use the dcm4che3 tool dcm4che-tool-findscu. This code shows you how to do a C-FIND against a PACS (or whatever implements C-FIND as an SCP).
FindSCU.java is quite clear; take a while with it and don't get lost in the Apache Commons CLI code when trying to understand the console input handling. Most of the CLI management code is not in this project; you can find it in the dcm4che3 dcm4che-tool-common project, in the org.dcm4che3.tool.common.CLIUtils class.
Take into account the following considerations:
Specify the search level of the Query/Retrieve. You can use several search levels in order to match attributes in a PACS. If you look at lines 260:265 of FindSCU.java, you will see that you can manage four different levels: PATIENT|STUDY|SERIES|IMAGE. This instructs the C-FIND SCP how to search for matching attributes.
Tell the C-FIND SCP which attributes you want to retrieve. If you want to search for studies to be retrieved later, you must ask for the (0020,000D) StudyInstanceUID tag.
Of course, add all the attributes that you want to populate your table with.
Use the retrieved (0020,000D) StudyInstanceUID tag value to do the C-GET/C-MOVE operation.
You can see how to configure attribute keys for the C-FIND SCU in the CLIUtils class, which is part of the dcm4che3 dcm4che-tool-common project. See CLIUtils.addAttributes(Attributes, String[]).
Hope it helps!
Edit
Given your comment that you are using dcm4che2 and that you already have a DicomObject with the search result: if you want to obtain metadata from this DicomObject, you must parse it first using DicomInputStream, and then you can use the getXXXX(Tag) accessors from BasicDicomObject, something like this:
DicomObject dcmObj;
DicomInputStream dis = null;
try {
    dis = new DicomInputStream(file);
    dcmObj = dis.readDicomObject();
    String someVar = dcmObj.getString(Tag.SeriesInstanceUID);
} finally {
    if (dis != null) dis.close(); // always release the stream
}
Keep in mind that some attributes are nested inside sequences; in that case you have to locate the enclosing sequence first.
You can also take a look at dcm4che-tool-dcm2txt: in Dcm2Txt.java, around line 170 and onwards, you can see how to parse a whole DICOM object.
If you need a general description of the DICOM network protocol, you could read the "Understanding DICOM with Orthanc" guide, and more specifically the section about C-Find.

AND/OR Query search using MarkLogic (XQuery or equivalent)

I am new to MarkLogic and we are evaluating MarkLogic for our product's use case.
We have evaluated a few NoSQL databases like MongoDB, Couchbase etc.
I am looking for the below type of query search:
(Condition1 OR Condition2) AND (Condition3 OR Condition4) AND (Condition5)
Can MarkLogic provide this type of search query?
I have just started learning MarkLogic and am trying to understand the architecture.
Thanks,
Sameer
Yes, MarkLogic provides some high-level libraries for this type of functionality. Take a look at the Search API.
Start here: https://developer.marklogic.com/learn/2009-07-search-api-walkthrough
And more thorough documentation is here: https://docs.marklogic.com/guide/search-dev/search-api
MarkLogic can handle this kind of logic in many ways, as mentioned.
For example, this is how you could set up a search query using the CTS library (I highly recommend the CTS library, since it uses indexes much better, and constructing queries with it is much more flexible):
cts:search(//elementName,
  cts:and-query((
    cts:element-attribute-value-query(xs:QName("entry"), xs:QName("private"), "true"),
    cts:or-query((
      cts:element-attribute-value-query(xs:QName("entry"), xs:QName("forced"), "false"),
      cts:element-attribute-value-query(xs:QName("entry"), xs:QName("forced"), "pending")
    ))
  ))
)
This snippet shows both AND and OR logic. The cts:and-query() and cts:or-query() functions each take a list of queries. The above query says: "Find an element (called elementName) that contains an entry with private='true' AND with either forced='false' or forced='pending'".
For much simpler data, you can use XQuery predicates, doing something like the following:
for $node in $xml/parent/child[@param1 eq "test" and @param2 eq "OK"]/grandchild[@service eq "yahoo" or @service eq "google"]
return $node
The short answer to the original question is "yes". The details of "how" will depend on the approach used to express the queries.
The reference architecture recommends a three-tier approach using the Java or Node.js Client APIs if you use one of those languages, or HTTP calls to the REST API if you use a different language in your middle tier.
You can also use the Search API (as mentioned by wst) if you're working in MarkLogic's application server (typically as a two-tier architecture). You can do that with either XQuery or, as of MarkLogic 8, Server-side JavaScript.
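If your middle tier happens to be .NET-based (a language not covered by the dedicated client APIs above), the same (Condition1 OR Condition2) AND (Condition3 OR Condition4) AND (Condition5) shape can be posted to the REST API's /v1/search endpoint as a structured query. A hedged sketch - the host, port, credentials, and JSON property names are all illustrative:

using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class StructuredSearch
{
    static async Task Main()
    {
        // MarkLogic REST instances default to digest authentication.
        var handler = new HttpClientHandler { Credentials = new NetworkCredential("rest-user", "password") };
        using var client = new HttpClient(handler);

        // (Condition1 OR Condition2) AND (Condition3 OR Condition4) AND (Condition5)
        var structuredQuery = @"{ ""query"": { ""queries"": [ { ""and-query"": { ""queries"": [
            { ""or-query"": { ""queries"": [
                { ""value-query"": { ""json-property"": ""status"", ""text"": [""new""] } },
                { ""value-query"": { ""json-property"": ""status"", ""text"": [""open""] } } ] } },
            { ""or-query"": { ""queries"": [
                { ""value-query"": { ""json-property"": ""owner"", ""text"": [""sam""] } },
                { ""value-query"": { ""json-property"": ""owner"", ""text"": [""alex""] } } ] } },
            { ""value-query"": { ""json-property"": ""active"", ""text"": [""true""] } }
        ] } } ] } }";

        var response = await client.PostAsync(
            "http://localhost:8000/v1/search",
            new StringContent(structuredQuery, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}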

Get AST from .Net assembly without source code (IL code)

I'd like to analyze .NET assemblies in a way that is language-independent of C#, VB.NET or whatever.
I know Roslyn and NRefactory, but they only seem to work at the C# source-code level?
There is also the "Common Compiler Infrastructure: Code Model and AST API" project on CodePlex, which claims to "support a hierarchical object model that represents code blocks in a language-independent structured form", which sounds like exactly what I am looking for.
However, I am unable to find any useful documentation or code that actually does this.
Any advice on how to achieve this?
Can Mono.Cecil maybe do something here?
You can do this and there is also one (although tiny) example of this in the source of ILSpy.
// Uses Mono.Cecil to load the assembly, then ILSpy's (old) AstBuilder API to decompile it.
var assembly = AssemblyDefinition.ReadAssembly("path/to/assembly.dll");
var astBuilder = new AstBuilder(new DecompilerContext(assembly.MainModule));
astBuilder.AddAssembly(assembly);
// astBuilder.SyntaxTree now holds the decompiled syntax tree.
The CCI Code Model is somewhere between an IL disassembler and a full C# decompiler: it gives your code some structure (e.g. if statements and expressions), but it also contains some low-level stack operations like push and pop.
CCI contains a sample that shows this: PeToText.
For example, to get the Code Model for the first method of the Program type (in the global namespace), you could use code like this:
// Requires using System.Linq and the Microsoft.Cci / Microsoft.Cci.ILToCodeModel namespaces.
string fileName = "whatever.exe";
using (var host = new PeReader.DefaultHost())
{
    var module = (IModule)host.LoadUnitFrom(fileName);
    var type = (ITypeDefinition)module.UnitNamespaceRoot.Members
        .Single(m => m.Name.Value == "Program");
    var method = (IMethodDefinition)type.Members.First();
    var methodBody = new SourceMethodBody(method.Body, host, null, null);
}
To demonstrate, if you decompile the above code and show it using PeToText, you're going to get:
Microsoft.Cci.ITypeDefinition local_3;
Microsoft.Cci.ILToCodeModel.SourceMethodBody local_5;
string local_0 = "C:\\code\\tmp\\nuget tmp 2015\\bin\\Debug\\nuget tmp 2015.exe";
Microsoft.Cci.PeReader.DefaultHost local_1 = new Microsoft.Cci.PeReader.DefaultHost();
try
{
    push (Microsoft.Cci.IModule)local_1.LoadUnitFrom(local_0).UnitNamespaceRoot.Members;
    push Program.<>c.<>9__0_0;
    if (dup == default(System.Func<Microsoft.Cci.INamespaceMember, bool>))
    {
        pop;
        push Program.<>c.<>9.<Main0>b__0_0;
        Program.<>c.<>9__0_0 = dup;
    }
    local_3 = (Microsoft.Cci.ITypeDefinition)System.Linq.Enumerable.Single<Microsoft.Cci.INamespaceMember>(pop, pop);
    local_5 = new Microsoft.Cci.ILToCodeModel.SourceMethodBody((Microsoft.Cci.IMethodDefinition)System.Linq.Enumerable.First<Microsoft.Cci.ITypeDefinitionMember>(local_3.Members).Body, local_1, (Microsoft.Cci.ISourceLocationProvider)null, (Microsoft.Cci.ILocalScopeProvider)null, 0);
}
finally
{
    if (local_1 != default(Microsoft.Cci.PeReader.DefaultHost))
    {
        local_1.Dispose();
    }
}
Of note are all those push, pop and dup statements and the lambda caching condition.
As far as I know, it's not possible to build an AST from a binary (without sources), since the AST itself is generated by the parser from the sources as part of the compilation process.
Mono.Cecil won't help here, because with it you can only modify opcodes/metadata, not analyze the assembly at that level.
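(For completeness, a hedged sketch of what Mono.Cecil does give you - a flat instruction list per method body, not a tree; the assembly path is illustrative:)

using System;
using Mono.Cecil;
using Mono.Cecil.Cil;

class DumpIL
{
    static void Main()
    {
        var module = ModuleDefinition.ReadModule("path/to/assembly.dll");
        foreach (TypeDefinition type in module.Types)
            foreach (MethodDefinition method in type.Methods)
                if (method.HasBody)
                    foreach (Instruction ins in method.Body.Instructions)
                        // Offset, opcode and raw operand - no structure beyond that.
                        Console.WriteLine($"{ins.Offset:x4}: {ins.OpCode.Name} {ins.Operand}");
    }
}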
But since it's .NET, you can dump the CIL code from the DLL with the help of ildasm. Then you can pass the generated source to any parser with a CIL grammar hooked up and get an AST from that parser. The problem is that, as far as I know, there is only one publicly available CIL grammar for a parser, so you don't really have a choice. And ECMA-335 is big enough that it's a bad idea to write your own grammar.
So I can suggest only one solution:
Pass the assembly to ildasm.exe to get CIL.
Then pass the CIL to an ANTLR v3 parser with this CIL grammar wired up (note it's a little bit outdated - the grammar was created in 2004 and the latest CIL specification is from 2006, but CIL doesn't really change that much).
After that you can freely access the AST generated by ANTLR.
Note that you will need ANTLR v3, not v4, since the grammar was written for the 3rd version, and it's hardly possible to port it to v4 without good knowledge of ANTLR syntax.
You can also try looking into the sources of Microsoft's new RyuJIT compiler on GitHub (part of CoreCLR). I'm not sure it will help, but in theory it must contain a CIL grammar and parser implementation, since it works with CIL code. But it's written in C++, has an enormous code base and lacks documentation since it's in an active development stage, so it may be easier to stick with ANTLR.
If you treat the .NET binary file as a stream of bytes, you ought to be able to "parse" it just fine.
You simply write a grammar whose tokens are essentially bytes. You can certainly build a classical lexer/parser with almost any set of lexer/parser tools by defining the lexer to read single bytes as tokens.
You can then build the AST using standard AST-building machinery for the parsing engine (on your own with YACC, automatically with ANTLR4); see the sketch below.
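As a toy illustration of byte-level parsing, here is a hand-rolled sketch that treats single bytes as tokens and matches the first rules of the PE container format that .NET assemblies use (offsets follow the published PE layout):

using System;
using System.IO;

static class PeSniffer
{
    static void Main(string[] args)
    {
        byte[] bytes = File.ReadAllBytes(args[0]);

        // "Lexer": each byte is a token. "Parser": match the DOS-header rule.
        if (bytes.Length < 0x40 || bytes[0] != (byte)'M' || bytes[1] != (byte)'Z')
            throw new InvalidDataException("not a PE file");

        // e_lfanew at offset 0x3C points to the PE signature "PE\0\0".
        int peOffset = BitConverter.ToInt32(bytes, 0x3C);
        bool isPe = peOffset + 4 <= bytes.Length
                 && bytes[peOffset] == (byte)'P' && bytes[peOffset + 1] == (byte)'E'
                 && bytes[peOffset + 2] == 0 && bytes[peOffset + 3] == 0;
        Console.WriteLine(isPe ? "PE signature found" : "no PE signature");
    }
}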
What you will discover, of course, is that "parsing" isn't enough; you'll still need to build symbol tables, and carry out control and data flow analyses if you are going to do serious analysis of the corresponding code. See my essay on LifeAfterParsing.
You will also likely have to take into account "distinguished" functions that provide key runtime facilities to the particular programming languages that actually generated the CIL code. And these will make your analyzers language-dependent. Yes, you still get to share the part of the analysis that works on generic CIL.

Extracting Requirements folder Tree structure from QC using API

I am trying to extract requirements from the QC Requirements module. I could extract all requirements of a QC project, but I would like to extract selected requirements only, so I need to give a folder path and extract requirements accordingly.
Currently I use ReqFactory to extract requirements from QC. Could you please help me or give me an idea how to extract requirements from a selected folder path?
I tried the requirement path and father ID, but they still do not fulfil my need, as some requirements may have multiple sub-folders under their parent folders.
I assume you would like to get all the child requirements of a requirement using the OTA API? The only solution I can offer is a bit clumsy. First you have to get the requirement where you want to start, e.g. "Requirements\Projects\ProjectX". How to achieve that is described in the OTA API Reference as an example under the ReqFactory object ("Find a specified requirement in a specified folder"), and it has also been posted in this forum. If you know the ID of the start requirement, you can simply get the requirement with req_factory.Item(id).
When you have the requirement where you want to start, you can use the Find method of the ReqFactory to get all its children, resp. all Requirement objects whose path starts with the path of the start requirement. Here is an example method in Ruby:
def list_all_child_requirements(start_req)
  req_factory = @tdc.ReqFactory
  req_path_strange_format = start_req.Field("RQ_REQ_PATH")
  child_req_list = req_factory.Find(start_req.ID, "RQ_REQ_PATH", req_path_strange_format, 8)
  child_req_list.each do |list_req|
    puts list_req
  end
end
The req_path_strange_format contains a string in the strange Quality Center notation, like "AAAAAB". The Find method starts from the start requirement and finds all requirements whose path starts with the path of the start requirement. The parameter 8 means "starts with pattern" (described in the API Reference, enum tagTDAPI_REQMODE). I just don't know how to access the enum from Ruby; that's why the magic 8 is used... The Find method returns a list in the format "ID,NAME". From there it should be no problem to extract the requirements.
Doing the same directly in QC with a VAPI-XP-TEST and VB looks like that:
TDOutput.Clear
Dim reqPathStrangeFormat
Set reqF = tdConnection.ReqFactory
Set startReq = reqF.Item(14) ' ID of parent requirement
reqPathStrangeFormat = startReq.Field("RQ_REQ_PATH")
TDOutput.Print reqPathStrangeFormat
Set childReqList = reqF.Find(startReq.ID, "RQ_REQ_PATH", reqPathStrangeFormat, TDREQMODE_FIND_START_WITH)
For Each childReq in childReqList
    TDOutput.Print childReq
Next
This code first prints some strange string, "AAAAAB" or something similar, and then a list with "ID,NAME" of the requirements.
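If your tooling is .NET-based, the same Find call can be made through COM interop. A hedged sketch - the server URL, credentials, and start ID are illustrative, and 8 is again TDREQMODE_FIND_START_WITH:

using System;

class ListChildRequirements
{
    static void Main()
    {
        // OTA is a COM API; late binding via dynamic keeps this short.
        var tdType = Type.GetTypeFromProgID("TDApiOle80.TDConnection");
        dynamic td = Activator.CreateInstance(tdType);
        td.InitConnectionEx("http://qcserver/qcbin");
        td.Login("user", "password");
        td.Connect("DOMAIN", "PROJECT");

        dynamic reqFactory = td.ReqFactory;
        dynamic startReq = reqFactory.Item(14);       // ID of parent requirement
        string path = startReq.Field("RQ_REQ_PATH");  // e.g. "AAAAAB"
        // 8 = TDREQMODE_FIND_START_WITH ("path starts with" match)
        dynamic children = reqFactory.Find(startReq.ID, "RQ_REQ_PATH", path, 8);
        foreach (string entry in children)            // entries look like "ID,NAME"
            Console.WriteLine(entry);
    }
}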

Is it possible to intermix Modular templating and legacy VBScript CT?

In particular, the case I have in mind is this:
@@RenderComponentPresentation(Component, "<vbs-legacy-ct-tcm-uri>")@@
The problem I'm having is that in my case the VBScript code breaks when it tries to access component fields, giving "Error 13 Type mismatch ...".
(So, if I were to give the answer, I'd say: "Partially, and of no practical use".)
EDIT
The DWT above is from another CT, so effectively it's a rendering of a component link; that's why the parameterless overload per Nuno's suggestion unfortunately won't work. BTW, the following lines inside the VBScript don't break and give correct values:
WriteOut Component.ID
WriteOut Component.Schema.Title
EDIT 2
Dominic was absolutely right: it's missing dependencies.
A bit more insight, to make this info generally useful:
Suppose the original CT looked like this ("VBScript [Legacy]" type):
[%
Call RenderComponent(Component)
%]
This CT was meant to be called from a PT, also VBScript-based. That PT had a big chunk of #include statements at the beginning.
Now the story changes: the same CT is being called from another, DWT-based, CT. Obviously (thank you all for your invaluable help!), the dependencies are now not being included anywhere.
The solution to make the original CT work again is to explicitly hand-pick and include all the necessary VBScript TBBs, so the original CT becomes:
[%
#include "tcm:<uri-of-vbs-tbb>"
Call RenderComponent(Component)
%]
Yes - it's perfectly possible to mix and match legacy and modular templates. Perhaps obviously, you can't mix and match template building blocks between the two techniques.
In VBScript, "Error 13 Type mismatch" is sometimes used as a secret code that really means "I don't recognise the name of one of your variables (including the names of Functions and Subs)". In the VBScript templating engine, variables from the page template could be in scope in your component template; it was very common, for example, to put the #includes in the PT so they could be used by the CT. My guess is that your component template is trying to use such a Function, and not finding it.
I know that you can render a modular Page Template with VBScript Component Presentations, and also that a VBScript page template can render a modular Component Template.
Your error is possibly due to something else. Have you tried just using the regular @@RenderComponentPresentation()@@ call without specifying which template?
The Page Template can render Compound Templates of different flavors - for example Razor, VBS, or XSLT.
The problem comes from the TBBs included in the Templates. Often the Razor templates will need to call functions that only exist in VBScript. So the starting point when migrating templates is always to start with the helper functions and utility libraries, then migrate the most generic PT/CT you have to the new format (Razor, XSLT, DWT, etc.). This provides a nice basis for migrating the rest of the templates to the new format as you have time.
