JavaParser SymbolSolver and JGit to find method qualified name for a given commit

Consider the following code:
mt.resolve().getQualifiedSignature();
Here mt is of type MethodDeclaration, and it might come from a MethodCallExpr.
Now, in order for this to work accurately, I need to set up the following:
CombinedTypeSolver combinedTypeSolver = new CombinedTypeSolver();
combinedTypeSolver.add(new ReflectionTypeSolver());
combinedTypeSolver.add(new JavaParserTypeSolver(new File("src/java1/")));
combinedTypeSolver.add(new JavaParserTypeSolver(new File("src/java2/")));
This is easy, except that I have two issues in my scenario.
1) I cannot set the root source directories manually; I need to find them automatically.
2) I cannot give paths like the above, because I am using JGit to check out different commits, so the paths are not fixed and vary with the checked-out commit. The paths should instead be derived from the JGit tree.
Any help would be highly appreciated.
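For what it's worth, here is a minimal sketch of one possible approach (untested, and PerCommitSolverFactory/findSourceRoots are made-up names for illustration): since JavaParserTypeSolver reads sources from disk, check the commit out with JGit first, then discover the source roots by peeling each file's package declaration off its directory path.

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.symbolsolver.resolution.typesolvers.CombinedTypeSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.JavaParserTypeSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.ReflectionTypeSolver;
import org.eclipse.jgit.api.Git;

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Stream;

public class PerCommitSolverFactory {

    // Check out the given commit (assumes a clean working tree), then build
    // a type solver from the source roots found in the resulting checkout.
    public static CombinedTypeSolver solverForCommit(File repoDir, String commitId) throws Exception {
        CombinedTypeSolver combined = new CombinedTypeSolver(new ReflectionTypeSolver());
        try (Git git = Git.open(repoDir)) {
            git.checkout().setName(commitId).call(); // detached HEAD at that commit
            Path workTree = git.getRepository().getWorkTree().toPath();
            for (Path root : findSourceRoots(workTree)) {
                combined.add(new JavaParserTypeSolver(root.toFile()));
            }
        }
        return combined;
    }

    // Heuristic: a file's source root is its directory minus its package path,
    // e.g. /repo/src/java1/com/foo/A.java with "package com.foo;" -> /repo/src/java1.
    private static Set<Path> findSourceRoots(Path dir) throws Exception {
        Set<Path> roots = new HashSet<>();
        try (Stream<Path> files = Files.walk(dir)) {
            files.filter(f -> f.toString().endsWith(".java")).forEach(f -> {
                try {
                    String pkg = StaticJavaParser.parse(f).getPackageDeclaration()
                            .map(pd -> pd.getNameAsString()).orElse("");
                    Path parent = f.getParent();
                    if (pkg.isEmpty()) {
                        roots.add(parent); // default package: the directory itself
                        return;
                    }
                    Path pkgPath = Paths.get(pkg.replace('.', '/'));
                    if (parent.endsWith(pkgPath)) {
                        Path root = parent;
                        for (int i = 0; i < pkgPath.getNameCount(); i++) {
                            root = root.getParent(); // peel one package segment
                        }
                        roots.add(root);
                    }
                } catch (Exception ignored) {
                    // skip files that cannot be parsed
                }
            });
        }
        return roots;
    }
}

The solver then has to be installed before parsing so that mt.resolve() can see it, e.g. via StaticJavaParser.getConfiguration().setSymbolResolver(new JavaSymbolSolver(combined)), and findSourceRoots has to be re-run after every checkout, since the set of roots may differ between commits.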

Related

Tosca: searching / bulk renaming Test Configuration Parameters

I can't find a way to search for TCPs, search for TCP usages, or rename all TCPs.
Let's assume I have a 'licensePlate' TCP set up on the highest level of the hierarchy, and that I have 2 subfolders. In one of them I use the value as it is, in the other folder I change the value. I have some libraries using 'licensePlate'.
I then proceed to rename the TCP to 'carId' on the highest level (and in the libraries). The folder which inherited it will be updated, but the other one will now have two TCPs.
So at the moment I need to manually go into all my subfolders/testcases, find all of them where 'licensePlate' was re-configured, and: (1) set the value to the new param ('carId'); (2) delete the old param ('licensePlate').
The logic behind this imho is that I may still be using that param name (e.g. if I resolved my libraries). Still, I'm guessing that there must be a way to bulk-rename or at least to search for TCP usages (?)
That is really tricky and abstruse. You can find TCP usages with the following TQL (Home - Search - TQL Search tab):
=>SUBPARTS[(param_name!="")]
where "param_name" is the name of your parameter.
It also seems that only those usages are found where the value has been changed from the default.

Pyvmomi - Assign VM to specific folder with non-unique name

I'm trying to figure out how to assign a VM to a folder that does not have a unique name. I'm currently testing with the clone_vm.py sample. With the sample I have the ability to set the folder, but it does not work correctly if there are nested folders with the same name (example below). I would like to make sure the folder assigned is the "Linux/Dev" folder, but I can only pass "Dev" and hope that it picks the right one. The line of code below is how the folder is being set.
destfolder = get_obj(content, [vim.Folder], vm_folder)
Linux
|------Dev
|------Prod
Windows
|------Dev
|------Prod
Thanks!
The best way to do that is to use search_index.FindByInventoryPath and get the folder by its full inventory path. It can be a little confusing because of the hidden folders (the path runs through the datacenter's hidden "vm" folder, e.g. something like "MyDatacenter/vm/Linux/Dev"), but the MOB can help you. I answered another question where I covered how to use that search method; see that answer.

Extracting Requirements folder Tree structure from QC using API

I am trying to extract requirements from the QC Requirement module. I can extract all requirements of a QC project, but I would like to extract selected requirements only, i.e. give a folder path and extract the requirements under it accordingly.
Currently I use ReqFactory to extract requirements from QC. Could you please help me, or give me an idea of how to extract requirements from a selected folder path?
I tried Req Path and father id, but they still do not fulfill my need, as some requirements have multiple subfolders under their parent folders.
I assume you would like to get all the child requirements of a requirement using the OTA API? The only solution I can offer is a bit clumsy. First you have to get the requirement where you want to start, e.g. "Requirements\Projects\ProjectX". How to achieve that is described in the OTA API Reference as an example for the ReqFactory object ("Find a specified requirement in a specified folder"), or it is posted in this forum. If you know the ID of the start requirement, you can simply get it with req_factory.Item(id).
When you have the requirement where you want to start, you can use the Find method of the ReqFactory to get all its children, i.e. all Requirement objects whose path starts with the path of the start requirement. Here is an example method in Ruby:
def list_all_child_requirements(start_req)
  req_factory = @tdc.ReqFactory  # @tdc holds the connected TDConnection
  # RQ_REQ_PATH is QC's internal path encoding, e.g. "AAAAAB"
  req_path_strange_format = start_req.Field("RQ_REQ_PATH")
  # 8 = TDREQMODE_FIND_START_WITH ("starts with pattern")
  child_req_list = req_factory.Find(start_req.ID, "RQ_REQ_PATH", req_path_strange_format, 8)
  child_req_list.each do |list_req|
    puts list_req  # each entry is a string in the format "ID,NAME"
  end
end
req_path_strange_format contains a string in Quality Center's strange internal notation, like "AAAAAB". The Find method starts from the start requirement and finds all requirements whose path starts with the start requirement's path. The parameter 8 means "starts with pattern" (described in the API Reference, enum tagTDAPI_REQMODE); I just don't know how to access the enum from Ruby, which is why the magic 8 is used. The Find method returns a list of entries in the format "ID,NAME". From there it should be no problem to extract the requirements.
Doing the same directly in QC with a VAPI-XP test and VB looks like this:
TDOutput.Clear
Dim reqPathStrangeFormat
Set reqF = tdConnection.ReqFactory
Set startReq = reqF.Item(14) ' ID of parent requirement
reqPathStrangeFormat = startReq.Field("RQ_REQ_PATH")
TDOutput.Print reqPathStrangeFormat
Set childReqList = reqF.Find(startReq.ID, "RQ_REQ_PATH", reqPathStrangeFormat, TDREQMODE_FIND_START_WITH)
For Each childReq in childReqList
TDOutput.Print childReq
Next
This code first prints the strange string ("AAAAAB" or something similar), then a list of "ID,NAME" entries for the requirements.

Attempting to deploy a binary to a location where a different binary is already stored

When I am publishing my page from Tridion 2009, I am getting the error below:
Destination with name 'FTP=[Host=servername, Location=\RET, Password=******, Port=21, UserName=retftp]' reported the following failure:
A processing error occurred processing a transport package Attempting to deploy a binary [Binary id=tcm:553-974947-16 variantId= sg= path=/Images/image_thumbnail01.jpg] to a location where a different binary is already stored Existing binary: tcd:pub[553]/binarymeta[974950]
Below is my code snippet:
Component bigImageComp = th.GetComponentValue("bigimage", imageMetaFields);
string bigImagefileName = string.Empty;
string bigImagePath = string.Empty;
bigImagefileName = bigImageComp.BinaryContent.Filename;
bigImagePath = m_Engine.AddBinary(bigImageComp.Id, TcmUri.UriNull, null, bigImageComp.BinaryContent.GetByteArray(), Path.GetFileName(bigImagefileName));
imageBigNode.InnerText = bigImagePath;
Please suggest.
Chris Summers addressed this on his blog. Have a read of the article - http://www.urbancherry.net/blogengine/post/2010/02/09/Unique-binary-filenames-for-SDL-Tridion-Multimedia-Components.aspx
Generally, Tridion Content Delivery can only keep one version of a Component. To get multiple "versions" of a Multimedia Component, we have to publish it as variants. This way we can produce as many variants as we need via templating.
You can refer to the article below for more detail:
http://yatb.mitza.net/2012/03/publishing-images-as-variants.html#!/2012/03/publishing-images-as-variants.html
When adding binaries you must ensure that the file and its metadata are unique. If one of the values, e.g. the filename, appears to be the same but the rest of the metadata does not match, then deployment will fail.
In the given example (as Nuno points out), binary 910 is trying to deploy over binary 703. The filename is the same, but the binary is identified as not the same (in this case, a different ID from the same publication). For this example you will need to rename one of the binaries (either rename the file itself or change the path) and everything will be fine.
Another scenario can be that the same image is used from two different templates, and the template ID is used as the variant ID. In that case it is the same image, BUT the variant ID check fails, so to avoid overwriting the image the deployer fails the deployment.
Often unpublishing can help; however, the image is only removed when ALL references to it are removed. So if it is used from more than one place, there are still open references.
This is logical protection by the deployer. You would not want the wrong image replacing another, either upsetting the layout or potentially changing the meaning of the content (think of an advertising banner).
This is the actual cause of and reason for the above problem (taken from a forum).

I'm having problems with configuring a filter that replicates specific tables only

I am trying to use filters to select specific tables to replicate.
I tried running this with the installer:
./tools/tungsten-installer --master-slave -a \
...
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
and got this exception in trepctl status after the master had failed to install properly:
Plugin class name property is missing or null: key=replicator.filter.replicate
Which file is this properties file? How do I find it? Moreover, when specifying the settings for the filter, how do I know exactly what to put?
I discovered that I am supposed to modify the configuration template file prior to configuration, according to Issue 219, but what changes am I supposed to make in tungsten-replicator-2.0.5-diff that will later be patched into the extraction?
Issue 254 suggests that if you want to apply a filter out of the box, you can use these options with tungsten-installer:
-a --property=replicator.filter.Replicate.ignoreFilter=schema_x.tablex,schema_x,tabley,schema_y,tablez
--svc-thl-filter=Replicate
However, when I try using this for --property=replicator.filter.replicate.do, the problem is still the same:
pendingExceptionMessage: Plugin class name property is missing or null: key=replicator.filter.replicate
Your assistance will be greatly appreciated.
Rumbi
Update:
Hi,
I had a look at this file: /root/tungsten/tungsten-replicator/samples/conf/filters/default/tableignore.tpl. According to this sample, a static-SERVICE_NAME.properties file is supposed to have something like the following configured; please confirm whether this is the correct syntax:
replicator.filter.tabledo=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.tabledo.script=${replicator.home.dir}/samples/scripts/javascript-advanced/tabledo.js
replicator.filter.tabledo.tables=foo(database).bar(table)
replicator.stage.thl-to-dbms.filters=tabledo
However, I did not find tabledo.js (or anything similar) in the directory where tableignore.js exists. Could I please have the location of this file? If there is an alternative way of specifying --property=replicator.filter.replicate.do=test without the use of this .js file, your suggestions are most welcome.
Download the latest version of Tungsten Replicator; the missing .tpl file was added about a month ago. After installation, the filtered tables should be added to static-SERVICE_NAME.properties under the FILTERS section.
Locate your replicator configuration file, static-YOUR_SERVICE_NAME.properties, e.g.:
/opt/continuent/tungsten/tungsten-replicator/conf/static-mysql2vertica.properties
Make sure the individual dbms properties are set, in particular the setting replicator.applier.dbms:
# Batch applier basic configuration information.
replicator.applier.dbms=com.continuent.tungsten.replicator.applier.batch.SimpleBatchApplier
replicator.applier.dbms.url=jdbc:mysql:thin://${replicator.global.db.host}:${replicator.global.db.port}/tungsten_${service.name}?createDB=true
replicator.applier.dbms.driver=org.drizzle.jdbc.DrizzleDriver
replicator.applier.dbms.user=${replicator.global.db.user}
replicator.applier.dbms.password=${replicator.global.db.password}
replicator.applier.dbms.startupScript=${replicator.home.dir}/samples/scripts/batch/mysql-connect.sql
# Timezone and character set.
replicator.applier.dbms.timezone=GMT+0:00
replicator.applier.dbms.charset=UTF-8
# Parameters for loading and merging via stage tables.
replicator.applier.dbms.stageTablePrefix=stage_xxx_
replicator.applier.dbms.stageDirectory=/tmp/staging
replicator.applier.dbms.stageLoadScript=${replicator.home.dir}/samples/scripts/batch/mysql-load.sql
replicator.applier.dbms.stageMergeScript=${replicator.home.dir}/samples/scripts/batch/mysql-merge.sql
replicator.applier.dbms.cleanUpFiles=false
Depending on the database you are replicating to you may have to omit/modify some of the lines.
For more information see:
https://code.google.com/p/tungsten-replicator/wiki/Replicator_Batch_Loading
I don't know if this problem is still open or not.
I am using version 2.0.6-xxx, and installing the service using the parameters works for me.
I would like to point out that, as its name says, "--svc-extractor-filters" defines an extractor filter, meaning that the parameters will guide the extraction of data on the master server.
If you intend to use it on the slave service, you should use "--svc-applier-filters" instead.
The parameters
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
are supposed to create the following in the properties file. This is the filter setup:
replicator.filter.replicate=com.continuent.tungsten.replicator.filter.ReplicateFilter
replicator.filter.replicate.ignore=
replicator.filter.replicate.do=test,*.foo
And you should also be able to find the
replicator.stage.binlog-to-q.filters=replicate
parameter set.
If you intend to use this filter on the slave, please find the line:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave
and change it to:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave,replicate
Hope this brief description helps!
