How to add soft attributes (by loading) to WTPart? - ptc-windchill

In Windchill 10.2, how can I add global soft attributes to a WTPart? I mean via a load file:
windchill wt.load.LoadFromFile -d update-wtpart.xml
I want to know how to write that update-wtpart.xml file.

I assume you have found a solution to your problem by now, but in case not: I am assuming you already have a WTPart in Windchill and the IBA defined, and you just need to add that IBA to the WTPart.
You must use
wt.part.LoadPart.cachePart loadfile
You will find an example provided by PTC in the loadFiles folder.

Related

tFlowToIterate with BigDataBatch job

I currently have a simple Standard job in Talend doing this:
It simply reads a file of several lines (tHDFSInput), and for each line of this file (tFlowToIterate), I create an INSERT query, "INSERT ... SELECT ... FROM", based on what I read in my file (tHiveRow). It works well, it's just a bit slow.
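Something of this shape, where the table and column names are purely illustrative placeholders:
INSERT INTO TABLE target_table
SELECT col_a, col_b FROM source_table WHERE key_col = 'value read from the current line';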
I now need to turn my Standard job into a Big Data Batch job in order to make it faster, and also because I was asked to only make Big Data Batch jobs from now on.
The thing is that there is no tFlowToIterate and no tHiveRow component in Big Data Batch...
How can I do this?
Thanks a lot.
Though I haven't tried this solution, I think this can help you:
1. Create the Hive table upfront.
2. Place a tHDFSConfiguration component in the job and provide the cluster details.
3. Use a tFileInputDelimited component. With its storage configuration set to the tHDFSConfiguration defined in step 2, it will read from HDFS.
4. Use a tHiveOutput component and connect tFileInputDelimited to it. In tHiveOutput, you can provide the table, format, and save mode.
In order to load HDFS data into Hive without modifying the data, maybe you can use only one component: tHiveLoad.
Insert your HDFS path inside the component.
tHiveLoad documentation : https://help.talend.com/reader/hCrOzogIwKfuR3mPf~LydA/ILvaWaTQF60ovIN6jpZpzg

Pyvmomi - Assign VM to specific folder with non-unique name

I'm trying to figure out how to assign a VM to a folder that does not have a unique name. I'm currently testing with the clone_vm.py template. With the sample, I have the ability to set the folder, but it does not work correctly if there are nested folders with the same name (example below). I would like to make sure the folder assigned is the "Linux/Dev" folder, but I can only pass "Dev" and hope that it picks the right one. The line of code below is how the folder is being set.
destfolder = get_obj(content, [vim.Folder], vm_folder)
Linux
|------Dev
|------Prod
Windows
|------Dev
|------Prod
Thanks!
The best way to do that is to use searchIndex.FindByInventoryPath and get the folder by its path. It can be a little confusing because of hidden folders, but the MOB can help you. I answered a question where I covered how to use that search method; see this answer.
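A minimal sketch of that lookup (the vCenter host, credentials, and the "DC1" datacenter name are hypothetical placeholders; VM folders live under "<datacenter>/vm/", so the full inventory path disambiguates the two "Dev" folders):
import ssl
from pyVim.connect import SmartConnect

# lab-only: skips certificate verification
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass", sslContext=ctx)
content = si.RetrieveContent()

# The inventory path picks the Dev folder under Linux, not the one under Windows.
destfolder = content.searchIndex.FindByInventoryPath("DC1/vm/Linux/Dev")
if destfolder is None:
    raise RuntimeError("No folder found at that inventory path")
With the folder resolved this way, it can be passed to the clone spec in place of the get_obj() lookup.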

How to change the report or output save location in Robot Framework from RIDE

When I run test cases from RIDE, the reports are saved to the path below.
C:\Windows\Temp\RIDExf4xla.d
I want to save the reports to a specific path. Can I do this from RIDE? Is there a setting to change the report location?
Can anyone please suggest a way to do this.
Thanks
Look at the --outputdir option in the Robot Framework documentation.
Here is what I use:
--outputdir C:/Robot/AutomationLogs/etc/etc --timestampoutputs
You use this one-liner in the "Arguments" field, right at the top of RIDE within the Run tab.
Per Waman's comment, you can add format specifiers to the end of the argument to also change the directory name dynamically; see the second answer in that SO question. This should be enough to get what you're asking for.
There is no way to set this within the UI; just paste that argument option into the "Arguments" field at the top.
Use the command below from the command line:
C:\Tests\> robot -d C:\Test_results Test.robot
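For example, combining the -d/--outputdir and --timestampoutputs suggestions above (the paths are placeholders):
C:\Tests\> robot --outputdir C:\Test_results --timestampoutputs Test.robot
Pasting the same options into RIDE's "Arguments" field has the same effect.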

Python-Firebase: How to Access the Main Directory Using Put and Post

I'm working with python-firebase. I've been trying to access the root folder, but I've been unable to do so with the post and put commands. For example, I need to do:
database.post (root, {"HELLO" : "HELLO"})
but the best I can do is two directories down. Can anyone help?
Check out this post on getting started with Firebase. The example uses a put.
In short, you can simply do:
firebase.put('/test', 'testing/tester/test', putData)
Each / defines another route down; in the above example, that is three levels down from the root. Just keep adding / segments the further you want to go.
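A minimal sketch of writing at the root with the python-firebase package (the database URL below is a hypothetical placeholder):
from firebase import firebase

db = firebase.FirebaseApplication('https://your-project.firebaseio.com', None)

# put() writes the value under <url>/<name>; using '/' as the url targets the root,
# so this creates /HELLO = "HELLO".
db.put('/', 'HELLO', 'HELLO')

# post() appends the data under an auto-generated key at the given path,
# so this creates /<generated-key> = {"HELLO": "HELLO"}.
db.post('/', {'HELLO': 'HELLO'})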

I'm having problems with configuring a filter that replicates specific tables only

I am trying to use filters to select specific tables to replicate.
I tried running this with the installer:
./tools/tungsten-installer --master-slave -a \
...
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
and got this exception in trepctl status after the master failed to install properly:
Plugin class name property is missing or null: key=replicator.filter.replicate
Which file is this properties file? How do I find it? Moreover, when specifying the settings for the filter, how do I know exactly what to put?
I discovered that I am supposed to modify the configuration template file prior to configuration, according to Issue 219, but what changes am I supposed to make in tungsten-replicator-2.0.5-diff that will later be patched into the extraction?
Issue 254 suggests that if you want to apply a filter out of the box, you can use these options with tungsten-installer:
-a --property=replicator.filter.Replicate.ignoreFilter=schema_x.tablex,schema_x,tabley,schema_y,tablez
--svc-thl-filter=Replicate
However, when I try using this with --property=replicator.filter.replicate.do, the problem is still the same:
pendingExceptionMessage: Plugin class name property is missing or null: key=replicator.filter.replicate
Your assistance will be greatly appreciated.
Rumbi
Update:
Hi
I had a look at this file: /root/tungsten/tungsten-replicator/samples/conf/filters/default/tableignore.tpl. According to this sample, a static-SERVICE_NAME.properties file is supposed to have something like this configured; please confirm whether this is the correct syntax:
replicator.filter.tabledo=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.tabledo.script=${replicator.home.dir}/samples/scripts/javascript-advanced/tabledo.js
replicator.filter.tabledo.tables=foo(database).bar(table)
replicator.stage.thl-to-dbms.filters=tabledo
However, I did not find tabledo.js (or something similar) in the directory where tableignore.js exists. Could I please have the location of this file? If there is an alternative way of specifying --property=replicator.filter.replicate.do=test without the use of this .js file, your suggestions are most welcome.
Download the latest version of Tungsten Replicator; the missing .tpl file was added about a month ago. After installation, the filtered tables should be added to the static-SERVICE_NAME.properties file under the FILTERS section.
Locate your replicator configuration file in static-YOUR_SERVICE_NAME.properties, e.g.
/opt/continuent/tungsten/tungsten-replicator/conf/static-mysql2vertica.properties
Make sure the individual dbms properties are set, in particular the setting replicator.applier.dbms:
# Batch applier basic configuration information.
replicator.applier.dbms=com.continuent.tungsten.replicator.applier.batch.SimpleBatchApplier
replicator.applier.dbms.url=jdbc:mysql:thin://${replicator.global.db.host}:${replicator.global.db.port}/tungsten_${service.name}?createDB=true
replicator.applier.dbms.driver=org.drizzle.jdbc.DrizzleDriver
replicator.applier.dbms.user=${replicator.global.db.user}
replicator.applier.dbms.password=${replicator.global.db.password}
replicator.applier.dbms.startupScript=${replicator.home.dir}/samples/scripts/batch/mysql-connect.sql
# Timezone and character set.
replicator.applier.dbms.timezone=GMT+0:00
replicator.applier.dbms.charset=UTF-8
# Parameters for loading and merging via stage tables.
replicator.applier.dbms.stageTablePrefix=stage_xxx_
replicator.applier.dbms.stageDirectory=/tmp/staging
replicator.applier.dbms.stageLoadScript=${replicator.home.dir}/samples/scripts/batch/mysql-load.sql
replicator.applier.dbms.stageMergeScript=${replicator.home.dir}/samples/scripts/batch/mysql-merge.sql
replicator.applier.dbms.cleanUpFiles=false
Depending on the database you are replicating to, you may have to omit or modify some of these lines.
For more information see:
https://code.google.com/p/tungsten-replicator/wiki/Replicator_Batch_Loading
I don't know whether this problem is still open or not.
I am using version 2.0.6-xxx, and installing the service with these parameters works for me.
I would like to point out that, as the name says, "--svc-extractor-filters" defines an extractor filter, meaning the parameters will guide the extraction of data on the master server.
If you intend to use the filter on the slave service, you should use "--svc-applier-filters" instead.
The parameters
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo
are supposed to create the following in the properties file.
This is the filter setup:
replicator.filter.replicate=com.continuent.tungsten.replicator.filter.ReplicateFilter
replicator.filter.replicate.ignore=
replicator.filter.replicate.do=test,*.foo
And you should also be able to find the
replicator.stage.binlog-to-q.filters=replicate
parameter set.
If you intend to use this filter in the slave, please find the line with:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave
and change it to:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave,replicate
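Putting that together, a slave-side installation could be invoked roughly like the command from the question, just with the applier flag instead of the extractor flag (the elided options stay as in your setup):
./tools/tungsten-installer --master-slave -a \
...
--svc-applier-filters=replicate \
--property=replicator.filter.replicate.do=test,*.foo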
Hope this brief description helps you!

Resources