MPXJ: Can't read field values when custom field headers are present in file - ms-project

I'm using MPXJ to read Microsoft Project files (.mpp), but the current version of MPXJ (7.2.1) seems to refuse to read Lookup fields once a field has been given a custom name.
For example, create a new project, show Text1, set it to Lookup, add 0 and 1 to the Lookup list, and rename Text1 to anything ('Test'). Now create a task and set its Text1 (Test) to 1.
Now you can't read the 1 in Text1 (task.getText(1) returns null).
We parse many files, some of them quite large, so it's not practical for me to continually modify our customers' headers, read the files in, and then change them back. Furthermore, I don't see any way to modify the headers programmatically.
Has anyone else run into this issue? Does anyone know how to work around or fix this behavior?
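For reference, this is roughly how we read the value (a minimal Java sketch; the file name and wrapper class are just for illustration):

import net.sf.mpxj.ProjectFile;
import net.sf.mpxj.Task;
import net.sf.mpxj.mpp.MPPReader;

public class ReadCustomText
{
    public static void main(String[] args) throws Exception
    {
        // Read the .mpp file and print Text1 for every task. With the
        // renamed Text1 ('Test') this prints null instead of the stored 1.
        ProjectFile project = new MPPReader().read("sample.mpp");
        for (Task task : project.getAllTasks())
        {
            System.out.println(task.getText(1));
        }
    }
}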

This is a bug in MPXJ. I will be committing a fix to the MPXJ repo shortly.

Related

Cannot place custom items on client hotbar/toolbar

Whenever I make a custom item, I am unable to drag it onto a hotbar. When I try to pick it up, the icon turns into a question mark and will not stick to a hotkey.
For example, I made an exact copy of the Murloc Costume (id 33079) at id 50017 (which was a free slot in my DB). The original I can put on a hotbar; the custom one I cannot.
Here's a GIF of the issue.
Answer: if it's not in the client-side DBCs, it will not work properly.
From ReynoldsCahoon on the AC Discord:
I just modified a DBC recently. I had to use a tool (I used Ladik's MPQ Editor) to extract the specific DBC I wanted to modify, and then I used WDBX to open the DBC and manipulate it. In WDBX you can output it in a variety of formats (like CSV or SQL) so you can modify the values any way you like, and then reimport (via CSV or SQL) the values back in.
I loosely followed the guide here: https://model-changing.net/tutorials/article/23-41-creating-your-first-mpq-patch/
I exported a DBC containing all of the Character Titles in the game. Erased all of them, and then imported a bunch of new values of my own choosing. Instead of importing that back into the MPQ I got it from, I created a new MPQ called patch-4.mpq that I placed in my client WotLK/Data/ directory, and then on the server, I placed the DBC file into the worldserver/data/dbc/ directory, replacing the original DBC.
I think these values can optionally be overridden in the database, using the associated _dbc tables to override values from the DBC files (someone correct me if this part is wrong).

Why is an object in the AOT marked with a red cross?

I have to extend a report's query to add a new field.
I've created an extension of the query, joined the required data sources, and can see the new field in the list of fields.
For some reason, the report is displayed in the AOT with a red cross sign.
In properties i can see error in metadata: "There was an error reading metadata. Make sure the metadata xml file(s) are accessible, are well formed and are not corrupted with duplicate or missing xml elements.
The exception message is: Element named: 'Copy1' of type 'ModelElement' already exists among elements: 'Copy1'.
Parameter name: item
Additional information:
AOT/Reports/Reports/WHSInvent"
There is an .xml file for that object in the packages local directory, and there are no duplicate names in any node of that report.
Any ideas how this can be fixed?
I've run into this before, and there are two things that come to mind.
Often it's due to an incorrect merge, where changes are merged and metadata is accidentally duplicated (in your case it's possible there are two xml nodes with the same name/id in the .rdl file).
If this report is checked in with corrupt metadata, you need to manually modify the RDL file, which is not great, but hopefully the error contains enough hints. Open the report's rdl file in your favourite editor (the report is likely located in a path similar to this: K:\AosService\PackagesLocalDirectory\YOURMODEL\Reports) and look for an xml node with an attribute Name="Copy1". With luck, you have two duplicate nodes next to each other due to the merge. Remove the offending duplicate node, save, and refresh the AOT in Visual Studio.
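Purely as an illustration (the element name here is invented; the real metadata schema differs), the duplicated span might look something like this, and deleting one of the two identically named siblings resolves the load error:

<Controls>
  <AxReportControl Name="Copy1"> ... </AxReportControl>
  <AxReportControl Name="Copy1"> ... </AxReportControl>
</Controls>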
If the error is in your local changes only (the xml file is corrupted for whatever reason), you are sure that your source control contains the correct version, and you simply wish to overwrite the local contents with the source-controlled version, follow these steps. Note: this will overwrite local changes.
First, undo your pending changes.
Then force a get latest (in Visual Studio's Source Control Explorer, do a Get Specific Version with the option to overwrite files even if the local version matches).

How do I fetch text blurbs using OpenRefine 2.6?

I am running a very simple exercise: I have a list of people's names that have already been reconciled against Freebase from within OpenRefine.
The GitHub repository for OpenRefine clearly indicates that fetching properties against a reconciled Freebase type is still a "to do" item, but apparently fetching text blurbs is possible:
Starting from your reconciled column, use "Add column from Freebase" and use the "Add a property" field at the top to add /common/topic/article.
Using the new column, select "Add a column by fetching a URL" and construct the URL as follows: "http://api.freebase.com/api/trans/raw"+value
You'll end up with another new column containing the text of all the blurbs.
I don't have an option "Add column from Freebase".
Am I missing something? Thank you very much for your help.
That's presumably the OpenRefine 2.6 beta, since the production release isn't out yet.
If "Add column from Freebase" is missing, something is wrong (i.e. it's a bug). Please file a bug report on GitHub with information on which operating system you're using. The documentation that you quoted is also out of date; please file a bug report for it as well, noting where you found it.
Sorry about the problem! We'll have a look as soon as we've got the information from the bug report(s).

Attempting to deploy a binary to a location where a different binary is already stored

When I am publishing my page from Tridion 2009, I get the error below:
Destination with name 'FTP=[Host=servername, Location=\RET, Password=******, Port=21, UserName=retftp]' reported the following failure:
A processing error occurred processing a transport package Attempting to deploy a binary [Binary id=tcm:553-974947-16 variantId= sg= path=/Images/image_thumbnail01.jpg] to a location where a different binary is already stored Existing binary: tcd:pub[553]/binarymeta[974950]
Below is my code snippet:
// Resolve the image component from the metadata fields.
Component bigImageComp = th.GetComponentValue("bigimage", imageMetaFields);
// Publish the component's binary content and capture the published path.
string bigImagefileName = bigImageComp.BinaryContent.Filename;
string bigImagePath = m_Engine.AddBinary(bigImageComp.Id, TcmUri.UriNull, null,
    bigImageComp.BinaryContent.GetByteArray(), Path.GetFileName(bigImagefileName));
imageBigNode.InnerText = bigImagePath;
Please suggest a solution.
Chris Summers addressed this on his blog. Have a read of the article - http://www.urbancherry.net/blogengine/post/2010/02/09/Unique-binary-filenames-for-SDL-Tridion-Multimedia-Components.aspx
Generally, Tridion Content Delivery can only keep one version of a Component. To get multiple "versions" of a Multimedia Component, we have to publish the MMC as variants. This way we can produce as many variants as we need via templating.
You can refer to the article below for more detail:
http://yatb.mitza.net/2012/03/publishing-images-as-variants.html#!/2012/03/publishing-images-as-variants.html
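As a rough sketch of the variant approach, assuming the same AddBinary overload used in the question's snippet (with the third parameter as the variant id); the "thumbnail" label and the filename prefix are purely illustrative, the point being that each variant gets both its own variant id and its own unique path:

// Hypothetical: publish this rendering of the MMC as a named variant with a
// unique filename, so different renderings no longer collide on one path.
string variantId = "thumbnail";
string uniqueName = variantId + "_" + Path.GetFileName(bigImageComp.BinaryContent.Filename);
string variantPath = m_Engine.AddBinary(bigImageComp.Id, TcmUri.UriNull, variantId,
    bigImageComp.BinaryContent.GetByteArray(), uniqueName);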
When adding binaries, you must ensure that the file and its metadata are unique. If one of the values (e.g. the filename) appears to be the same but the rest of the metadata does not match, deployment will fail.
In the given example (as Nuno points out), binary 910 is trying to deploy over binary 703. The filename is the same, but the binaries are identified as not the same (in this case, a different ID from the same publication). For this example you will need to rename one of the binaries (either rename the file itself or change the path) and everything will be fine.
Another scenario is that the same image is used from two different templates and the template id is used as the variant ID. In that case it is the same image, BUT the variant ID check fails, so to avoid overwriting the image the deployer rejects it.
Often unpublishing can help; however, the image is only removed when ALL references to it are removed, so if it is used from more than one place there are still open references.
This is logical protection by the deployer. You would not want the wrong image replacing another and either upsetting the layout or potentially changing the content to another meaning (think advertising banner).
This is the actual cause of and reason for the problem above (taken from a forum).

I'm having problems configuring a filter that replicates only specific tables

I am trying to use filters to select specific tables to replicate.
I tried running this with the installer:
./tools/tungsten-installer --master-slave -a \
...
--svc-extractor-filters=replicate \
--property="replicator.filter.replicate.do=test,*.foo"
and got this exception from trepctl status after the master failed to install properly:
Plugin class name property is missing or null: key=replicator.filter.replicate
Which file is this properties file, and how do I find it? Moreover, when specifying the settings for the filter, how do I know exactly what to put?
I discovered that I am supposed to modify the configuration template file prior to configuration, according to Issue 219, but what changes am I supposed to make in tungsten-replicator-2.0.5-diff that will later be patched into the extraction?
Issue 254 suggests that if you want to apply a filter out of the box, you can use these options with tungsten-installer:
-a --property=replicator.filter.Replicate.ignoreFilter=schema_x.tablex,schema_x,tabley,schema_y,tablez
--svc-thl-filter=Replicate
However, when I try using this for --property=replicator.filter.replicate.do, the problem is still the same:
pendingExceptionMessage: Plugin class name property is missing or null: key=replicator.filter.replicate
Your assistance will be greatly appreciated.
Rumbi
Update:
Hi
I had a look at this file: /root/tungsten/tungsten-replicator/samples/conf/filters/default/tableignore.tpl. According to this sample, a static-SERVICE_NAME.properties file is supposed to have something like this configured; please confirm whether this is the correct syntax:
replicator.filter.tabledo=com.continuent.tungsten.replicator.filter.JavaScriptFilter
replicator.filter.tabledo.script=${replicator.home.dir}/samples/scripts/javascript-advanced/tabledo.js
replicator.filter.tabledo.tables=foo(database).bar(table)
replicator.stage.thl-to-dbms.filters=tabledo
However, I did not find tabledo.js (or anything similar) in the directory where tableignore.js exists. Could I please have the location of this file? If there is an alternative way of specifying --property=replicator.filter.replicate.do=test without the use of this .js file, your suggestions are most welcome.
Download the latest version of Tungsten Replicator; the missing .tpl file was added about a month ago. After installation, the filtered tables should be added to static-SERVICE_NAME.properties under the FILTERS section.
Locate your replicator configuration file in static-YOUR_SERVICE_NAME.properties, e.g.
/opt/continuent/tungsten/tungsten-replicator/conf/static-mysql2vertica.properties
Make sure the individual dbms properties are set, in particular the setting replicator.applier.dbms:
# Batch applier basic configuration information.
replicator.applier.dbms=com.continuent.tungsten.replicator.applier.batch.SimpleBatchApplier
replicator.applier.dbms.url=jdbc:mysql:thin://${replicator.global.db.host}:${replicator.global.db.port}/tungsten_${service.name}?createDB=true
replicator.applier.dbms.driver=org.drizzle.jdbc.DrizzleDriver
replicator.applier.dbms.user=${replicator.global.db.user}
replicator.applier.dbms.password=${replicator.global.db.password}
replicator.applier.dbms.startupScript=${replicator.home.dir}/samples/scripts/batch/mysql-connect.sql
# Timezone and character set.
replicator.applier.dbms.timezone=GMT+0:00
replicator.applier.dbms.charset=UTF-8
# Parameters for loading and merging via stage tables.
replicator.applier.dbms.stageTablePrefix=stage_xxx_
replicator.applier.dbms.stageDirectory=/tmp/staging
replicator.applier.dbms.stageLoadScript=${replicator.home.dir}/samples/scripts/batch/mysql-load.sql
replicator.applier.dbms.stageMergeScript=${replicator.home.dir}/samples/scripts/batch/mysql-merge.sql
replicator.applier.dbms.cleanUpFiles=false
Depending on the database you are replicating to, you may have to omit or modify some of these lines.
For more information see:
https://code.google.com/p/tungsten-replicator/wiki/Replicator_Batch_Loading
I don't know whether this problem is still open or not.
I am using version 2.0.6-xxx, and installing the service with these parameters works for me.
I would like to point out that, as the name says, "--svc-extractor-filters" defines an extractor filter, meaning that the parameters will guide the extraction of data on the master server.
If you intend to use the filter on the slave service, you should use "--svc-applier-filters" instead.
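For example, a slave-side install would pass the following (a sketch adapted from the installer call in the question; the elided options are left as-is):

./tools/tungsten-installer --master-slave -a \
...
--svc-applier-filters=replicate \
--property="replicator.filter.replicate.do=test,*.foo"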
The parameters
--svc-extractor-filters=replicate \
--property="replicator.filter.replicate.do=test,*.foo"
are supposed to create the following in the properties file.
This is the filter setup:
replicator.filter.replicate=com.continuent.tungsten.replicator.filter.ReplicateFilter
replicator.filter.replicate.ignore=
replicator.filter.replicate.do=test,*.foo
And you should also be able to find the
replicator.stage.binlog-to-q.filters=replicate
parameter set.
If you intend to use this filter in the slave, please find the line with:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave
and change it to:
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey,bidiSlave,replicate
Hope this brief description helps!
