HFM Extract data to file is resulting in Parent Child error - flat-file

I am trying to do a simple extract in HFM and I keep getting this error.
"Parent-Child name must be specified for Entity dimension due to value: [Parent Total]
Member found that requires parent name:"
All the members in the extract are valid members; this was verified using a Smart View extract.
The Entity dimension does contain alternate hierarchies, i.e. the member in my extract has two parents.
This extract runs successfully when the "expand only" selection is selected.
This is being done via Workiva-WData.
The integration user has the required level of access.
Any help would be much appreciated.
I have tried running almost every combination of member and expand function available (BASE, ALLMEMBERS, IPARENTS); all result in that error.

Related

XML for Analysis (XML/A) format of member names?

I have two different XML/A providers, Mondrian and icCube. The tuples for a time dimension contain the unique name for the member, but the format of the member name is different:
Mondrian:
<UName>[Time].[2004].[QTR2].[Apr]</UName>
<Caption>Apr</Caption>
[Time] is the name of the hierarchy
[2004] is the name of the ancestor at the Year level
[QTR2] is the name of the ancestor at the Quarter level
[Apr] is the local name of the member at the Month level
icCube:
<UName>[Time].[Calendar].[Month].&[Jun 2010]</UName>
<Caption>Jun 2010</Caption>
[Time] is the name of the dimension
[Calendar] is the name of the hierarchy
[Month] is the name of the level
[Jun 2010] is the name of the month member.
(I don't know why the ampersands are there)
My question is, is there any recommended, preferably standard way to figure out how the member names are formatted?
The reason I want to know this is when I render the result in a Pivot table, the captions for the members will usually end up as labels on the headers of the pivot table. But since the captions may not be unique, it is desirable to also produce labels of the "ancestor" members, because together they do identify the member uniquely.
In the Mondrian example, I could use the parts of the member unique name to do this, but in icCube I cannot, since the member unique name is structured differently.
I have 2 questions:
1) How can I tell beforehand what format the XML/A provider will use to identify the members?
2) What would be the recommended way in icCube to produce the labels for the ancestor members?
UPDATE:
Luc Boudreau informed me that the ampersand indicates "key notation" - it designates the member key rather than its name. Thanks Luc!
A unique name in MDX is a string that is guaranteed to identify a single MDX entity when parsed; there is no possible collision with another MDX entity. How it is written depends on the XMLA provider: even though the name is "unique", there are multiple ways of constructing it, and each server chooses its own.
Regardless, a query written against one server will work on another, as both forms of "unique" name are correctly parsed.
(&amp; is simply the XML escape for &.)
Our advice: client code should not rely on the format of the unique names.
That being said, if you need parent "names", you should retrieve them explicitly using the Parent function and/or a calculated measure retrieving the name/caption property.
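As a hedged sketch of that approach: instead of parsing the unique name, ask the server for the ancestor captions explicitly through calculated measures. The cube name ([Sales]), hierarchy ([Time]) and level ([Month]) below are assumptions based on the Mondrian example above, and the statement still has to be sent through whatever XML/A client you use:

    # Build an MDX statement that returns the ancestor captions as extra columns,
    # so the client never has to parse UName strings.
    MDX = """
    WITH
      MEMBER [Measures].[Month Caption]   AS [Time].CurrentMember.Properties("MEMBER_CAPTION")
      MEMBER [Measures].[Quarter Caption] AS [Time].CurrentMember.Parent.Properties("MEMBER_CAPTION")
      MEMBER [Measures].[Year Caption]    AS [Time].CurrentMember.Parent.Parent.Properties("MEMBER_CAPTION")
    SELECT
      {[Measures].[Month Caption],
       [Measures].[Quarter Caption],
       [Measures].[Year Caption]} ON COLUMNS,
      [Time].[Month].Members ON ROWS
    FROM [Sales]
    """

    print(MDX)  # pass this statement to your XML/A Execute call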
Hope that helps.

What does Managed="0" in List view XML mean?

I've written a Data Extender class and editor extension that properly displays a few additional columns for items as you browse lists in the CME (folders and structure groups). I had to register my class to handle commands like GetList, GetListSearch, GetListUserFavorites, and GetListCheckedOutItems.
What I've noticed is that the code gets run even when a list of, say, schemas is loaded for a drop-down list in the CME (like when creating a new component and you get the list of schemas in a drop-down). So, even though my additional data columns aren't needed in that situation, the code is still executed and it slows things down.
It seems that it's the GetList command called in those situations. So, I can't just skip processing based on the command. So, I started looking at the XML that the class receives for the list and I've noticed when the code is run for the drop-downs, there's a Managed="0" in the XML. For example:
For a Structure Group list: <tcm:ListItems Managed="64" ID="tcm:103-546-4">
For a Folder list: <tcm:ListItems Managed="16" ID="tcm:103-411-2">
But for a Schema list: <tcm:ListItems ID="tcm:0-103-1" Managed="0">
For a drop-down showing keyword values for a category: <tcm:ListItems Managed="0" ID="tcm:103-506-512">
So, can I just use this Managed="0" as a flag to indicate that the list being processed isn't going to show my additional columns and I can just quit processing?
The Managed value is a representation of which items can be created inside an organizational item:
64 means you can create pages
16 means you can create components
10, for example, would mean you can create folders (2) + schemas (8)
518 - folders (2) + structure groups (4) + categories (512)
The value is 0 for non-organizational items.
The value depends on the item itself (you can't create pages in a folder, for example), as well as on the security settings you have on the publication and the organizational item.
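As a small illustration (plain Python, not Tridion API code; the names are mine), a Managed value can be decoded with the bit values listed above:

    # Bit values for the creatable item types, as listed above.
    ITEM_TYPE_BITS = {
        2: "Folder",
        4: "Structure Group",
        8: "Schema",
        16: "Component",
        64: "Page",
        512: "Category",
    }

    def decode_managed(managed):
        """Return the item types that the given Managed value says can be created."""
        return [name for bit, name in ITEM_TYPE_BITS.items() if managed & bit]

    print(decode_managed(518))  # ['Folder', 'Structure Group', 'Category']
    print(decode_managed(0))    # []  -- nothing can be created here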
Unfortunately, the CME can't currently offer the kind of granularity that would let you tell in a data extender where a particular WCF API call is coming from. Our WCF API is not context aware yet. It may change in the future.
Trusting Managed="0" is not a great idea.
The reason for that is that the model lists are cached on the client per filter. In the current design the filter contains CM-related data and nothing related to the context the request is being fired from.
Typically the client user interface reuses cached model data whenever possible. For instance, the same model list could be used in the CME dashboard and in a drop-down control placed in some item view, but with different XML list definitions: the first will have more columns defined in the list definition than the latter. They are basically different views of the same data.
Therefore you may want to think of different solutions for your problem.
Now... where is the data behind those additional columns coming from? Is it Tridion CM or a third-party provider?
Sometimes the web server caching may provide an acceptable way to improve the response times. But that's the kind of design you should evaluate and decide upon.
I think you would have a more robust solution if you read the ID of the list and only execute your code for lists of type 2 and 4 (Folders and Structure Groups respectively). But that won't help you with search views etc.
From previous experience, and from what User978511 says, the Managed attribute is an indication of the item types that can be created from the context of that list.
Unfortunately that means that the Managed attribute may well be 0 for any user that doesn't have sufficient rights to create items. E.g. check what Managed is in a Structure Group for a user that isn't allowed to create Pages or Structure Groups. It may well be 0 in that case too, meaning it is useless for your situation.
Update
You may be able to reach your goal better by looking at the columns parameter:
context.Parameters["columns"]
In a few tests I've run I get different values, depending on whether I get a list for the main list view, the tree or a drop down list.
543
23
7
Those values are a bit mask of these constants (from Constants.js):
/**
 * Defines the column filter.
 * Used to specify which attributes should be included in XML list data.
 * @enum
 */
Tridion.Constants.ColumnFilter =
{
    ID: 1,
    ID_AND_TITLE: 3,
    DEFAULT: 7,
    EXTENDED: 15,
    ALLOWED_ACTIONS: 16,
    VERSIONS: 32,
    INTERNALS: 64,
    URL: 128,
    XML_NAME: 256,
    CHECK_OUT_USER: 512,
    PUBTITLE_AND_ITEM_PATH: 1024
};
So from my limited testing it seems that drop-downs request DEFAULT columns, while the main list view and the tree both have ALLOWED_ACTIONS in there. This makes sense to me, since the user can interact with the list items in the tree and the list view, while they can only select them in the drop-downs. So checking for the presence of ALLOWED_ACTIONS in the columns parameter might be one way to reduce the number of places where your data extender adds information.
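As a hedged sketch of that check (the real data extender is C#, but the bit test is the same; the function name is mine), assuming the raw value of context.Parameters["columns"] arrives as a decimal string such as "543" or "7":

    ALLOWED_ACTIONS = 16  # Tridion.Constants.ColumnFilter.ALLOWED_ACTIONS

    def wants_extra_columns(columns_parameter):
        """True when the request includes ALLOWED_ACTIONS, i.e. list view or tree."""
        return int(columns_parameter) & ALLOWED_ACTIONS != 0

    print(wants_extra_columns("543"))  # True  -> main list view
    print(wants_extra_columns("23"))   # True  -> tree
    print(wants_extra_columns("7"))    # False -> drop-down, skip the extra work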

DynamoDB: creating a string set

I have a lot of objects with unique IDs. Every object can have several labels associated to it, like this:
123: ['a', 'hello']
456: ['dsajdaskldjs']
789: (no labels associated yet)
I'm not planning to store all objects in DynamoDB, only these sets of labels. So it would make sense to add labels like this:
find a record with (id = needed_id)
if there is one, and it has a set named label_set, add a label to this set
if there is no record with such an id, or the existing record doesn't have an attribute named label_set, create the record and the attribute, and initialize the attribute with a set consisting of the label
If I used sets of numbers, I could just use the ADD operation of the UpdateItem command. This command does exactly what I described. However, this does not work with sets of strings:
If no item matches the specified primary key:
ADD— Creates an item with supplied primary key and number (or set of numbers) for the attribute value. Not valid for a string type.
So I have to use a PUT operation with Expected set to {"label_set":{"Exists":false}}, followed (in case it fails) by an ADD operation. That is two operations, which is unfortunate: since you pay per operation, the cost is twice what it could be.
This limitation seems really weird to me. Why would something that works with number sets not work with string sets? Maybe I'm doing something wrong.
Using many records like (123, 'a'), (123, 'hello') instead of one record per object with a set is not a solution: I want to get all the values from the set at once, without any scans.
I use string sets from the Java SDK the way you describe all the time and it works for me. Perhaps it has changed? I basically follow the pattern in this doc:
http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/API_UpdateItem.html
ADD— Only use the add action for numbers or if the target attribute is a set (including string sets). ADD does not work if the target attribute is a single string value or a scalar binary value. The specified value is added to a numeric value (incrementing or decrementing the existing numeric value) or added as an additional value in a string set. If a set of values is specified, the values are added to the existing set. For example if the original set is [1,2] and supplied value is [3], then after the add operation the set is [1,2,3], not [4,5]. An error occurs if an Add action is specified for a set attribute and the attribute type specified does not match the existing set type.
If you use ADD for an attribute that does not exist, the attribute and its values are added to the item.
When your set is empty, it means the attribute isn't present. You can still ADD to it. In fact, a pattern that I've found useful is to simply ADD without even checking for the item. If it doesn't exist, it will create a new item using the specified key and create the attribute set with the value(s) I am adding. If the item exists but the attribute doesn't, it creates the attribute set and adds the value(s). If they both exist, it just adds the value(s).
The only piece that caught me up at first was that the value I had to add was a SS (String set) even if it was only one string value. From DynamoDB's perspective, you are always merging sets, even if the existing set is an empty set (missing) or the new set only contains one value.
IMO, from the way you've described your intent, you would be better off not specifying an Exists condition at all. You are having to do two steps because you are enforcing two different situations, but you are trying to perform the same action in both. So you might as well just blindly add the label and let DynamoDB handle the rest.
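For illustration, here is a minimal sketch of that pattern with boto3 (the table name and key attribute are assumptions, not something from the original post). A single unconditional UpdateItem with ADD covers all three cases: missing item, missing attribute, or existing set:

    import boto3

    dynamodb = boto3.resource("dynamodb")     # assumes configured credentials/region
    table = dynamodb.Table("object_labels")   # hypothetical table with hash key "id"

    # ADD creates the item if the key doesn't exist, creates label_set if the
    # attribute doesn't exist, and otherwise merges the new values into the set.
    table.update_item(
        Key={"id": 123},
        UpdateExpression="ADD label_set :labels",
        ExpressionAttributeValues={":labels": {"a", "hello"}},  # Python set -> DynamoDB string set
    )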
Maybe you could: (pseudo code)
try:
    add_with_update_item(hash_key=42, label="label")
except:
    element = Element(hash_key=42, labels=["label"])
    element.save()
With this graceful recovery approach, you need 1 call in the general case, 2 otherwise.
You are unable to use sets to do what you want because DynamoDB doesn't support empty sets. I would suggest just using a string with a custom schema and building the set from that yourself.
To avoid two operations, you can add a "ConditionExpression" to your request.
For example, pass this parameter along with your item:
"ConditionExpression": "attribute_not_exists(RecordID) and attribute_not_exists(label_set)"
Source documentation.
Edit: I found a really good guide about how to use the conditional statements
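As a hedged sketch of that idea with boto3 (reusing the hypothetical object_labels table from the earlier example; the fallback handling is just one reasonable option), a conditional PutItem only succeeds when the record does not exist yet:

    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("object_labels")  # hypothetical table

    try:
        table.put_item(
            Item={"id": 789, "label_set": {"first-label"}},
            ConditionExpression="attribute_not_exists(id)",  # only create, never overwrite
        )
    except ClientError as err:
        if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
            raise
        # The record already exists; fall back to an ADD update as shown above.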

Qt error "persistent model indexes corrupted" why?

I've a problem with my Qt/interview application. I use QTreeView to display tree data. I implemented my own model based on QAbstractItemModel.
I get the following error prior to the application crash. It often happens after I add a new record.
Could you explain to me the meaning of this error? What is a QPersistentModelIndex?
I'm not using QPersistentModelIndex in my code.
ASSERT failure in QPersistentModelIndex::~QPersistentModelIndex: "persistent model indexes corrupted"
Thanks.
QPersistentModelIndexes are (row, column, parent) references to items that are automatically updated when the referenced items are moved inside the model, unlike regular QModelIndex. For instance, if you insert one row, all existing persistent indexes positioned below the insertion point will have their row property incremented by one.
You may not use them directly, but QTreeView does, to keep track of expanded items and selected items, for example.
And for these persistent indexes to be updated, you have to call the functions QAbstractItemModel::beginInsertRows() and endInsertRows() around the actual row insertion(s) when you add new records.
See the end of the section about subclassing model classes for details: http://doc.trolltech.com/latest/qabstractitemmodel.html#subclassing
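As a minimal, hedged sketch of that pattern (PyQt5 here, and a flat QAbstractListModel for brevity; for a tree model the same calls are made with the appropriate parent index, and all names below are illustrative):

    from PyQt5.QtCore import QAbstractListModel, QModelIndex, Qt

    class RecordModel(QAbstractListModel):
        def __init__(self, records=None, parent=None):
            super().__init__(parent)
            self._records = list(records or [])

        def rowCount(self, parent=QModelIndex()):
            return 0 if parent.isValid() else len(self._records)

        def data(self, index, role=Qt.DisplayRole):
            if index.isValid() and role == Qt.DisplayRole:
                return self._records[index.row()]
            return None

        def add_record(self, record):
            row = len(self._records)
            # Announce the insertion so views and their persistent indexes stay valid...
            self.beginInsertRows(QModelIndex(), row, row)
            self._records.append(record)   # ...perform the actual change...
            self.endInsertRows()           # ...then finish the notification.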
I found this method QAbstractItemModel::persistentIndexList and I'm wondering what indexes it should return. All of them? Should this method return all nodes currently visible in the TreeView?
That method returns only the indexes for which a QPersistentModelIndex was created and is still in scope (as a local variable, a class member, or in a QList<QPersistentModelIndex>, for example).
Expanded or selected nodes are not necessarily currently visible, so you can't (and shouldn't anyway) assume anything about what these persistent indexes are used for.
You just have to keep them updated, and you only need to use persistentIndexList for big changes in the model, like sorting (see QTreeWidget's internal model: QTreeModel::ensureSorted (link)); for smaller incremental changes you have all the beginXxxRows/beginXxxColumns and endXxxRows/endXxxColumns methods.

How to set a default value to a destination schema element in a BizTalk map

I have a requirement in a BizTalk map where I will map some elements from the source schema to the destination schema, and the values will be assigned to destination schema elements based on some condition.
If those values are not assigned, I need to send some default value (N/A).
My map is not one-to-one such that I could use a Scripting functoid to send a default value; on top of that, the destination schema is a flat file, and in the source schema I have to loop a lot.
So can anybody give me a suggestion about how to set a default value for an element in the destination schema if nothing is mapped, using a BizTalk map or some setting in the schema?
What I have already tried: I opened the destination schema and, for all the elements, set the value 'N/A' in the "Default Value" property in the property tab. But when nothing is mapped, the default value does not appear; instead, the node itself is not created in the output file.
Please see the Map below for a good understanding
Map: http://www.biztalkgurus.com/cfs-filesystemfile.ashx/__key/CommunityServer.Discussions.Components.Files/13/0131.problem.JPG
Source Schema is a XML schema.
Destination Schema is a Flat file schema.
Now, in the above map, my source schema has a node called F4706, which will loop.
When the element "TypeAddressNumber" within the F4706 is "1", then I am mapping the remaining fields of that F4706 instance to "ship to" details in my destination schema
When the element "TypeAddressNumber" within the F4706 is "2",then I am mapping the remaining fields of that F4706 instance to "Reseller" details in my destination schema
When the element "TypeAddressNumber" within the F4706 is "3",then I am mapping the remaining fields of that F4706 instance to "EndUser" details in my destination schema
Now if I connect a Logical NOT functoid to the Logical Equal functoid and assign some default value, then the my destination node occurs Three times as one time the "=" functiod returns true one time and false other two times. But what I want is, if anything is there to map then map from "F4706" instance or assign the default value.
Find the INPUT File below
Input file: http://www.biztalkgurus.com/cfs-filesystemfile.ashx/__key/CommunityServer.Discussions.Components.Files/13/5430.ip.JPG
The output I'm expecting, and currently getting, is:
Current output: http://www.biztalkgurus.com/cfs-filesystemfile.ashx/__key/CommunityServer.Discussions.Components.Files/13/0724.curOP.JPG
Now if the Input file is like below :
Second input file: http://www.biztalkgurus.com/cfs-filesystemfile.ashx/__key/CommunityServer.Discussions.Components.Files/13/6403.otherIP.JPG
That is, when I don't have an F4706 node with TypeAddressNumber=2, I need to fill "N/A" in the Reseller-related nodes in my destination schema, which should look like below:
Expected output: http://www.biztalkgurus.com/cfs-filesystemfile.ashx/__key/CommunityServer.Discussions.Components.Files/13/0435.nextOP.JPG
If you go and check the XSLT which is getting generated, it writes an xsl:for-each, so if you use xsl:choose/xsl:otherwise, the conditions get checked multiple times and my output nodes get duplicated.
I also tried to set a global variable in the XSLT in the first loop and access it in the second loop to write the default value; unfortunately that doesn't work either, because a variable in XSLT is not a true variable. I think of it as a constant.
How to accomplish this? Any help is highly appreciated.
Put two "Value mapping" (Label them "Incoming" and "Default") on the map and drag the output from both to your destination (you will get a warning at compile time).
Put a "Logical NOT" on the map (Label it "NoValue").
Put a logical evaluation (Existence, IsNil, Length) that suits your need, to evaluate if you have an incoming value, and drag your source field to it. (Label it "HasValue")
Drag the result to the "Incoming" and the "Logical NOT".
Drag your source field to the "Incoming".
Drag the output from "NoValue" to "Default".
Add a constant parameter to "Default", by double clicking and insert new parameter, that is your default value.
Hope you understand this mess :)
I believe you are essentially trying to control the creation of an output node based on some condition.
I have tried this for records (you are trying to do this for elements, so I believe this should work for that as well).
I had connected the output of the Logical functoid to the record and the record was created only if the logical functoid returned true.
For default values, you are doing it the right way by putting the default value in the property grid for the schema element. So if nothing is mapped to this element, you will see in the XSL file that the element with the default value is generated.
