Informatica Analyst tool - rules

How can I write a rule that enforces "At least one of legal_name or trading_name must be passed, and it must not be null"?
IIF(ISNULL(LEGAL_NAME OR TRADING_NAME),'FAIL','PASS'),'PASS')
This is not working.
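One likely fix, as a sketch (assuming the intent is to fail only when both columns are null, and using the column names from the question): ISNULL takes a single value, not an OR of two columns, so test each column separately.
IIF(ISNULL(LEGAL_NAME) AND ISNULL(TRADING_NAME), 'FAIL', 'PASS')
If blank or whitespace-only strings should also count as missing, each test could become something like ISNULL(LEGAL_NAME) OR LTRIM(RTRIM(LEGAL_NAME)) = ''.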

How to avoid "Multiple properties exist for the provided key, use Vertex.properties(name)"?

How can I avoid "Multiple properties exist for the provided key, use Vertex.properties(name)" when the property has multiple values?
The vertex has a property called name with multiple values. How can I get any one value even though the property has several?
%%gremlin
g.V('d65135a3-8cd3-4edd-bc8d-f7087557e2a9').
project('s','s1').
by(values('name')).
by(outE('owns').inV().hasLabel('xy').elementMap())
Error:
{
  "detailedMessage": "Multiple properties exist for the provided key, use Vertex.properties(name)",
  "requestId": "71391776-ad7f-454d-8413-3032a9800211",
  "code": "InternalFailureException"
}
I tried reproducing your issue using this sample graph:
g.addV('set-test').
property('mySet','one').
property(set, 'mySet','two').
property(id,'set-test1')
but I was able to return properties OK.
g.V('set-test1').
project('s').
by(values('mySet'))
{'s': 'one'}
and to get every member of the set:
g.V('set-test1').
project('s','s2').
by(values('mySet').fold())
{'s': ['one', 'two'], 's2': ['one', 'two']}
However, I was able to reproduce the message by doing this:
g.V('set-test1').
project('s1','s2').
by(values('mySet'))
{
  "detailedMessage": "Multiple properties exist for the provided key, use Vertex.properties(mySet)",
  "requestId": "04e43bad-173c-454b-bf3c-5a59a3867ef6",
  "code": "InternalFailureException"
}
Note, however, that Neptune shows the same behavior here that you would see from Apache TinkerPop's TinkerGraph, so using fold is probably the way to go, as it allows the query to complete successfully.
As a side note, "multi property" values (such as sets) do allocate an ID to each set member. However, this is highly implementation-dependent, and I would not rely on these IDs. For example, Neptune does not persist property IDs in the database; they are generated "just in time" and can change. For completeness, though, here is an example of using property ID values:
g.V('set-test1').properties('mySet').id()
1002125571
1002283485
We can then use those ID values in a query such as:
g.V('set-test1').
project('s1').
by(properties('mySet').hasId('1002283485').value())
{'s1': 'two'}
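If the goal is simply to return any one member without hard-coding a property ID, a sketch along these lines (standard TinkerPop steps, though I have not verified it against Neptune) takes the first vertex property and extracts its value:
g.V('set-test1').
project('s1').
by(properties('mySet').limit(1).value())
Which member comes back first is not guaranteed, so treat the result as an arbitrary pick rather than a deterministic one.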

BizTalk 2013 R2 - Decision Shape not following correct logic path

Issue with an old BizTalk app (not designed or developed by me).
Its orchestration receives a particular message, which is then fed into a Decision shape. If the below logic applies (copied directly from the shape's first branch expression), it should go that route.
msg_inputCanonical.CRUD == "D" && msg_inputCanonical.DbTable == "Staff"
Using the Orchestration Debugger, however, I can see it follow the Else branch and eventually hit a Terminate shape.
I've checked msg_inputCanonical to confirm the values being passed through (extracted below from the tracked message part), and can see that they match the string conditions in accordance with the mapping (CRUD = ChOp):
<DbTable>Staff</DbTable>
<ChOp>D</ChOp>
There's nothing else I can see that's influencing this re-route, so can anyone think of any quirks that might be causing it?
Note: I've amended the WCF-SQL stored procedure that generates msg_inputCanonical, as prior to this it wasn't trimming the CRUD/DbTable values and had been leaving dead space in them.
There is also a map that uses ltrim/rtrim functoids on the ChOp property, but again, I can't see what harm trimming an already-trimmed field would do.
I have also tried replicating the logic in a dev environment, and there it works as expected, going down the correct branch when I pass the message through.
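One way to rule out hidden whitespace or control characters that survive the trims, as a hedged sketch: drop an Expression shape in front of the Decision and trace the raw values with their lengths (standard .NET tracing; the message and field names are taken from the question):
System.Diagnostics.Trace.WriteLine(
    "CRUD=[" + msg_inputCanonical.CRUD + "] len=" +
    msg_inputCanonical.CRUD.Length.ToString());
System.Diagnostics.Trace.WriteLine(
    "DbTable=[" + msg_inputCanonical.DbTable + "] len=" +
    msg_inputCanonical.DbTable.Length.ToString());
If a length comes back longer than the visible text, something upstream is still padding the field, and the string comparison will fail even though the tracked message looks correct.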

Firebase database rules – `data.exists()` always seems to be true, possible bug?

I am trying to secure my Firebase database to allow the creation of new records, but not the deletion of existing ones. Ultimately I plan to use Firebase Authentication in my app as well, and to allow users to update existing records if they are the author, but I am trying to get the simple case working first.
However, no matter what I try in the database rules simulator, and despite what the documentation seems to suggest, the value of data.exists() always seems to be true. From what I can understand from the documentation, the variable data represents a record in the database as it was before an operation took place. That is to say, for creates, data would not exist, and for updates/deletes, data would refer to a real record that exists in the database. This does not seem to be the case, to the point where I actually suspect a bug in Firebase, since with the following rules on my database, all write operations are disallowed:
{
  "rules": {
    ".read": true,
    ".write": "!data.exists()"
  }
}
This happens no matter what values I put into the simulator, be it Location or Data. I have even written a small EmberJS app to verify whether the simulator is telling the truth, and it, too, is denied permission for all write operations.
I really have no idea where to go from here, as I am pretty much out of things to try. I tried deleting all records from my database, which lets the simulator think it can perform write operations, but my test app still gets PERMISSION_DENIED, so I don't know what's causing the inconsistency there.
Is my understanding of the predefined data variable correct? If so, why can't I write the rules I want? I have seen snippets trying to achieve exactly my "create only, no delete" rule that seem to line up with my understanding.
Last note: I am trying this in a totally new Firebase project with JUST the rules above and only a few records of junk data lying around the database.
Because you have placed the !data.exists() at the root location of your database, data refers to the entire database, so you will only be able to write when the database is completely empty.
You indicate that you run your tests with "a few records of junk data" lying around your database. Those records cause data.exists() to be true.
You can achieve your goal by placing the !data.exists() rule at the specific location in your tree where you want to require that no data already exists. This is typically done at a location with a wildcard key, as in the example you linked:
{
  "rules": {
    // default rules are false if not specified
    "posts": {
      ".read": true, // everyone can read all posts
      "$postId": {
        // a new post can be created if it does not exist
        // existing posts can only be edited by their original "author"
        ".write": "!data.exists() && newData.exists() || data.child('author').val() == auth.uid",
        ".validate": "newData.hasChildren(['title', 'author', 'timestamp'])"
      }
    }
  }
}
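For illustration, a client write that satisfies the $postId rule above might look like this (a hypothetical sketch using the legacy web SDK; push() generates the wildcard key):
firebase.database().ref('posts').push({
  title: 'My first post',
  author: firebase.auth().currentUser.uid,
  timestamp: Date.now()
});
A later attempt by a different user to overwrite that record is then rejected: data.exists() is true, and data.child('author').val() no longer matches auth.uid.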

Maintaining consistency during multi-path updates when the paths are not deterministic and are variable

I need help with a scenario where we do multi-path updates to fanned-out data. If we calculate the set of paths and then update, and in between a new path is added somewhere, the data in the newly added path will be inconsistent.
For example, below is the data for blog posts. Posts can be tagged with multiple terms like "tag1" and "tag2". In order to find how many posts are tagged with a specific tag, I can fan the posts data out to the tags path as well:
/posts/postid1: {"Title": "Title 1", "body": "About Firebase", "tags": {"tag1": true, "tag2": true}}
/tags/tag1/postid1: {"Title": "Title 1", "body": "About Firebase"}
/tags/tag2/postid1: {"Title": "Title 1", "body": "About Firebase"}
Now consider, concurrently:
1a) User1 wants to modify the title of postid1 and builds the following multi-path update:
/posts/postid1/Title: "Title 1 modified"
/tags/tag1/postid1/Title: "Title 1 modified"
/tags/tag2/postid1/Title: "Title 1 modified"
1b) At the same time, User2 wants to add tag3 to postid1 and builds the following multi-path update:
/posts/postid1/tags: {"tag1": true, "tag2": true, "tag3": true}
/tags/tag3/postid1: {"Title": "Title 1", "body": "About Firebase"}
Both updates can succeed one after the other, and we end up with /tags/tag3/postid1 out of sync, since it still has the old title.
I can think of security rules to handle this, but I am not sure whether the following is correct or will work.
We can have updatedAt and lastUpdated fields and check that we are updating the same version of the post that we read:
posts":{
"$postid":{
".write":true,
".read":true,
".validate": "
newData.hasChildren(['userId', 'updatedAt', 'lastUpdated', 'Title']) && (
!data.exists() ||
data.child('updatedAt').val() === newData.child('lastUpdated').val())"
}
}
Also, for tags we do not want to check that again; instead we can check that /tags/$tag/$postid/updatedAt is the same as /posts/$postid/updatedAt:
"tags":{
"$tag":{
"$postid":{
".write":true,
".read":true,
".validate": "
newData.hasChildren(['userId', 'updatedAt', 'lastUpdated', 'Title']) && (
newData.child('updatedAt').val() === root.child('posts').child('$postid').val().child('updatedAt').val())”
}
}
}
With this, /posts/$postid has concurrency control built in, and users can only overwrite the version they actually read.
/posts/$postid also becomes the source of truth, and the other fan-out paths check whether their updatedAt field matches that of the primary source-of-truth path.
Will this bring consistency, or are there still problems? And can it hurt performance when done at scale?
Are multi-path updates and rules atomic together? That is, are the rules evaluated separately, in isolation, for multi-path updates like 1a and 1b above?
Unfortunately, Firebase does not provide any guarantees, or mechanisms, to provide the level of determinism you're looking for. I have had the best luck front-ending such updates with an API stack (GCF and Lambda are both very easy, serverless ways of doing this). The updates can be made in that layer, and even serialized if absolutely necessary. But there isn't a safe way to do this in Firebase itself.
There are numerous "hack" options you could apply. You could, for example, have a simple lock mechanism using a dedicated collection for tracking write locks: clients post to a lock collection, then verify that their key is the only member of that collection before performing a write. But I hope you'll agree that such cooperative systems have too many potential edge cases, potential security issues, and so on. In Firebase, it is best to design so that this component is not a requirement in the first place.
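As an illustration of moving the fan-out into that layer, here is a minimal, hypothetical Cloud Functions sketch (names and layout taken from the /posts and /tags structure above; one possible shape, not a drop-in solution). It re-fans the title out to every tag path whenever the post's title changes:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Re-copy the post title to /tags/<tag>/<postid>/Title whenever it changes.
exports.fanOutTitle = functions.database
  .ref('/posts/{postid}/Title')
  .onWrite(async (change, context) => {
    const postid = context.params.postid;
    const title = change.after.val();
    // Re-read the post's current tag set at fan-out time,
    // then build a single multi-path update.
    const tagsSnap = await admin.database()
      .ref('/posts/' + postid + '/tags').once('value');
    const update = {};
    tagsSnap.forEach((tag) => {
      update['/tags/' + tag.key + '/' + postid + '/Title'] = title;
    });
    return admin.database().ref().update(update);
  });
This narrows, though does not by itself eliminate, the 1a/1b race; a companion trigger on /tags/{tag}/{postid} creation that copies the current title the same way would cover the remaining case.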

How to create time-expiring data with Firebase Rules?

This talk mentions time-expiring data using Firebase rules at 22:55: https://www.youtube.com/watch?v=PUBnlbjZFAI
How can one do this? I didn't find any information about it.
I recommend two solutions (a sketch of the first follows this list).
1) Use Cloud Functions to record a message's path and the date it was posted. Then, every hour, sort that list by date, pick all the expired entries, and create a deep update object to null out every expired message. Nowadays you can use Cloud Scheduler to handle the periodic flush.
2) Make a rule that says anyone can delete expired messages, and have clients automatically delete expired messages when they are in a chat room.
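A minimal, hypothetical sketch of option 1 as a scheduled Cloud Function (assuming messages live under /messages with a numeric timestamp child and a ten-minute expiry; adjust both to taste):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Hourly sweep: null out every message older than ten minutes in one deep update.
exports.sweepExpired = functions.pubsub
  .schedule('every 60 minutes')
  .onRun(async () => {
    const cutoff = Date.now() - 600000;
    const snap = await admin.database().ref('messages')
      .orderByChild('timestamp').endAt(cutoff).once('value');
    const update = {};
    snap.forEach((msg) => { update[msg.key] = null; });
    if (Object.keys(update).length === 0) return null;
    return admin.database().ref('messages').update(update);
  });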
Written here: https://firebase.google.com/docs/database/security/securing-data
You can't have Firebase auto-delete your data, but you can make it unreadable (which is the same thing from the user's standpoint). Just send a timestamp child field with your data and check against it:
{
  "rules": {
    "messages": {
      "$message": {
        // only messages from the last ten minutes can be read
        ".read": "data.child('timestamp').val() > (now - 600000)",
        // new messages must have a string content and a number timestamp
        ".validate": "newData.hasChildren(['content', 'timestamp']) && newData.child('content').isString() && newData.child('timestamp').isNumber()"
      }
    }
  }
}
You can't do it using Firebase rules alone. You should either have a Node.js backend removing your old data or have clients do it for you. For example, before a client retrieves data, it could remove old data, as sketched below.
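A hypothetical client-side version of that cleanup (legacy web SDK; assumes the same /messages layout with a numeric timestamp, and rules that permit the delete):
var cutoff = Date.now() - 600000; // ten minutes, matching the rules above
firebase.database().ref('messages')
  .orderByChild('timestamp').endAt(cutoff)
  .once('value', function (snap) {
    snap.forEach(function (msg) {
      msg.ref.remove(); // delete each expired message before reading the room
    });
  });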
