Implement a recursive function in MapForce - recursion

I'm using Altova MapForce to auto-generate XSLT for transforming messages. It's a friendly tool with many built-in functions, but I have a problem that I don't know how to solve with them, even though it would be easy in a typical programming language. In the initial message a Datagroup is allowed to occur from 0 to 99999 times, while in the final message it is allowed only 0 to 99 times. When the count of Datagroup occurrences in the initial message is greater than 99, the remaining ones must be mapped to a second Datagroup, and so on, so that all occurrences of the Datagroup in the first message end up in the second message in groups of 99. So we must break the iterations over the Datagroup of the first message into groups of 99. The first-items function gives me the first group (count = 99), but how can I work through the remaining groups without chaining the skip-first-items function over a thousand times? I say over a thousand because ceil(99999 / 99) = 1011 groups in the worst case.
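For reference, the grouping arithmetic itself is trivial in ordinary code; here is a minimal Python sketch of what is being asked for (this is not MapForce itself — in the generated XSLT, one common idiom is to group on ceiling(position() div 99) rather than chaining skip-first-items):

```python
def chunk(items, size=99):
    # Slice the source occurrences into consecutive groups of at most `size`.
    # range() steps through 0, 99, 198, ... so each slice is one output Datagroup.
    return [items[i:i + size] for i in range(0, len(items), size)]

datagroups = list(range(250))      # pretend the source message had 250 occurrences
for group in chunk(datagroups):
    print(len(group))              # 99, 99, 52 -> three output Datagroups
```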

Related

Vaadin 14 Lazy Loading Fetch iterations - Not what we expect

We are attempting to use the CallbackDataProvider class to perform lazy loading using a Grid component.
Our data source uses a JPA implementation with pagination.
With a page size of 20 and a query that returns 200 rows in the result set, the callback seems to perform only 2 fetches: the first for 20 rows, the second for the remaining 180 rows.
This is not what we expected; for 200 rows we expected 20 rows on each fetch, i.e. 10 fetches of 20 rows each.
Is our expectation incorrect here?
With this behavior, if there are 1000 or 2000 rows in the result set, I don't see how lazy loading is of any benefit, since fetching 980 rows on the second fetch defeats the purpose of lazy loading.
Anyone have a similar experience or is there something we are missing?
The actual buffer size of the loaded data is determined by the component's web client part; the page size is only an initial parameter. The default page size is 50, which under normal circumstances leads the Grid to load 100 items at a time. If the web client determines that the page size is too small for its visual size, it will use a larger buffer. Page sizes as small as 20 usually do not work well.

Debatching Biztalk flat file message into individual grouped flat files based on value

I have an issue where I am trying to debatch a flat file in BizTalk Server (comma-delimited to tab-delimited) into individual flat files based on a value in the original file (in this example, PONumber).
Sample input:
PartNumber,Weight,PONumber,Other
21519,234,46788,1
81919,456,47115,1
91910,789,47115,1
This should result in 2 messages:
PartNumber Weight PONumber Other
21519 234 46788 1
and
PartNumber Weight PONumber Other
81919 456 47115 1
91910 789 47115 1
I have seen similar things, but no definite answers, or the samples are dead links. Does anyone have a sample where they have done something like this, or a good solution?
Option 1: Convoy pattern
Change your schema so that it has a max occurs of 1 for the PO line; this will debatch each line into its own message when it is received.
Promote the PONumber so that it is a promoted property in the message context.
Have an Orchestration that has a correlation set based on the PO number, and initialises this on the first receive shape.
Have a receive shape with a following correlation, sitting in a wait shape inside a loop, to receive all the other lines with the same PO number and combine them into a single message.
Option 2: Staging database
The other option is to just insert all of the rows into a SQL database, and then have a stored procedure, which you poll, that returns all the lines for a single PO.
This can sometimes be simpler, and it avoids the issue of zombies, since you can implement it as a messaging-only pattern or with a simpler Orchestration without a loop. (For comparison, the core grouping step is sketched below.)
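Outside of BizTalk, the grouping logic itself is only a few lines; here is a Python sketch of the split-by-PONumber step, assuming the sample comma-delimited file above (the file names are made up for illustration):

```python
import csv
from collections import defaultdict

# Read the comma-delimited input and bucket the rows by PONumber.
with open("input.csv", newline="") as f:
    reader = csv.DictReader(f)            # header: PartNumber,Weight,PONumber,Other
    fields = reader.fieldnames
    by_po = defaultdict(list)
    for row in reader:
        by_po[row["PONumber"]].append(row)

# Emit one tab-delimited "message" per PO, each with its own header line.
for po, rows in by_po.items():
    with open(f"po_{po}.txt", "w") as out:
        out.write("\t".join(fields) + "\n")
        for row in rows:
            out.write("\t".join(row[c] for c in fields) + "\n")
```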

confluent-kafka-python library: read offset per topic per consumer_group

Due to the pykafka EOL we are in the process of migrating to confluent-kafka-python. For pykafka we wrote an elaborate script that produced output in the format:
topic          consumer group    offset
topic_alpha    total_messages    100
topic_alpha    consumer_a        10
topic_alpha    consumer_b        25
I am wondering whether there is Python code that can do something similar for confluent-kafka-python?
Small print: there is a partial example of how to read offsets for a given consumer_group. However, I struggle to get the list of consumer groups per topic without manually parsing __consumer_offsets.
Use admin_client.list_groups() to get the list of consumer groups, admin_client.list_topics() to get all topics and partitions in the cluster, and consumer.get_watermark_offsets() for the given topics.
Then, for each consumer group, instantiate a new consumer with the corresponding group.id, create a TopicPartition list to query committed offsets for, and call c.committed() to retrieve the committed offsets.
Subtract the committed offsets from the high watermarks to get the lag per partition.
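Putting those pieces together, a minimal sketch with confluent-kafka-python (the bootstrap address and the skipping of internal topics are assumptions, not part of the question):

```python
from confluent_kafka import Consumer, TopicPartition
from confluent_kafka.admin import AdminClient

BOOTSTRAP = "localhost:9092"  # assumption: replace with your brokers

admin = AdminClient({"bootstrap.servers": BOOTSTRAP})
cluster_md = admin.list_topics(timeout=10)                # all topics + partitions
group_ids = [g.id for g in admin.list_groups(timeout=10)]

for group_id in group_ids:
    # A throwaway consumer with this group.id can query the group's committed offsets.
    consumer = Consumer({"bootstrap.servers": BOOTSTRAP, "group.id": group_id})
    for topic, meta in cluster_md.topics.items():
        if topic.startswith("__"):                        # skip internal topics
            continue
        partitions = [TopicPartition(topic, p) for p in meta.partitions]
        for tp in consumer.committed(partitions, timeout=10):
            if tp.offset < 0:                             # no committed offset here
                continue
            low, high = consumer.get_watermark_offsets(tp, timeout=10)
            print(f"{topic}\t{group_id}\t{tp.partition}\t"
                  f"committed={tp.offset}\tlag={high - tp.offset}")
    consumer.close()
```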

How to count agents and save the number to a variable

I would like to count the number of agents that exit via the Sink and save that number continuously to my variable "Loads" while the simulation runs.
(screenshot: https://imgur.com/rAUQ52n)
AnyLogic suggests the following, but I can't seem to get it right:
long count() - returns the number of agents exited via this Sink block.
Hope some of you can help me out.
It should be easy... you don't even need a variable, since sink.count() always holds the information you need.
But if you insist on creating one, you can create a variable called counter of type int and, in the sink action, just use counter++;

Adobe LiveCycle Designer ES2 percentages

I need to make some percentage fields, but I cannot make them work the way I need.
In total there will be 5 fields, with user input in the format XXX%.
The first issue is that the fields should sum to 100% in total, otherwise an error message should appear.
The second issue has to do with the field format. The user should input 5 and the field should turn it into 005%.
Accordingly, an input of 10 -> 010%, and 100 -> 100%.
Of course, the maximum input should be 100.
Any help?!?
Thank you in advance!
I assume the total field is calculated automatically using the calculate event; in that case you can use JavaScript to display an error if all the fields are filled and the total is not equal to 100.
For the display, you need to use patterns; see the documentation at http://help.adobe.com/en_US/livecycle/9.0/designerHelp/index.htm?content=000490.html
For data validation, you can use the validate event on the fields or manually check the value in the exit event of each field.
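The logic behind both requirements is small; here is a Python sketch of the intended behavior (Designer itself would do this with a display pattern plus script on the validate or exit events — the function names here are illustrative only):

```python
def format_percent(value):
    # 5 -> "005%", 10 -> "010%", 100 -> "100%" (zero-padded to three digits)
    return f"{value:03d}%"

def validate(fields):
    # Each of the five fields must be 0..100, and together they must sum to 100.
    if any(not 0 <= v <= 100 for v in fields):
        return "Each field must be between 0 and 100."
    if sum(fields) != 100:
        return "The fields must sum to 100%."
    return None  # valid

print([format_percent(v) for v in (5, 10, 100)])  # ['005%', '010%', '100%']
print(validate([20, 20, 20, 20, 20]))             # None -> valid
```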
