wso2 ESB calling Sequence Mediator in a chain - wso2-data-services-server

I have the following mediation flow in WSO2 ESB for a client.
Sequence 1
  Call data service
  Check data availability
  if available
    Get data using data service
    Manipulate data using payload factory
    Iterate based on node
      send data to client
      get response
      create payload based on response to data service
      update database
    end iterate
  end if
end
Similar to sequence 1, I have sequence 2, sequence 3, ... sequence n, each calling different data services and different client endpoints. Sequence 1 works correctly, fetching data and updating the database. When the flow moves on to sequence 2, I can see in the logs that contents/messages from sequence 1 are still present in sequence 2, which causes sequence 2 to behave erroneously. My question is: is there something like a Java flush() or close() that I can use when moving from sequence 1 to sequence 2 in WSO2 ESB?
Thanks in advance.

Solution 1: You can use the clone mediator to create multiple instances of your message content.
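A minimal sketch of solution 1 (the sequence names are placeholders): each clone target receives its own copy of the current message, so changes made inside one sequence do not leak into the other.
<clone>
    <target sequence="sequence1"/>
    <target sequence="sequence2"/>
</clone>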
Solution 2: Another possibility is to store the initial content (before sequence1 starts) in a property and restore that initial content again before sequence2 starts. Use the enrich mediator for this.
As the clone mediator is not recommended (creates new threads!), I would go for the second solution with the enrich mediator.
<!-- store the initial message content -->
<enrich>
    <source type="body" clone="true"/>
    <target type="property" property="BodyBackup"/>
</enrich>
<sequence key="sequence1"/>

<!-- restore the message content -->
<enrich>
    <source type="property" property="BodyBackup"/>
    <target type="body"/>
</enrich>
<sequence key="sequence2"/>

Related

Kafka Producer for internal topics with individual configs

One of my topologies generates an internal state store, e.g. KSTREAM-AGGREGATE-STATE-STORE-0000000031 (see the snippet below), for which an internal topic <app-id>-KSTREAM-AGGREGATE-STATE-STORE-0000000031-changelog is created.
<...>
Processor: KSTREAM-FLATMAPVALUES-0000000022 (stores: [])
  --> KSTREAM-AGGREGATE-0000000032, KSTREAM-FLATMAP-0000000027, KSTREAM-MAP-0000000023, KSTREAM-MAP-0000000025, KSTREAM-MAP-0000000029
  <-- KSTREAM-TRANSFORMVALUES-0000000017
Processor: KSTREAM-AGGREGATE-0000000032 (stores: [KSTREAM-AGGREGATE-STATE-STORE-0000000031])
  --> KTABLE-TOSTREAM-0000000033
  <-- KSTREAM-FLATMAPVALUES-0000000022
Processor: KTABLE-TOSTREAM-0000000033 (stores: [])
  --> KSTREAM-PEEK-0000000034
  <-- KSTREAM-AGGREGATE-0000000032
<...>
The topology is defined as follows (BusObjKey and BusObj are both Avro objects with corresponding Serdes; TransformBusObj provides the business logic for the aggregation and the later mapping):
<...>
KStream<BusObjKey, BusObj> busObjStream = otherBusObjStream
        .groupByKey()
        .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
        .aggregate(BusObj::new,
                   TransformBusObj::aggregate,
                   Materialized.with(busObjKeySerde, busObjSerde))
        .toStream()
        .map(TransformBusObj::map);
<...>
How can I control the properties used for the producer that creates and sends messages to <app-id>-KSTREAM-AGGREGATE-STATE-STORE-0000000031-changelog? In particular, I need to turn compression on (e.g. config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy")). Since I do not want compression on all the other producers, I wonder how to achieve this in Spring Boot.
If you use the config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG,"snappy"), it will turn on compression for all the producers in the Kafka Streams application. The only workaround I can think of would be to provide your own producer via the overloaded KafkaStreams constructor that accepts a KafkaClientSupplier. In your custom producer, you can inspect the topic name before sending and manually compress the data. Since you're manually compressing the data, I believe you'd also have to provide a custom restore consumer that "knows" to decompress. But I'm not sure this suggestion would even work or would be worth the effort.
HTH,
Bill
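For reference, a minimal sketch of the KafkaClientSupplier route described above (the class name is made up; KafkaClientSupplier lives in org.apache.kafka.streams, and its exact set of methods varies slightly between Kafka versions):
import java.util.Map;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.KafkaClientSupplier;

// Illustrative only: Kafka Streams asks this supplier for all of its clients,
// so this is where a wrapping Producer could inspect each record's topic and
// treat the changelog topic differently.
public class ChangelogAwareClientSupplier implements KafkaClientSupplier {

    @Override
    public Producer<byte[], byte[]> getProducer(Map<String, Object> config) {
        // A wrapping Producer could look at record.topic() in send(...) and
        // compress payloads for the changelog topic only (see caveats above).
        return new KafkaProducer<>(config, new ByteArraySerializer(), new ByteArraySerializer());
    }

    @Override
    public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) {
        return new KafkaConsumer<>(config, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    }

    @Override
    public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) {
        // Would also need to "know" how to decompress manually compressed data.
        return new KafkaConsumer<>(config, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    }

    @Override
    public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) {
        return new KafkaConsumer<>(config, new ByteArrayDeserializer(), new ByteArrayDeserializer());
    }

    @Override
    public Admin getAdmin(Map<String, Object> config) {
        return Admin.create(config);
    }
}
The supplier is then passed in via new KafkaStreams(topology, props, new ChangelogAwareClientSupplier()).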

Sabre: How to pass 2 OTA_AirPriceRQ in EnhancedAirBookRQ?

How can I pass 2 OTA_AirPriceRQ in 1 EnhancedAirBookRQ for booking RoundTrip in Sabre?
Consider the below example:
<EnhancedAirBookRQ>
  <OTA_AirBookRQ>
    ...
    <FlightSegment>
      <!-- Segment 1 Details -->
    </FlightSegment>
    <FlightSegment>
      <!-- Segment 2 Details -->
    </FlightSegment>
  </OTA_AirBookRQ>
  <OTA_AirPriceRQ>
    <PriceRequestInformation>
      <OptionalQualifiers>
        <PricingQualifiers CurrencyCode='INR'>
          <PassengerType Code='ADT' Force='true' Quantity='1'/>
        </PricingQualifiers>
      </OptionalQualifiers>
    </PriceRequestInformation>
  </OTA_AirPriceRQ>
  <PostProcessing IgnoreAfter="false">
    <RedisplayReservation/>
  </PostProcessing>
</EnhancedAirBookRQ>
From the above request, I want to pass another OTA_AirPriceRQ for segment 2 to achieve a round trip.
But I get an error when I repeat the OTA_AirPriceRQ tag.
Try the SegmentSelect element under PriceRequestInformation/OptionalQualifiers/PricingQualifiers/ItineraryOptions.
By default, all segments will be priced the same way, so unless you want to do something special for particular segment/s, you don't need to add extra qualifiers.
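For illustration, segment-specific qualifiers would look roughly like this (the SegmentSelect attribute shown is based on the Sabre schema as I recall it; verify it against the OTA_AirPriceRQ version you use):
<OTA_AirPriceRQ>
  <PriceRequestInformation>
    <OptionalQualifiers>
      <PricingQualifiers CurrencyCode='INR'>
        <ItineraryOptions>
          <!-- price only the selected segments with this qualifier set -->
          <SegmentSelect Number='1'/>
          <SegmentSelect Number='2'/>
        </ItineraryOptions>
        <PassengerType Code='ADT' Force='true' Quantity='1'/>
      </PricingQualifiers>
    </OptionalQualifiers>
  </PriceRequestInformation>
</OTA_AirPriceRQ>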
OTA_AirPriceRQ is used to get pricing information (a price breakdown) for a specific travel; the travel details (origin, destination, classes, flight numbers, etc.) have to be provided in OTA_AirBookRQ. It is part of the EnhancedAirBookRQ transaction you are using (but should be defined prior to OTA_AirPriceRQ in the request XML).
To sum it up: when you provide information about the outbound and inbound flights in OTA_AirBookRQ, then one OTA_AirPriceRQ returns the full pricing information (no separate OTA_AirPriceRQ is required).
You can find more information here
https://developer.sabre.com/docs/read/soap_apis/air/book/orchestrated_air_booking

How can I reserve air seats for all segments in a given PNR?

I am planning to use the <AirSeatRQ> request using Sabre's SOAP API, but according to the documentation, you have to request a seat assignment for each passenger on each segment with the user's preference.
Something like this according to the example on Dev Studio:
<AirSeatRQ ReturnHostCommand="false" TimeStamp="2011-10-27T15:30:00-06:00" Version="2.0.0">
  <!--Repeat Factor=0-->
  <Seats>
    <Seat BoardingPass="true" ChangeOfGauge="true" NameNumber="1.1" Number="21A" Preference="AN" SegmentNumber="1"/>
  </Seats>
</AirSeatRQ>
Also, according to the request documentation, the repeat factor for the <Seats> request is zero. Does that mean that if I want to assign seats for all passengers on all segments, I have to send several requests?
Ideally, I would like to have the seats for all passengers in all segments automatically assigned after reading the PNR. Is that possible through Web Services?
Checking the <PassengerDetailsRQ> XML schema definition, an <AirSeatRQ> can be sent along with it. I guess you could perform a standalone <AirSeatRQ> request, but bundling it with the passenger details is easier and saves us from making extra requests to Sabre's API.
You have to send a <Seat> request for each passenger on each segment of the itinerary. This is a working example I did for a two-leg itinerary, each leg consisting of two segments, for two adults:
I'm omitting most of the passenger details properties and focusing on the AirSeat element:
<PassengerDetailsRQ Version="2.3.0">
  <PriceQuoteInfo HaltOnError="true"></PriceQuoteInfo>
  <SpecialReqDetails>
    <AddRemarkRQ>
      <RemarkInfo>
        <Remark Code="H" Type="General">
          <Text>THANK YOU FOR BOOKING MAURICIO CUENCA AIRLINES</Text>
        </Remark>
      </RemarkInfo>
    </AddRemarkRQ>
    <AirSeatRQ>
      <Seats>
        <Seat NameNumber="1.1" Preference="AN" SegmentNumber="1"/>
        <Seat NameNumber="1.2" Preference="AN" SegmentNumber="2"/>
        <Seat NameNumber="1.1" Preference="AN" SegmentNumber="3"/>
        <Seat NameNumber="1.2" Preference="AN" SegmentNumber="4"/>
      </Seats>
    </AirSeatRQ>
    <SpecialServiceRQ HaltOnError="true">
      <SpecialServiceInfo></SpecialServiceInfo>
    </SpecialServiceRQ>
  </SpecialReqDetails>
  <TravelItineraryAddInfoRQ HaltOnError="true">
    <AgencyInfo></AgencyInfo>
    <CustomerInfo></CustomerInfo>
  </TravelItineraryAddInfoRQ>
</PassengerDetailsRQ>
This way, right after the PNR is created, all seats for all passengers in every segment are already assigned and there is no need for further requests asking for seat assignments.
That seems to be the case.
Testing multiple <Seat> elements inside <Seats> returns a schema validation error. Same when using multiple <Seats> elements.
Looks like the only option right now is to send multiple requests, one for each passenger on each segment.

Holding data processing for incomplete data sets with Mule and a collection-aggregator

I need to collect and process sets of files generated by another organization. For simplicity, say that the set consists of two files, a summary file and a detail file named like: SUM20150701.dat and DTL20150701.dat, which would constitute a set for date 20150701. The issue is, sets need to be processed in order, and the transmission of files from an outside organization can be error prone such that a file may be missing. If this occurs, this set of files should hold, as should any following sets that are found. As example, at the start of the mule process, the source folder may have in it: SUM20150701.dat, SUM20150703.dat, DTL20150703.dat. That is, the data set for 20150701 is incomplete while 20150703 is complete. I need to have both data sets hold until DTL20150701.dat arrives, then process them in order.
In this simplified form of my Mule process, a source folder is watched for files. When found, they are moved to an archive folder and passed to the collection-aggregator, using the date as the sequence and correlation values. When a set is complete, it is moved to a destination folder. A lengthy timeout is used on the collector to make sure incomplete sets are not processed:
<file:connector name="File" autoDelete="false" streaming="false" validateConnections="true" doc:name="File">
    <file:expression-filename-parser />
</file:connector>
<file:connector name="File1" autoDelete="false" outputAppend="true" streaming="false" validateConnections="true" doc:name="File" />
<vm:connector name="VM" validateConnections="true" doc:name="VM">
    <receiver-threading-profile maxThreadsActive="1"></receiver-threading-profile>
</vm:connector>
<flow name="fileaggreFlow2" doc:name="fileaggreFlow2">
    <file:inbound-endpoint path="G:\SourceDir" moveToDirectory="g:\SourceDir\Archive" connector-ref="File1" doc:name="get-working-files"
                           responseTimeout="10000" pollingFrequency="5000" fileAge="600000">
        <file:filename-regex-filter pattern="DTL(.*).dat|SUM(.*).dat" caseSensitive="false"/>
    </file:inbound-endpoint>
    <message-properties-transformer overwrite="true" doc:name="Message Properties">
        <add-message-property key="MULE_CORRELATION_ID" value="#[message.inboundProperties.originalFilename.substring(5, message.inboundProperties.originalFilename.lastIndexOf('.'))]"/>
        <add-message-property key="MULE_CORRELATION_GROUP_SIZE" value="2"/>
        <add-message-property key="MULE_CORRELATION_SEQUENCE" value="#[message.inboundProperties.originalFilename.substring(5, message.inboundProperties.originalFilename.lastIndexOf('.'))]"/>
    </message-properties-transformer>
    <vm:outbound-endpoint exchange-pattern="one-way" path="Merge" doc:name="VM" connector-ref="VM"/>
</flow>
<flow name="fileaggreFlow1" doc:name="fileaggreFlow1" processingStrategy="synchronous">
    <vm:inbound-endpoint exchange-pattern="one-way" path="Merge" doc:name="VM" connector-ref="VM"/>
    <processor-chain doc:name="Processor Chain">
        <collection-aggregator timeout="1000000" failOnTimeout="true" doc:name="Collection Aggregator"/>
        <foreach doc:name="For Each">
            <file:outbound-endpoint path="G:\DestDir1" outputPattern="#[function:datestamp:yyyyMMdd.HHmmss].#[message.inboundProperties.originalFilename]" responseTimeout="10000" connector-ref="File1" doc:name="Destination"/>
        </foreach>
    </processor-chain>
</flow>
This correctly processes sets found in order if all sets are complete. It correctly waits for incomplete sets to fill, but it does not hold the following sets; that is, in the above example, set 20150703 will process while 20150701 is still waiting for its DTL file.
Is there a setting or another construct which will force the collection-aggregator element to wait if there is an earlier collection which is not complete?
I am using the date part of the file name for both the correlation and sequence IDs, which does ensure that sets are processed in the order I want when all sets are complete. It does not matter if some dates do not exist (as with 20150702 in this case), only that existing files are processed in order and that sets must be complete.
In the end, I could not get the collection-aggregator to do this. To overcome this, I built a Java class which contains Maps for the SUM and DTL files, with the correlation ID as the key, and a sorted list of open keys.
The Java class then monitors for a completed set on the smallest key and signals back to the Mule flow when that set is available for processing.
The Mule flow must be put into synchronous mode while processing the files to prevent a data race. When complete, it signals the Java class that processing is done and that the set of data can be dropped from the list/Maps, and it receives an indication back if the next set is ready to process.
It is not the prettiest, and I would have preferred not to use custom code for this, but it gets the job done.
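For what it is worth, a stripped-down sketch of that kind of tracker (the class and method names here are hypothetical, not the actual code):
import java.util.Optional;
import java.util.TreeMap;

// Hypothetical sketch: tracks which halves of each date's set have arrived
// and only releases the oldest set once both of its files are present.
public class FileSetTracker {

    // Correlation ID (yyyyMMdd from the file name) -> file name, kept sorted by key.
    private final TreeMap<String, String> sumFiles = new TreeMap<>();
    private final TreeMap<String, String> dtlFiles = new TreeMap<>();

    // Called from the flow for every file that arrives.
    public synchronized void register(String correlationId, String fileName) {
        if (fileName.toUpperCase().startsWith("SUM")) {
            sumFiles.put(correlationId, fileName);
        } else {
            dtlFiles.put(correlationId, fileName);
        }
    }

    // The oldest set is releasable only when both of its files are present.
    public synchronized Optional<String> nextCompleteSet() {
        String oldest = smallestOpenKey();
        if (oldest != null && sumFiles.containsKey(oldest) && dtlFiles.containsKey(oldest)) {
            return Optional.of(oldest);
        }
        return Optional.empty();
    }

    // Called by the flow once a set has been written to the destination.
    public synchronized void markProcessed(String correlationId) {
        sumFiles.remove(correlationId);
        dtlFiles.remove(correlationId);
    }

    private String smallestOpenKey() {
        String sum = sumFiles.isEmpty() ? null : sumFiles.firstKey();
        String dtl = dtlFiles.isEmpty() ? null : dtlFiles.firstKey();
        if (sum == null) return dtl;
        if (dtl == null) return sum;
        return sum.compareTo(dtl) <= 0 ? sum : dtl;
    }
}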

Dynamics AX 2009 AIF Tables

Background
I have an issue where roughly once a month the AIFQueueManager table is populated with ~150 records which relate to messages that had been sent to AX over 6 months ago (where they "successfully failed", i.e. errored due to violation of business rules but returned an exception as expected).
Question
What tables are involved in the AIF inbound message process, and in what order do events occur? e.g. the XML file is picked up and recorded in the AifDocumentLog, the data is extracted and added to the AifQueueManager and AifGatewayQueue tables, records from there are then inserted into the AifMessageLog, etc.
Thanks in advance.
There are four main AIF classes. I will be talking about the inbound side only, focusing on the included file system adapter and flat XML files. I hope this makes things a little less hazy.
AIFGatewayReceiveService - Uses adapters/channels to read messages in from different sources, and dumps them in the AifGatewayQueue table
AIFInboundProcessingService - This processes the AifGatewayQueue table data and sends it to the Ax[Document] classes
AIFOutboundProcessingService - This is the inverse of #2. It creates XMLs with the relevant metadata
AIFGatewaySendService - This is the inverse of #1, where it uses adapters/channels to send messages out to different locations from the AifGatewayQueue
For #1
So #1 basically fills the AifGatewayQueue, which is just a queue of work. It loops through all of your channels and then finds the relevant adapter by ClassId. The adapters are classes that implement AifIntegrationAdapter and AifReceiveAdapter, if you want to make your own custom one. When it loops over the different channels, it then loops over each "message" and tries to receive it into the queue.
If it can't process the file for some reason, it catches the exceptions and records them in the SysExceptionTable [Basic>Periodic>Application Integration Framework>Exceptions]. These messages are scraped from the infolog, and they are generated mostly by the receive adapter, which would be AifFileSystemReceiveAdapter for my example.
For #2
So #2 is processing the inbound messages sitting in the queue (ready/inprocess). The AifRequestProcessor\processServiceRequest does the work.
From this method, it will call:
Various calls to Classes\AifMessageManager, which puts records in the AifMessageLog and the AifDocumentLog.
This key line: responseMessage = AifRequestProcessor::executeServiceOperation(message, endpointActionPolicy); which actually does the operation against the Ax[Document] classes by eventually getting to AifDispatcher::callServiceMethod(...)
It gets the return XML and packages that into an AifMessage called responseMessage, and returns that where it may be logged. It also takes that return value and, if there is a response channel, submits it back into the AifGatewayQueue.
AifQueueManager is actually cleared and populated on the fly by calling AifQueueManager::createQueueManagerData().
