What's the relation between load and for? - tsung

I am trying Tsung for the first time, and I need some clarification.
I am using the load tag as follows:
<load>
  <arrivalphase phase="1" duration="1" unit="minute">
    <users maxnumber="100000" interarrival="0.01" unit="second"/>
  </arrivalphase>
</load>
But how does the for loop below work?
<sessions>
  <session name="root" probability="100" type="ts_http">
    <for from="1" to="2" var="i">
      <request>
        <http url="/test/counter" method="POST" contents="bla=blu&amp;name=glop"></http>
      </request>
    </for>
  </session>
</sessions>
What I thought is that the loop would count from 1 to 2, sending only two requests; however, when I run the XML file, I get hundreds of requests! Does this mean that each user in the arrivalphase sends the two requests in the for loop above?
Can someone explain the relation between the for tag and the load tag in the above example?

Your analysis is right: during the first minute of the test, you create 100 new users per second, and each user sends the two requests defined in the for loop.
The load element defines the rules by which Tsung generates users; the session element defines the logic that each user performs.
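To check the numbers (assuming the arrival rate is sustained for the whole phase and maxnumber is never reached): one minute at one new user every 0.01 s gives 60 / 0.01 = 6,000 users, and two requests per session gives 6,000 × 2 = 12,000 requests, which is why you see far more than two.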

Related

Oozie coordinator - file event based trigger - multiple firing

I'm trying to understand why an Oozie 4.2 coordinator job, which should wait for a dataset, fires multiple times. My coordinator job looks like this:
<coordinator-app name="ConfirmDataMasterTrigger"
frequency="${frequencyMins}"
start="${startTime}"
end="${endTime}"
timezone="${timeZoneDef}"
xmlns="uri:oozie:coordinator:0.4"
xmlns:sla="uri:oozie:sla:0.2">
<controls>
<timeout>${TimeOutMins}</timeout>
<concurrency>${Concurrency}</concurrency>
<execution>${Execution}</execution>
</controls>
<datasets>
<dataset name="inputDS"
frequency="${coord:days(1)}"
initial-instance="${startTime}"
timezone="${timeZoneDef}">
<uri-template>${triggerFileDir}</uri-template>
<done-flag></done-flag>
</dataset>
</datasets>
<input-events>
<data-in name="ConfirmDataMasterTrigInput"
dataset="inputDS">
<instance>${coord:current(0)}</instance>
</data-in>
</input-events>
<action>
<workflow>
<app-path>${workflowAppPath}</app-path>
<configuration>
<property>
<name>SaveDateString</name>
<value>${coord:formatTime(coord:actualTime(),"-yyyyMMdd-HHmmss")}</value>
</property>
<property>
<name>WaitForThisInputData</name>
<value>${coord:dataIn('ConfirmDataMasterTrigInput')}</value>
</property>
</configuration>
</workflow>
</action>
</coordinator-app>
With a properties file that looks like this:
nameNode=hdfs://hc1m1.nec.co.nz:8020
jobTracker=hc1r1m2.nec.co.nz:8050
hdfsUser=oozie
wfProject=ConfirmDataMaster
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.use.system.libpath=true
oozie.wf.rerun.failnodes=true
moveFile=ConfirmDataMaster_edit.csv
sourceDir=${nameNode}/mule/sheets/input/ConfirmDataMaster/
targetDir=/mule/sheets/store/
sourceFile=${sourceDir}${moveFile}
targetFile=${targetDir}${moveFile}
frequencyMins=10
startTime=2016-07-31T12:00Z
endTime=2099-01-01T12:00Z
timeZoneDef=GMT+12:00
TimeOutMins=10
Concurrency=1
Execution=FIFO
triggerDir=trigger/
triggerFileDir=${sourceDir}${triggerDir}
doneFlag=trigger.dat
workflowAppPath=${nameNode}/user/${hdfsUser}/wf/${wfProject}
oozie.coord.application.path=${nameNode}/user/${hdfsUser}/wf/${wfProject}
I am not having a problem getting a workflow to be triggered by a coordinator given a dataset-based event. What I am seeing is that the underlying workflow is continuously triggered. Can anyone advise what changes I should make, or point out my error? My workflow does clean up and delete the trigger path. Thanks in advance.
I'll answer my own question, because I've worked out the solution and it's a bit obvious really; I was just a little confused. The firing frequency is controlled by the coordinator and dataset frequencies, and also by the trigger directory and file. If you don't want to require a trigger file, leave done-flag empty; if the element is not added at all, the default flag file is _SUCCESS.
So if the trigger is available, the workflow will fire at the specified frequencies. I have therefore changed my coordinator and dataset frequencies to 30 minutes. As a final task, my workflow removes the trigger.
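A minimal sketch of the adjusted dataset (hedged; it reuses the doneFlag, startTime, and timeZoneDef properties already defined above, and assumes frequencyMins is changed to 30 so the coordinator frequency matches):
<dataset name="inputDS"
         frequency="${coord:minutes(30)}"
         initial-instance="${startTime}"
         timezone="${timeZoneDef}">
  <uri-template>${triggerFileDir}</uri-template>
  <!-- require an explicit trigger file instead of an empty done-flag -->
  <done-flag>${doneFlag}</done-flag>
</dataset>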

How can I reserve air seats for all segments in a given PNR?

I am planning to use the <AirSeatRQ> request via Sabre's SOAP API, but according to the documentation, you have to request a seat assignment for each passenger on each segment with the user's preference.
Something like this according to the example on Dev Studio:
<AirSeatRQ ReturnHostCommand="false" TimeStamp="2011-10-27T15:30:00-06:00" Version="2.0.0">
  <!-- Repeat Factor=0 -->
  <Seats>
    <Seat BoardingPass="true" ChangeOfGauge="true" NameNumber="1.1" Number="21A" Preference="AN" SegmentNumber="1"/>
  </Seats>
</AirSeatRQ>
Also, according to the request documentation, the repeat factor for the <Seats> request is zero. Does that mean that if I want to assign seats for all passengers on all segments, I have to send several requests?
Ideally, I would like to have the seats for all passengers in all segments automatically assigned after reading the PNR. Is that possible through Web Services?
Checking the <PassengerDetailsRQ> XML Schema definition, an <AirSeatRQ> can be sent along. I guess you can perform a standalone <AirSeatRQ> request, but bundling it with the passenger details is easier and saves us from making extra requests to Sabre's API.
You have to send a <Seat> request for each passenger in each segment of the itinerary. This is a working example I did for a two-leg itinerary, each leg consisting of two segments, for two adults:
I'm omitting most of the passenger details properties and focusing on the AirSeat element:
<PassengerDetailsRQ Version="2.3.0">
  <PriceQuoteInfo HaltOnError="true"></PriceQuoteInfo>
  <SpecialReqDetails>
    <AddRemarkRQ>
      <RemarkInfo>
        <Remark Code="H" Type="General">
          <Text>THANK YOU FOR BOOKING MAURICIO CUENCA AIRLINES</Text>
        </Remark>
      </RemarkInfo>
    </AddRemarkRQ>
    <AirSeatRQ>
      <Seats>
        <Seat NameNumber="1.1" Preference="AN" SegmentNumber="1"/>
        <Seat NameNumber="1.2" Preference="AN" SegmentNumber="2"/>
        <Seat NameNumber="1.1" Preference="AN" SegmentNumber="3"/>
        <Seat NameNumber="1.2" Preference="AN" SegmentNumber="4"/>
      </Seats>
    </AirSeatRQ>
    <SpecialServiceRQ HaltOnError="true">
      <SpecialServiceInfo></SpecialServiceInfo>
    </SpecialServiceRQ>
  </SpecialReqDetails>
  <TravelItineraryAddInfoRQ HaltOnError="true">
    <AgencyInfo></AgencyInfo>
    <CustomerInfo></CustomerInfo>
  </TravelItineraryAddInfoRQ>
</PassengerDetailsRQ>
This way, right after the PNR is created, all seats for all passengers in every segment are already assigned and there is no need for further requests asking for seat assignments.
That seems to be the case.
Testing multiple <Seat> elements inside <Seats> returns a schema validation error. Same when using multiple <Seats> elements.
Looks like the only option right now is to send multiple requests, one for each passenger on each segment.
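For illustration, a hedged sketch of that fallback, reusing the standalone request format from the question (the NameNumber and SegmentNumber values are illustrative):
<!-- first request: passenger 1.1 on segment 1 -->
<AirSeatRQ Version="2.0.0">
  <Seats>
    <Seat NameNumber="1.1" Preference="AN" SegmentNumber="1"/>
  </Seats>
</AirSeatRQ>
<!-- then a separate request for passenger 1.2 on segment 1,
     and likewise for every remaining passenger/segment pair -->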

wso2 ESB calling Sequence Mediator in a chain

I have the following mediation flow in WSO2 ESB for a client.
Sequence 1
    Call data service
    Check data availability
    if available
        Get data using data service
        Manipulate data using payload factory
        Iterate based on node
            send data to client
            get response
            create payload based on response to data service
            update database
        end iterate
    end if
end
Similar to sequence 1, I have sequence 2, sequence 3, ... sequence n, calling different data services and different client endpoints. Sequence 1 works correctly, fetching data and updating the database. When the flow goes to sequence 2, I can see in the logs that the contents/messages from sequence 1 turn up in sequence 2, which causes sequence 2 to behave erroneously. My question is: is there a way, like a Java flush() or close(), to clear the message content when moving from sequence 1 to sequence 2 in WSO2 ESB?
Thanks in advance.
Solution 1: You can use the clone mediator to create multiple instances of your message content.
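A minimal sketch of solution 1 (hedged; sequence1 and sequence2 stand for the named sequences from the question):
<clone>
    <target sequence="sequence1"/>
    <target sequence="sequence2"/>
</clone>
Each target receives its own copy of the message, so changes made inside sequence1 are not visible to sequence2.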
Solution 2: Another possibility is to store the initial content (before sequence1 starts) in a property and restore it again before sequence2 starts. Use the enrich mediator for this.
As the clone mediator is not recommended (creates new threads!), I would go for the second solution with the enrich mediator.
<!-- store the initial message content -->
<enrich>
    <source type="body" clone="true"/>
    <target type="property" property="BodyBackup"/>
</enrich>
<sequence key="sequence1"/>
<!-- restore the message content -->
<enrich>
    <source type="property" property="BodyBackup"/>
    <target type="body"/>
</enrich>
<sequence key="sequence2"/>

Holding data processing for incomplete data sets with Mule and a collection-aggregator

I need to collect and process sets of files generated by another organization. For simplicity, say that a set consists of two files, a summary file and a detail file, named like SUM20150701.dat and DTL20150701.dat, which would constitute the set for date 20150701. The issue is that sets need to be processed in order, and the transmission of files from the outside organization can be error prone, so a file may be missing. If this occurs, that set of files should hold, as should any following sets that are found. As an example, at the start of the Mule process, the source folder may contain SUM20150701.dat, SUM20150703.dat, and DTL20150703.dat. That is, the data set for 20150701 is incomplete while 20150703 is complete. I need to have both data sets hold until DTL20150701.dat arrives, then process them in order.
In this simplified form of my Mule process, a source folder is watched for files. When found, they are moved to an archive folder and passed to the collection-aggregator, using the date as the sequence and correlation values. When a set is complete, it is moved to a destination folder. A lengthy timeout is used on the collector to make sure incomplete sets are not processed:
<file:connector name="File" autoDelete="false" streaming="false" validateConnections="true" doc:name="File">
<file:expression-filename-parser />
</file:connector>
<file:connector name="File1" autoDelete="false" outputAppend="true" streaming="false" validateConnections="true" doc:name="File" />
<vm:connector name="VM" validateConnections="true" doc:name="VM">
<receiver-threading-profile maxThreadsActive="1"></receiver-threading-profile>
</vm:connector>
<flow name="fileaggreFlow2" doc:name="fileaggreFlow2">
<file:inbound-endpoint path="G:\SourceDir" moveToDirectory="g:\SourceDir\Archive" connector-ref="File1" doc:name="get-working-files"
responseTimeout="10000" pollingFrequency="5000" fileAge="600000" >
<file:filename-regex-filter pattern="DTL(.*).dat|SUM(.*).dat" caseSensitive="false"/>
</file:inbound-endpoint>
<message-properties-transformer overwrite="true" doc:name="Message Properties">
<add-message-property key="MULE_CORRELATION_ID" value="#[message.inboundProperties.originalFilename.substring(5, message.inboundProperties.originalFilename.lastIndexOf('.'))]"/>
<add-message-property key="MULE_CORRELATION_GROUP_SIZE" value="2"/>
<add-message-property key="MULE_CORRELATION_SEQUENCE" value="#[message.inboundProperties.originalFilename.substring(5, message.inboundProperties.originalFilename.lastIndexOf('.'))]"/>
</message-properties-transformer>
<vm:outbound-endpoint exchange-pattern="one-way" path="Merge" doc:name="VM" connector-ref="VM"/>
</flow>
<flow name="fileaggreFlow1" doc:name="fileaggreFlow1" processingStrategy="synchronous">
<vm:inbound-endpoint exchange-pattern="one-way" path="Merge" doc:name="VM" connector-ref="VM"/>
<processor-chain doc:name="Processor Chain">
<collection-aggregator timeout="1000000" failOnTimeout="true" doc:name="Collection Aggregator"/>
<foreach doc:name="For Each">
<file:outbound-endpoint path="G:\DestDir1" outputPattern="#[function:datestamp:yyyyMMdd.HHmmss].#[message.inboundProperties.originalFilename]" responseTimeout="10000" connector-ref="File1" doc:name="Destination"/>
</foreach>
</processor-chain>
</flow>
This correctly processes sets found in order if all sets are complete. It correctly waits for incomplete sets to fill, but it does not hold the following sets; that is, in the above example, set 20150703 will be processed while 20150701 is still waiting for its DTL file.
Is there a setting or another construct which will force the collection-aggregator element to wait if there is an earlier collection which is not complete?
I am using the date part of the file name for both the correlation and sequence IDs, which does ensure that sets are processed in the order I want when all sets are complete. It does not matter if some dates are absent (as with 20150702 in this case); only that existing files are processed in order and that sets are complete.
In the end, I could not get the collection-aggregator to do this. To overcome it, I built a Java class that contains a Map each for the SUM and DTL files, keyed by correlation ID, plus a sorted list of open keys.
The Java class monitors for a completed set on the smallest key and signals back to the Mule flow when that set is available for processing.
The Mule flow must be put into synchronous mode while processing the files to prevent a data race. When processing is complete, the flow signals the Java class that the set can be dropped from the list/Maps, and receives an indication back as to whether the next set is ready to process.
It is not the prettiest, and I would have preferred not to use custom features for this, but it gets the job done.
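As a rough sketch of how such a gatekeeper could plug into the flow (hedged; com.example.FileSetGatekeeper is a hypothetical name standing in for the custom Java class described above, and the endpoints are the ones from the question):
<flow name="gatekeeperFlow" processingStrategy="synchronous" doc:name="gatekeeperFlow">
    <vm:inbound-endpoint exchange-pattern="one-way" path="Merge" connector-ref="VM" doc:name="VM"/>
    <!-- hypothetical custom component: records the incoming file and releases
         the oldest SUM/DTL set only once both of its files have arrived -->
    <component class="com.example.FileSetGatekeeper" doc:name="Gatekeeper"/>
    <foreach doc:name="For Each">
        <file:outbound-endpoint path="G:\DestDir1" outputPattern="#[function:datestamp:yyyyMMdd.HHmmss].#[message.inboundProperties.originalFilename]" connector-ref="File1" doc:name="Destination"/>
    </foreach>
</flow>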

Voiceglue logger says "Maximum loop count exceeded. There is probably an infinite loop of <goto>s in your VXML document"

Can anyone please explain why this is happening? What are the possible errors being counted, given that I have set maxerrorcount = 3?
EROR OPEN_VXI luke---- callid=[68] |1098905920|68|CRITICAL|com.vocalocity.vxi|216|VXIinterpreterRun: Maximum loop count exceeded. There is probably an infinite loop of <goto>s in your VXML document.|URL
Please let me know if any further details are required.
Perhaps the "infinite loop" means calling the same form again and again, with no caller-input item (a menu, field, or record form) inside the loop.
For example:
<form id="errorForm"> <!-- loop start -->
  <block>
    <!-- something -->
  </block>
  <block>
    <goto next="#errorForm"/> <!-- loop end -->
  </block>
</form>
Bladean's answer is probably the correct one, but there is an alternate possibility. If an application is structured with looping logic that cycles through the same form or page as it processes data (e.g. a long list), you can trigger these types of checks. I have had to increase a similar loop counter for some applications on another platform.
Voice browsers all have infinite loop detection to save them from pitfalls.
It could be something as simple as the "goto where I came from" case within the same VXML document, as in the example provided here by Bladean Mericle.
It could also be buried deeper in a global catch that routes calls to a catch-all sub-application, which in turn brings the flow back to the originating dialog.
Infinite loops will definitely never work in VXML.
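A hedged sketch of that buried-catch case (the form and event names are illustrative): a document-level catch that unconditionally routes back to the dialog that raised the event will re-enter it forever, with no caller input to break the cycle.
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <!-- document-level catch: every error goes straight back to main -->
  <catch event="error">
    <goto next="#main"/>
  </catch>
  <form id="main">
    <block>
      <!-- anything that throws error here restarts the form via the catch -->
      <throw event="error"/>
    </block>
  </form>
</vxml>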
