In general everything is fine, but entries like the following are piling up very quickly in "ant-media-server.log":
2021-05-08 12:14:08,756 [Thread-89] INFO i.a.streamsource.StreamFetcher - last dts4156022956 is bigger than incoming dts 4236715763
2021-05-08 12:14:08,756 [Thread-89] INFO i.a.streamsource.StreamFetcher - dts (4236715764) is bigger than pts(4156022956)
The file grows by a few hundred megabytes in a very short time.
How can I solve this?
There are a couple of ways to do that.
Change the log level to WARN in the web panel and click the Save button.
If you want to keep the log level at INFO but you still don't want these messages, open conf/logback.xml and add the following line alongside the other <logger> entries:
<logger name="io.antmedia.streamsource.StreamFetcher" level="WARN" />
The file should then look something like this:
....
<logger name="org.apache.jasper.servlet" level="${logLevel}" />
<logger name="ch.qos" level="${logLevel}" />
<logger name="com.antstreaming" level="${logLevel}"/>
<logger name="org.quartz" level="${logLevel}"/>
<logger name="org.apache.catalina.core.ApplicationContext" level="${logLevel}" />
<logger name="io.antmedia.streamsource.StreamFetcher" level="WARN" />
I am trying Tsung for the first time; however, I need some clarification.
I am using the load tag like this:
<load>
<arrivalphase phase="1" duration="1" unit="minute">
<users maxnumber="100000" interarrival="0.01" unit="second"/>
</arrivalphase>
</load>
But how does the for loop below work?
<sessions>
<session name="root" probability="100" type="ts_http">
<for from="1" to="2" var="i">
<request>
<http url="/test/counter" method="POST" contents="bla=blu&amp;name=glop">
</http>
</request>
</for>
</session>
</sessions>
What I thought is that the loop would count from 1 to 2, thus sending only two requests; however, when I run the XML file, I get hundreds of requests! Does this mean that each user in the arrival phase sends two requests, as in the for loop above?
Can someone explain the relation between the for tag and the load tag in the above example?
Your analysis is right: during the first minute of the test you create 100 users per second, and each user sends two requests, as in the for loop above.
The load tag defines the rules by which Tsung generates users; the session tag defines the logic that every user performs.
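To put rough numbers on it (assuming the phase runs for its full one-minute duration and the maxnumber cap of 100000 is never reached):
1 user every 0.01 s = 100 users per second
100 users/s * 60 s = 6000 users in the phase
6000 users * 2 requests each (the for loop) = 12000 requests in total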
I'm trying to understand why an Oozie 4.2 based coordinator job, which should wait for a dataset, fires multiple times. My coordinator job looks like this:
<coordinator-app name="ConfirmDataMasterTrigger"
frequency="${frequencyMins}"
start="${startTime}"
end="${endTime}"
timezone="${timeZoneDef}"
xmlns="uri:oozie:coordinator:0.4"
xmlns:sla="uri:oozie:sla:0.2">
<controls>
<timeout>${TimeOutMins}</timeout>
<concurrency>${Concurrency}</concurrency>
<execution>${Execution}</execution>
</controls>
<datasets>
<dataset name="inputDS"
frequency="${coord:days(1)}"
initial-instance="${startTime}"
timezone="${timeZoneDef}">
<uri-template>${triggerFileDir}</uri-template>
<done-flag></done-flag>
</dataset>
</datasets>
<input-events>
<data-in name="ConfirmDataMasterTrigInput"
dataset="inputDS">
<instance>${coord:current(0)}</instance>
</data-in>
</input-events>
<action>
<workflow>
<app-path>${workflowAppPath}</app-path>
<configuration>
<property>
<name>SaveDateString</name>
<value>${coord:formatTime(coord:actualTime(),"-yyyyMMdd-HHmmss")}</value>
</property>
<property>
<name>WaitForThisInputData</name>
<value>${coord:dataIn('ConfirmDataMasterTrigInput')}</value>
</property>
</configuration>
</workflow>
</action>
With a properties file that looks like this
nameNode=hdfs://hc1m1.nec.co.nz:8020
jobTracker=hc1r1m2.nec.co.nz:8050
hdfsUser=oozie
wfProject=ConfirmDataMaster
oozie.libpath=${nameNode}/user/oozie/share/lib
oozie.use.system.libpath=true
oozie.wf.rerun.failnodes=true
moveFile=ConfirmDataMaster_edit.csv
sourceDir=${nameNode}/mule/sheets/input/ConfirmDataMaster/
targetDir=/mule/sheets/store/
sourceFile=${sourceDir}${moveFile}
targetFile=${targetDir}${moveFile}
frequencyMins=10
startTime=2016-07-31T12:00Z
endTime=2099-01-01T12:00Z
timeZoneDef=GMT+12:00
TimeOutMins=10
Concurrency=1
Execution=FIFO
triggerDir=trigger/
triggerFileDir=${sourceDir}${triggerDir}
doneFlag=trigger.dat
workflowAppPath=${nameNode}/user/${hdfsUser}/wf/${wfProject}
oozie.coord.application.path=${nameNode}/user/${hdfsUser}/wf/${wfProject}
I am not having a problem in getting a workflow to be triggered by a coordinator, given a dataset-based event. What I am seeing is that the underlying workflow is continuously triggered. Can anyone advise what changes I should make, or where my error is? Obviously my workflow cleans up and deletes the trigger path. Thanks in advance.
I'll answer my own question, because I've worked out the solution and it's fairly obvious really; I was just a little confused. The firing frequency is controlled by the coordinator and dataset frequencies, together with the trigger directory and file. If you don't want a trigger file, leave done-flag empty; if the element is not added at all, the default flag file is _SUCCESS.
So, as long as the trigger is available, the workflow will fire at the specified frequencies. I have therefore changed my coordinator and dataset frequencies to 30 minutes, and as a final task my workflow removes the trigger.
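For illustration, the dataset definition might end up looking something like this after the change (with frequencyMins set to 30 in the properties file to drive the coordinator frequency; naming the done-flag explicitly via the existing doneFlag property is an assumption here, since leaving it empty and relying on the trigger directory also works):
<dataset name="inputDS"
         frequency="${coord:minutes(30)}"
         initial-instance="${startTime}"
         timezone="${timeZoneDef}">
    <uri-template>${triggerFileDir}</uri-template>
    <!-- assumption: name the flag file so the instance only counts as "done" once trigger.dat exists -->
    <done-flag>${doneFlag}</done-flag>
</dataset>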
I need to collect and process sets of files generated by another organization. For simplicity, say that the set consists of two files, a summary file and a detail file named like: SUM20150701.dat and DTL20150701.dat, which would constitute a set for date 20150701. The issue is, sets need to be processed in order, and the transmission of files from an outside organization can be error prone such that a file may be missing. If this occurs, this set of files should hold, as should any following sets that are found. As example, at the start of the mule process, the source folder may have in it: SUM20150701.dat, SUM20150703.dat, DTL20150703.dat. That is, the data set for 20150701 is incomplete while 20150703 is complete. I need to have both data sets hold until DTL20150701.dat arrives, then process them in order.
In this simplified form of my Mule process, a source folder is watched for files. When found, they are moved to an archive folder and passed to the collection-aggregator, using the date as the sequence and correlation values. When a set is complete, it is moved to a destination folder. A lengthy timeout is used on the collector to make sure incomplete sets are not processed:
<file:connector name="File" autoDelete="false" streaming="false" validateConnections="true" doc:name="File">
<file:expression-filename-parser />
</file:connector>
<file:connector name="File1" autoDelete="false" outputAppend="true" streaming="false" validateConnections="true" doc:name="File" />
<vm:connector name="VM" validateConnections="true" doc:name="VM">
<receiver-threading-profile maxThreadsActive="1"></receiver-threading-profile>
</vm:connector>
<flow name="fileaggreFlow2" doc:name="fileaggreFlow2">
<file:inbound-endpoint path="G:\SourceDir" moveToDirectory="g:\SourceDir\Archive" connector-ref="File1" doc:name="get-working-files"
responseTimeout="10000" pollingFrequency="5000" fileAge="600000" >
<file:filename-regex-filter pattern="DTL(.*).dat|SUM(.*).dat" caseSensitive="false"/>
</file:inbound-endpoint>
<message-properties-transformer overwrite="true" doc:name="Message Properties">
<add-message-property key="MULE_CORRELATION_ID" value="#[message.inboundProperties.originalFilename.substring(5, message.inboundProperties.originalFilename.lastIndexOf('.'))]"/>
<add-message-property key="MULE_CORRELATION_GROUP_SIZE" value="2"/>
<add-message-property key="MULE_CORRELATION_SEQUENCE" value="#[message.inboundProperties.originalFilename.substring(5, message.inboundProperties.originalFilename.lastIndexOf('.'))]"/>
</message-properties-transformer>
<vm:outbound-endpoint exchange-pattern="one-way" path="Merge" doc:name="VM" connector-ref="VM"/>
</flow>
<flow name="fileaggreFlow1" doc:name="fileaggreFlow1" processingStrategy="synchronous">
<vm:inbound-endpoint exchange-pattern="one-way" path="Merge" doc:name="VM" connector-ref="VM"/>
<processor-chain doc:name="Processor Chain">
<collection-aggregator timeout="1000000" failOnTimeout="true" doc:name="Collection Aggregator"/>
<foreach doc:name="For Each">
<file:outbound-endpoint path="G:\DestDir1" outputPattern="#[function:datestamp:yyyyMMdd.HHmmss].#[message.inboundProperties.originalFilename]" responseTimeout="10000" connector-ref="File1" doc:name="Destination"/>
</foreach>
</processor-chain>
</flow>
This correctly processes sets found in order when all sets are complete. It also correctly waits for incomplete sets to fill, but it does not hold the following sets; that is, in the example above, set 20150703 will be processed while 20150701 is still waiting for its DTL file.
Is there a setting or another construct which will force the collection-aggregator element to wait if there is an earlier collection which is not complete?
I am using the date part of the file name for both the correlation and sequence IDs, which does ensure that sets are processed in the order I want when all sets are complete. It does not matter if some dates are missing (as with 20150702 in this case), only that the files which do exist are processed in order and that sets are complete.
In the end, I could not get the Collection Aggregator to do this. To overcome it, I built a Java class which contains Maps for the SUM and DTL files, keyed by the correlation ID, plus a sorted list of open keys.
The Java class monitors for a completed set on the smallest key and signals back to the Mule flow when that set is available for processing.
The Mule flow must be put into synchronous mode while processing the files to prevent a data race. When it finishes, it signals the Java class that processing is complete, so the set of data can be dropped from the list/maps, and it receives an indication back as to whether the next set is ready to process. A rough sketch of how this could be wired into the flow is at the end of this answer.
It is not the prettiest, and I would have preferred to not have used custom features for this, but it gets the job done.
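For illustration only, the wiring might look something like the sketch below. The class name com.example.SetSequencer is hypothetical; the real component holds the SUM/DTL maps and the sorted list of open keys described above:
<flow name="fileaggreFlow1" doc:name="fileaggreFlow1" processingStrategy="synchronous">
    <vm:inbound-endpoint exchange-pattern="one-way" path="Merge" connector-ref="VM" doc:name="VM"/>
    <!-- hypothetical custom component: registers the incoming file under its correlation ID and
         returns the files of the oldest complete set, or an empty list if none is ready yet -->
    <component class="com.example.SetSequencer" doc:name="Set Sequencer"/>
    <foreach doc:name="For Each">
        <file:outbound-endpoint path="G:\DestDir1" connector-ref="File1" doc:name="Destination"/>
    </foreach>
</flow>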
Suppose I have this target defined:
<TargetEndpoint name="default">
<PreFlow name="PreFlow">
<Request/>
<Response/>
</PreFlow>
<HTTPTargetConnection>
<URL>http://host1.example.com:39282</URL>
</HTTPTargetConnection>
<PostFlow name="PostFlow">
<Request/>
<Response/>
</PostFlow>
<FaultRules>
....
</FaultRules>
</TargetEndpoint>
How do I catch target errors like "Connection refused", which would occur when the target is not listening on the given port, or "Host not reachable", which would occur if there is no route to the given host or if the DNS name is not resolvable?
Basically I want the recipe for how to specify the FaultRule that would be placed inside the FaultRules element above.
<FaultRules>
<FaultRule name="catch1">
<Condition>WHAT GOES HERE???</Condition> <!--- ??? --->
<Step>
<Name>AssignMessage-1</Name>
</Step>
</FaultRule>
<FaultRule name="catch2">
<Step>
<Name>AssignMessage-2</Name>
</Step>
</FaultRule>
</FaultRules>
You can use the fault.name variable to check the error (http://apigee.com/docs/api-services/content/policy-attachment-and-enforcement#policy-based-fault-handling)
For example:
<TargetEndpoint name="default">
<FaultRules>
<FaultRule name="bad_network">
<Condition>(fault.name = "ServiceUnavailable")</Condition>
...
The way I got to 'ServiceUnavailable' as the fault name was by first having the FaultRule with no conditions and then trying a few 'bad target' scenarios: right address/wrong port and wrong address/name both generate the same fault name and can be caught using the above snippet.
To see the error name, you just need to add an AssignMessage policy with a step that reads the fault.name variable; in the trace it will show up in the 'Variables Got' section, or you can assign it to the payload of your response.
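For example, a minimal AssignMessage policy along these lines (the policy name here is just an example) would copy the fault name into the response payload:
<AssignMessage async="false" continueOnError="false" enabled="true" name="AssignMessage-ShowFault">
    <!-- write the fault name into the response body so the fault that fired is easy to spot -->
    <Set>
        <Payload contentType="text/plain">fault.name = {fault.name}</Payload>
    </Set>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
    <AssignTo createNew="false" transport="http" type="response"/>
</AssignMessage>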
Once you have captured all the faults you want to handle, you can go back and modify the proxy's FaultRules section.
One last note: the FaultRules section above must be in the TargetEndpoint for it to catch network errors on the target side.
Cheers,
Ricardo
Yeah, it's working! It is important where you put the FaultRules: in the TargetEndpoint. I was wrongly setting them in the ProxyEndpoint.
My example:
<TargetEndpoint name="TargetEndpoint">
<Description/>
<FaultRules>
<FaultRule name="HTTPError">
<Step>
<Name>Raise-Fault-5xx</Name>
<Condition>message.status.code = 503</Condition>
</Step>
</FaultRule>
</FaultRules>
...
I'm trying to create what I think should be a relatively simple business rule to operate over repeating elements in an XML schema.
Consider the following XML snippet (this is simplified with namespaces removed, for readability):
<Root>
<AllAccounts>
<Account id="1" currentPayment="10.00" arrearsAmount="25.00">
<AllCustomers>
<Customer id="20" primary="true" canSelfServe="false" />
<Customer id="21" primary="false" canSelfServe="false" />
</AllCustomers>
</Account>
<Account id="2" currentPayment="10.00" arrearsAmount="15.00">
<AllCustomers>
<Customer id="30" primary="true" canSelfServe="false" />
<Customer id="31" primary="false" canSelfServe="false" />
</AllCustomers>
</Account>
</AllAccounts>
</Root>
What I want to do is to have two rules:
Set /Root/AllAccounts/Account[x]/AllCustomers/Customer[primary='true']/canSelfServe
= true IF arrearsAmount < currentPayment
Set /Root/AllAccounts/Account[x]/AllCustomers/Customer[primary='true']/canSelfServe
= false IF arrearsAmount >= currentPayment
Where [x] is 0...number of /Root/AllAccounts/Account records present in the XML.
I've tried two simple rules for this, and each rule seems to fire x * x times, where x is the number of Account records in the XML. I only want each rule to fire once for each Account record.
Any help greatly appreciated!
Thanks
Andrew
Make sure that the rules have the same Priority, just in case (I have had issues with priorities before). I've also seen that at the Rules level there is a property called Maximum Execution Loop Depth, which controls how many times a rule can be re-evaluated. Try setting it to 1 if you're sure your rules should only be evaluated once per payload. I hope this helps.
Check your predicate. The rule fires once for each matching combination of the fields used in the predicate, so if the condition joins Account and Customer facts without tying the Customer to its parent Account, the engine evaluates every Account/Customer pairing, which would explain the x * x firings; binding them together in the predicate should bring it back to once per Account.