Units of Work and Backout in DataPower - ibm-datapower

I have set the configuration as below:
Units of Work: 1
Automatic Backout: on
Backout Threshold: 3
Backout Queue Name: a queue name is given.
According to these settings, since the threshold value is 3, should there be 4 transactions in the probe in case of a failure?
Can you please confirm?
Thanks
Vathsa

No, only one, as it is the same transaction in DataPower but three transport retries.

Related

Rebalance issue with spring kafka max.poll.interval.ms, max.poll.records and idleTimeBetweenPolls

I am seeing continuous rebalancing in my application. The application consumes in batch mode, and here are the configuration properties that have been added.
myapp.consumer.group.id=cg-id-local
myapp.changefeed.topic=test_topic
myapp.auto.offset.reset=latest
myapp.enable.auto.commit=false
myapp.max.poll.interval.ms=300000
myapp.max.poll.records=20000
myapp.idle.time.between.polls=240000
myapp.concurrency=10
container factory:
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory(poSummaryCGID));
factory.setConcurrency(poSummNoOfConsumers); // 10, one consumer per partition
factory.setBatchListener(true); // the listener receives a whole batch per poll
factory.setAckDiscarded(true);
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
factory.getContainerProperties().setIdleBetweenPolls(idleTimeBetweenPolls); // 240000 ms from the properties above
I have a few questions here:
I have set the maximum record count per poll (every 4 minutes) to 20000, and we have 10 partitions in the TOPIC. Since I set the concurrency to 10, 10 consumers will be up and running and each will listen to 1 partition. My question here is: will the record count be split across all the consumers, so that each consumer handles 2000 records per poll?
max.poll.interval.ms has been set to 5 minutes. I am sure the consumer will process its 2000 records (if my understanding above is correct) within the poll interval (4 minutes), which is less than the max.poll.interval.ms upper bound. But I am not sure why rebalancing is happening. Are there any other configuration properties I need to set?
Help would be greatly appreciated!!
Tried with these configurations:

myapp.max.poll.interval.ms=600000
myapp.max.poll.records=2000
myapp.idle.time.between.polls=360000

myapp.max.poll.interval.ms=300000
myapp.max.poll.records=2000
myapp.idle.time.between.polls=300000

myapp.max.poll.interval.ms=300000
myapp.max.poll.records=2000
myapp.idle.time.between.polls=180000
EDIT FIX:
We should always ensure that
myapp.max.poll.interval.ms > myapp.idle.time.between.polls + (time needed to process myapp.max.poll.records).
No. max.poll.records is per consumer, not per topic or container.
If you have concurrency=10 and 10 partitions you should reduce max.poll.records to 2000 so that each consumer gets a max of 2000 per poll.
The container will automatically reduce the idle time between polls so that max.poll.interval.ms won't be exceeded, but you should be conservative with these properties (max.poll.records and max.poll.interval.ms) so that it is never possible to exceed the interval.
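As a concrete illustration of the advice above, here is a minimal sketch of a container factory wired so that max.poll.interval.ms leaves headroom over idle.time.between.polls plus batch processing time. The bean method name, the localhost:9092 broker, and the exact values are illustrative assumptions, not the poster's actual configuration.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "cg-id-local");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2000);        // per consumer, not per topic
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);  // headroom above idle time + batch processing time
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    ConsumerFactory<String, String> consumerFactory = new DefaultKafkaConsumerFactory<>(props);

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setConcurrency(10);                                     // one consumer per partition
    factory.setBatchListener(true);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setIdleBetweenPolls(240000L);  // 4 minutes, well under max.poll.interval.ms
    return factory;
}

With 10 partitions and concurrency 10, each consumer then fetches at most 2000 records per poll, and the interval check cannot be tripped as long as a 2000-record batch finishes within max.poll.interval.ms minus the idle time between polls.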

How to get actual number of channel in asterisk 13?

In Asterisk 1.4 the channel number was included in chan->name.
e.g. channel number 62:
Asterisk 1.4 ZAPTEL: Zap/62-1
How do I get the actual channel number in Asterisk 13 in the C language?
e.g. in chan->name only the span number appears:
Asterisk 13 DAHDI: DAHDI/I2/102-1
Here is what R. Mudgett says about extensions.conf:
You can use the AMI action DAHDIShowChannels to get the current channel mapping.
There is an AMI event that you can look for:
Event: DAHDIChannel
Channel: name
Uniqueid: id
DAHDISpan: 5
DAHDIChannel: 23
It is generated whenever a call is assigned to a B channel or a call moves to a different B channel.
There is also the CHANNEL() dialplan function:
CHANNEL(dahdi_channel)
CHANNEL(dahdi_span)
CHANNEL(dahdi_type)
The DAHDIChannel event and CHANNEL() function are mentioned in the UPGRADE.txt file.
Richard
But how do I get the actual channel number via the C-language API?
The simplest way to answer this question is to read the source code (written in C/C++) of chan_dahdi and see how the dahdi_channel variable is set in YOUR dahdi/asterisk combination.
You can also use AMI from C/C++, but that is not optimal.
In general you should not expect to see the channel number in the channel name unless you have set up one channel per span.
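Building on the CHANNEL(dahdi_channel) function quoted above, one way to read it from C inside an Asterisk 13 module is via ast_func_read(). This is only a sketch: it assumes you already hold a valid struct ast_channel *, and the helper name read_dahdi_channel is made up for illustration.

#include "asterisk.h"

#include <stdlib.h>

#include "asterisk/channel.h"
#include "asterisk/pbx.h"

/* Sketch: ask chan_dahdi for the channel number through the CHANNEL()
 * dialplan function. Returns the channel number, or -1 on failure. */
static int read_dahdi_channel(struct ast_channel *chan)
{
    char buf[32] = "";

    /* ast_func_read() evaluates a dialplan function against a channel. */
    if (ast_func_read(chan, "CHANNEL(dahdi_channel)", buf, sizeof(buf)) || !*buf) {
        return -1; /* not a DAHDI channel, or the read failed */
    }

    return atoi(buf);
}

The same approach should work for CHANNEL(dahdi_span) and CHANNEL(dahdi_type), matching the dialplan usage mentioned in UPGRADE.txt.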

BizTalk error "Error: 1 (Field level error) Position in Field: 2 Data Value: 0.0000"

I am getting the following error "Error: 1 (Field level error) SegmentID: MOA Position in TS: 13 Data Element ID: C51602 Position in Segment: 2 Position in Field: 2 Data Value: 0.0000"
I tried using the option in agreement tab allowing Leading Zeros (see below)
Is there anything else I could try? I am really stuck here.
To be clear about this, your trading partner is technically sending you invalid EDI. The guidance is specifically that non-significant digits be suppressed so "0.0000" is out of compliance.
So, the correct way to resolve this is to contact your trading partner and have them correct their output.
If they are unable or unwilling to do this, then yes, you have to disable this rule in the Agreement.
Note, to be sure you're using the correct Agreement, you should disable the Fallback Settings.
I solved this issue by going to the EDI Fallback Settings and, under Validation, setting the leading and trailing zeroes policy to Allowed.
There are two types of settings: EDI Fallback and X12 Fallback. We need to enable the option in both of them.

How to get Graphite to simply count counters, not time-rate them

I'm using Graphite and Collectd to monitor my server. In particular, I'm using the tail plugin to count failed SSH logins. I'm using a counter for this metric, so I expect to see 1, 2, 3, 0, etc. for data points. However, what I'm seeing is 0.1, 0.2, 0.3, 0, etc. It seems to me like Graphite is providing counts per second. I say this because my retention policy is one data point every 10 seconds for two hours, so 1 failed login per 10 seconds = 0.1 per second. I'm looking at this in a graph. It looks like this:
Furthermore, when I scale out to the next retention level, the numbers get adjusted accordingly: so 1 failed login which was shown as 0.1 is now shown as much less than this: 0.017 or something.
I don't think this is related to the aggregation method used: even the finest data is off. How can I get Graphite to treat this metric as a pure, raw, counter?
Here's my storage-schemas.conf (the retention policy):
[my_server]
pattern = .*
retentions = 10s:2h,1m:2d,30m:400d
Here's my configuration of the collectd tail plugin:
<Plugin "tail">
<File "/var/log/auth.log">
Instance "auth"
<Match>
Regex "sshd[^:]*: Failed password"
DSType "CounterInc"
Type "counter"
Instance "sshd-invalid_user"
</Match>
</File>
</Plugin>
And here's my configuration of the write_graphite plugin (which sends data to Graphite):
<Plugin write_graphite>
<Node "my_server_name">
Host "localhost"
Port "2003"
Protocol "tcp"
LogSendErrors true
Prefix "collectd."
#Postfix ""
StoreRates true
AlwaysAppendDS false
EscapeCharacter "_"
</Node>
</Plugin>
I tried setting StoreRates false for the write_graphite plugin, but this didn't work. It did change the behaviour: when I performed a single failed SSH login, that metric showed as 1. However, it didn't drop back down to 0. When I performed two more failed logins, the metric popped up to 3.
Also of interest: I've also loaded the users plugin, which simply shows the number of users logged in, and it's working great: it shows 1 when I SSH in, 2 when I SSH in again, and back to 1 when I exit one SSH session. This holds for both settings of StoreRates. So it seems like what I want is possible somehow. Maybe not with the tail plugin though.
The SSH logins with StoreRates false along with correct behaviour for Users Logged in can be seen in these graphs:
Any ideas? Thanks,
You are asking the system to count the number of events. And this is exactly what it's doing: it's counting the number of failed logins since its startup. Whether you're using StoreRates or not simply changes the way that information is displayed: as a rate or as the raw counter. A counter may never decrease! What you're actually asking for is a counter that resets itself upon reading: count the number of failed logins since the last time collectd checked.
As it happens the ABSOLUTE data source type in rrdtool can be used to achieve this, but that won't help you.
Step back, and think about what you're trying to achieve: the number of failed logins per second seems to me like a perfectly sane metric!
Although swissunix's answer is very helpful, to achieve the behaviour I was looking for, I ended up using Logster instead of Collectd. With Logster, you write the bit of code that parses the file as well as the bit that returns the metric. So although dividing a count by the time is common with Logster, you don't have to do this if you don't want to: there's lots of flexibility.
I've put my parsers here: https://github.com/camlee/logster-parsers
If you set StoreRates to false, then in Graphite you can apply the derivative function to the ever-increasing counter to get the rate of increase per retention interval, which would match your requirement.
E.g. in your example of reporting 1 failed login, then 2, you saw the values 1 and 3. The derivative is 1 and 2: the failed logins per interval that Graphite tracks.
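For example, with the collectd prefix, node name, and tail instances shown above, a render target along these lines should plot the per-interval count; the exact metric path is an assumption, so check what actually appears in your metric tree:

derivative(collectd.my_server_name.tail-auth.counter-sshd-invalid_user)

Graphite also ships nonNegativeDerivative(), which behaves the same way but suppresses the negative spike you would otherwise see when the counter resets (e.g. after a collectd restart).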

Do UNIX message queues maintain order of messages?

If, under UNIX/Linux/BSD/OSX, I use this sequence of APIs in Application A:
msgq_id = mq_open( full_queue_name,
O_RDWR | O_CREAT,
S_IRWXU | S_IRWXG,
&msgq_attr);
mq_send(msgq_id, ptrData1, len1, 0);
mq_send(msgq_id, ptrData2, len2, 0);
...
and this sequence of events in Application B:
mqd_t open_res = mq_open(full_queue_name, O_RDONLY);
...
mq_receive(...)
mq_receive(...)
... do I have a guarantee that the message queue maintains the order of the messages?
That is, that Application B will receive the data from ptrData1 first, and then the data from ptrData2?
From man mq_send on Linux (emphasis added):
The msg_prio argument is a non-negative integer that specifies the priority of this message. Messages are placed on the queue in decreasing order of priority, with newer messages of the same priority being placed after older messages with the same priority.
So yes, you have a guarantee.
You get the message that is the oldest one of the highest priority. So if you send all messages with the same priority, you always receive them in the same order.
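To illustrate the FIFO behaviour for equal priorities, here is a minimal self-contained sketch; the queue name /demo_order and the message contents are made up, and on Linux you may need to link with -lrt:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

int main(void)
{
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 64, .mq_curmsgs = 0 };
    /* Hypothetical queue name for this sketch. */
    mqd_t q = mq_open("/demo_order", O_RDWR | O_CREAT, S_IRWXU | S_IRWXG, &attr);
    if (q == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    /* Both messages use the same priority (0), so they stay in FIFO order. */
    mq_send(q, "first", strlen("first") + 1, 0);
    mq_send(q, "second", strlen("second") + 1, 0);

    char buf[64];
    unsigned int prio;
    mq_receive(q, buf, sizeof(buf), &prio);
    printf("%s\n", buf);  /* prints "first" */
    mq_receive(q, buf, sizeof(buf), &prio);
    printf("%s\n", buf);  /* prints "second" */

    mq_close(q);
    mq_unlink("/demo_order");
    return 0;
}

If you did pass different priorities to mq_send, the higher-priority message would be delivered first regardless of send order, per the man page quoted above.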
