DataPower timeout is in what unit? - ibm-datapower

Can someone please let me know whether the timeout we set is in seconds or milliseconds? The unit is mentioned nowhere in the setting.
The default is 120, but with no unit, and even when I click through to IBM's explanation, no unit is mentioned there either.

Timeout is in seconds.
In general, you are much better off using the documentation at the IBM Knowledge Center instead of relying on the WebGUI help.
http://www-01.ibm.com/support/knowledgecenter/

Related

What exactly is timekey_wait parameter in Fluentd?

I'm new to EFK. I had a problem with logs showing up in Kibana. I have already resolved it, but I'm not sure about my approach.
Problem: Kibana shows logs only after 10 minutes when Elasticsearch is restarted.
I studied https://docs.fluentd.org/configuration/buffer-section
and found that if I set the timekey_wait parameter in the buffer section to 0s, Kibana shows logs without delay.
The problem is resolved, but I'm still concerned about the timekey_wait parameter.
1. Are there other things impacted by the change?
2. Why is timekey_wait needed? Please give me an example of why it is necessary.
Thank you for your time!
1 & 2) According to the documentation, with the timekey_wait parameter Fluentd waits the specified amount of time before writing chunks. This way, delayed log lines that belong in the same chunk are not missed.
If your timekey is 60m and timekey_wait is 10m, the chunks will be written after 70m, not 60m.
If you don't expect delayed log lines to come in, it becomes less important. In one of my implementations I use the flush_interval parameter instead; that way timekey is not needed (buffer chunks are flushed after that interval).
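As a sketch, the interaction of the two parameters looks like this in a buffer section (the match pattern and Elasticsearch host/port here are placeholders, not taken from the question):

```
<match app.**>
  @type elasticsearch
  host localhost
  port 9200
  <buffer time>
    timekey 60m       # group events into hourly chunks
    timekey_wait 10m  # hold each chunk 10 extra minutes for late events
  </buffer>
</match>
```

With this configuration, a chunk covering 10:00-11:00 is flushed at 11:10. Setting timekey_wait to 0s flushes it at 11:00, which removes the delay you observed but risks excluding events that arrive late.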

Why the time-out values are so small?

I'm not sure if this is a purely Stack Overflow-relevant question; it is about general design practice. Since I cannot think of a more relevant Stack Exchange site, I'm posting it here.
A common design practice for converting an async call into a sync one is to use a time-out and wait for the results. While this may not be exactly good practice from the point of view of responsiveness, it definitely makes the implementation easier.
I have seen many such implementations and often noticed that developers tend to give a very small time-out value. I can understand that people may have had a responsive system in mind when they did this. But many of the applications I have seen are data-critical, where losing data is very bad. So it is better to wait longer and get as much data as possible instead of timing out early and giving the user an error message.
The situations where the server fails to return data, or the client cannot reach the server, are rare, and for those I would expect a large time-out. After all, these time-outs don't mean the wait will always last for the full time-out value; the time-out is only an upper limit. So I have always argued for higher values. But I see low values used in more and more places, and now I wonder whether there is something in this practice that I don't understand.
So, my question is: are there any arguments, other than the need for responsiveness, for using a very small time-out when waiting?
As always, the right decision depends on the real-life data.
The timeout should be proportional to the time it usually takes to complete an operation successfully.
Sending a UDP message, for example, could take 1-50 milliseconds, so a timeout of 100 milliseconds is more than reasonable; however, copying a file over the wire could take minutes or more, so a 100 millisecond timeout is laughable.
There are pros and cons to both short and long timeouts, so it's a tradeoff. Longer timeouts use more resources (tasks, threads, memory, etc.) for the same amount of work, while short timeouts, as you mentioned, may result in loss of data.
In conclusion, you need to set a configurable timeout that sounds reasonable, then figure out in production whether you are timing out too many operations (or too few) and calibrate accordingly.
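The "configurable upper limit" idea above can be sketched in Java; the worker here simulates the async operation, and the 200 ms limit is an arbitrary illustration value, not a recommendation:

```java
import java.util.concurrent.*;

public class TimeoutDemo {
    // The timeout is the knob to calibrate; 200 ms is an arbitrary example.
    static final long TIMEOUT_MS = 200;

    static String fetch(long workMs) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> f = pool.submit(() -> {
            Thread.sleep(workMs); // stand-in for the real async operation
            return "data";
        });
        try {
            // Wait at most TIMEOUT_MS; a fast operation returns sooner,
            // so the timeout is only an upper limit, as the question notes.
            return f.get(TIMEOUT_MS, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true); // give up rather than hold resources forever
            return "timed out";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetch(50));   // completes within the timeout
        System.out.println(fetch(1000)); // exceeds the timeout
    }
}
```

Note that the fast call is not delayed by the large limit, which is the argument for generous timeouts on data-critical paths.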

Detect session hang and kill it

I have an ASP.NET page that runs a certain algorithm and returns its output. I was wondering what will happen, and how to handle the case, where the algorithm goes into an infinite loop due to a bug. It will hog the CPU, and other sessions will be served very slowly.
I would love to have a way to tell IIS: if processing Algo.aspx takes more than 5 seconds, kill it, or something like that.
Thanks in advance
There is no such thing in IIS. What you can do instead is perform the work in a background thread, measure the time it takes this background task to complete, and simply kill the thread if the wait is longer than expected.
You may take a look at the WaitHandle.WaitOne method which allows you to specify a timeout for waiting for a particular event to be signaled from a background thread for example.
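The same bounded-wait pattern, sketched here in Java as an analogy (CountDownLatch.await plays the role of WaitHandle.WaitOne; the 5-second limit mirrors the question, and interrupting is a cooperative stand-in for killing the thread):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class BoundedWork {
    // Runs `work` on a background thread; returns true if it finished in time.
    static boolean runWithLimit(Runnable work, long limitMs) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            try {
                work.run();
            } finally {
                done.countDown(); // signal completion, like Set() on a WaitHandle
            }
        });
        worker.setDaemon(true); // don't keep the process alive for an abandoned worker
        worker.start();
        boolean finished = done.await(limitMs, TimeUnit.MILLISECONDS);
        if (!finished) {
            worker.interrupt(); // cooperative "kill"; the loop must check the flag
        }
        return finished;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWithLimit(() -> {}, 5000));        // true
        System.out.println(runWithLimit(() -> {
            while (!Thread.currentThread().isInterrupted()) { }  // runaway loop
        }, 100));                                                // false
    }
}
```

The caveat carries over to .NET: forcibly aborting a thread is unsafe in general, so a cooperative cancellation check inside the algorithm's loop is the robust version of this pattern.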
Set the ScriptTimeout property. It will abort the page if the time is exceeded. Note that it is ignored when debugging is enabled (debug="true" in web.config), though.

VideoDisplay metadataReceived ready totalTime

I'm building a video player that runs FLVs and cannot overcome the fact that Flex sometimes does not fire the metadataReceived event. Sometimes it does and sometimes it does not.
As a result, the total time of the FLV remains -1.
I understand it's a known bug. I have been researching it for a long time now but could not find a good workaround. I found one suggestion to set the buffer time to 0, but that does not work either.
Does anyone have a good workaround for this?
What is your bufferTime property set to when you initialize the player? Try setting it to 0?

Flex: time how long HTTPService takes to load?

I am loading some XML with HTTPService in Flex. It is taking longer than I would like to load. So I want to do some troubleshooting, but in order to tell what is making a difference, I need to be able to time the requests and see how long they are taking.
What is the best way to time an HTTP service to see how long it takes from HTTPService.send() to HTTPService.result?
Thanks!
This is a duplicate, go here to see my previous answer to the question:
In Flex, is there a way to determine how long an HTTPService.send() call takes round-trip?
I would recommend Charles Proxy ( http://www.charlesproxy.com/download/ ). I use it daily as a main tool for monitoring Flex traffic (it even supports AMF). There's a free version with some annoyances (I think it only runs 30 min. each time you run it, and displays a popup from time to time).
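The in-code half of the answer (stamping the clock at send() and again in the result handler) can be sketched generically in Java; the sleeping supplier is a stand-in for the real service call, not part of any Flex API:

```java
import java.util.function.Supplier;

public class Timed {
    // Measures wall-clock time of a call, analogous to recording getTimer()
    // in a send() wrapper and again in the result handler.
    static <T> long timeMs(Supplier<T> call) {
        long start = System.nanoTime();
        call.get();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMs(() -> {
            try { Thread.sleep(30); } catch (InterruptedException e) { }
            return "xml payload"; // stand-in for the HTTPService result
        });
        System.out.println("request took ~" + elapsed + " ms");
    }
}
```

An external proxy like Charles measures the wire time, while this in-code measurement also includes client-side queuing and parsing, so the two numbers usefully bracket where the delay lives.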