How to increase AIR client HTTP request timeout? - apache-flex

There was a similar question asked a long time ago (flex air httpservice stream error and timing out), but I didn't see an answer. I am using AIR SDK 3.1 and need to increase the AIR client request timeout due to long processing time on the server. How do I achieve this?

Request timeout is a server-side setting. You need to set up the timeout on your server. Again, this is server specific; it could be anywhere.
You can also try setting this to a value less than 0. Someone had the same issue here.

Related

Timeout vs no response from server, how can I separate these?

This question is regarding a bot of mine whose primary focus is scraping.
The path is mapped out correctly and it does what it needs to do.
Rate limits have been tested and I am certain they are not a factor; when we did hit them, we received actual responses.
However, the webpage(s) I am trying to scrape seem to have built in a kind of weird, unfamiliar security measure, something I haven't come across before. So here I am, wondering how it works and how to deal with it appropriately.
While the scraper/bot is doing its thing, sending requests and getting responses, at random times it will encounter what I suspect is a security measure: there are simply no responses back from the server, not a 4xx error or anything at all.
At first sight the proxies just appear dead, but that's not it, because they are not. The proxies work just fine, and I can browse the page through them manually with no issues.
The server just stops giving responses.
Now to find a workaround for this, I would need to be able to tell the difference between a timeout (for my proxies) and a no response. They appear the same, but are not.
Does anyone have insight into this problem? Maybe there is a genius way to separate them that I am not aware of.
A timeout means the server does not respond within a specific time. No response means that the server either closes the connection before the timeout occurs, or that it will close the connection after the timeout has occurred without sending anything back.
The first case is easy to detect, because the connection closes before the timeout. If you instead want to detect that the server will close the connection without a response only after your current timeout, then your only option is to extend the timeout. There is nothing from the server that will indicate it is going to close the connection without a response at some future time.
And since your only connection is with the proxy, there is no real way to detect whether the problem is at the proxy or the server. Your only hope might be to set your timeout waiting for the proxy larger than the timeout the proxy has waiting for the server. That way you may get a response from the proxy indicating that the connection to the server timed out.
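A minimal sketch of that distinction at the socket level, in Python with a hypothetical endpoint: a peer that closes the connection shows up as an empty read (detectable), while a peer that simply goes silent raises a timeout that carries no information about why.

```python
import socket

# Hypothetical proxy/server endpoint; substitute the one you are testing.
HOST, PORT = "example.com", 80

sock = socket.create_connection((HOST, PORT), timeout=30)
sock.settimeout(30)  # read timeout, in seconds
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

try:
    data = sock.recv(4096)
    if data == b"":
        # The peer closed the connection before the timeout: detectable.
        print("connection closed by peer without a response")
    else:
        print("got data:", data[:60])
except socket.timeout:
    # No data arrived within 30 s. The socket cannot tell you whether the
    # proxy stalled, the server stalled, or a response was still on its way.
    print("read timed out: timeout and 'no response' look identical here")
finally:
    sock.close()
```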
They appear the same, but are not.
They are the same; there is no difference. A read timeout means that data didn't arrive within the timeout period, for whatever reason. TCP doesn't know, and can't tell you. At the C level, recv() returned -1 with errno == EAGAIN/EWOULDBLOCK. That's all the information there is.
What you are asking is tantamount to 'data didn't arrive: where didn't it arrive from?' It's not a meaningful question.

SignalR long polling is making requests frequently

I'm using SignalR in Mono. It's working fine, but it always uses long polling. I'm still fine with long polling. But as far as I understand long polling, the browser makes a request to the server and the server holds that request; once the server has something to respond with, it sends a response to that request. If the request times out, the client sends another request to the server. Please correct me if my understanding is wrong.
But in my SignalR implementation, my browser is making a request every 15 seconds. I am not sure whether the timeout for SignalR long polling is 15 seconds, and if so, I don't know a way to change it. Or is this not normal behaviour? Please help.
Update 1:
Please find the log entries,
To be precise, it takes exactly 17 seconds for SignalR to make the next request. I can see a 'Long polling complete' message in the logs; I assume it comes after the given request times out. My question is: is there a way to increase this timeout?
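For reference, the long-polling cycle the asker describes, sketched as a generic client loop in Python. This is not SignalR's actual client or wire protocol; the endpoint and the 17-second figure are taken from the question, everything else is a hypothetical illustration of the mechanism.

```python
import requests  # third-party HTTP library

POLL_URL = "http://localhost:8080/poll"  # hypothetical long-poll endpoint
SERVER_HOLD = 17  # seconds; mirrors the ~17 s cycle seen in the logs

def long_poll_loop():
    while True:
        try:
            # The server holds this request open until it either has a
            # message to deliver or its own poll timeout expires.
            resp = requests.get(POLL_URL, timeout=SERVER_HOLD + 5)
            if resp.status_code == 200 and resp.text:
                print("message:", resp.text)
            # An empty 200 typically means the server-side poll completed
            # with nothing to send; re-poll immediately either way.
        except requests.exceptions.Timeout:
            pass  # client-side timeout; re-poll
```

If the server completes the poll after a fixed hold time, the client will re-request on that cadence by design, which is why the interval in the logs is so regular.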

SignalR duplicating responses

I'm using SignalR with Redis as a message bus on a server that sits behind an Nginx proxy for load balancing. I used SignalR's PersistentConnection class to write a simple chat program that broadcasts messages to users belonging to the same certain group. Users are added to a group in OnConnectedAsync, removed in OnDisconnectAsync, and the user-to-group mapping is deterministic.
Currently, the client side falls back to long polling for whatever reason (I'm not entirely sure why). Whenever the client sets up a new connection after waiting for and receiving a response, seemingly at random, the server will sometimes respond to the new connection immediately with the previous response, despite there having been only one POST.
The message IDs tend to differ by exactly one (the smaller ID coming first), with the rest of the response remaining the same. I logged some debug info and am quite positive that my override of OnReceivedAsync is sending one response per request. I tried the same implementation without the Redis message bus and got the same problem. Running locally (with long polling), however, yielded good results, so I suspect the problem might be the way the message bus buffers messages to refresh clients who might not be caught up, combined with some odd timing in the cutting and setting up of connections behind the Nginx load balancer. Beyond that, I am very much at a loss.
Any help would be appreciated.
EDIT: Further investigation reveals that duplication occurs at somewhat regular intervals of approximately 20-30 seconds. I'm led to believe that the message expiration in the message bus might have something to do with the bug.
EDIT: Bug can be seen here: http://tinyurl.com/9q5t3va
The server is simply broadcasting a counter being sent by the client. You will notice some responses are duplicated every 20 seconds or so.
Reducing the number of worker processes in the IIS (6.0) Server Manager from 2 to 1 solved the problem.

Can someone interpret these Apache Bench results? Is there something that stands out?

Below is an Apache Bench run for 10K requests with 50 concurrent threads.
I need help understanding the results. Does anything stand out that might point to something blocking and restricting more requests per second?
I'm looking at the connection time section and see 'waiting' and 'processing'. It shows the mean time for waiting is 208 ms, the mean time to connect is 0, and processing is 208 ms, yet the total is 208 ms. Can someone explain this to me, as it doesn't make much sense?
Connect time is the time it took ab to establish a connection with your server. You are probably running it on the same server or within a LAN, so your connect time is 0.
Processing time is the total time the server took to process the request and send the complete response.
Wait time is the time between sending the request and receiving the first byte of the response.
Again, since you are running on the same server and the file is small, your processing time equals your wait time. The total is connect plus processing (0 + 208 = 208 ms), and because the rest of the response arrives almost instantly after the first byte, waiting accounts for essentially all of the processing time.
For a real benchmark, try ab from multiple points near your target market to get a realistic idea of latency. Right now, practically all the information you have is the wait time.
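A rough sketch of how those three numbers relate, measured by hand with Python sockets against a hypothetical host (ab's own implementation differs): connect is the TCP handshake, waiting is time to first byte, and processing runs until the response is fully read.

```python
import socket
import time

HOST, PORT = "example.com", 80  # hypothetical target

t0 = time.monotonic()
sock = socket.create_connection((HOST, PORT), timeout=30)
t_connect = time.monotonic() - t0            # ab's "Connect"

sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
t1 = time.monotonic()

first = sock.recv(1)                          # block until the first byte
t_wait = time.monotonic() - t1                # ab's "Waiting"

while sock.recv(4096):                        # drain the rest of the response
    pass
t_processing = time.monotonic() - t1          # ab's "Processing"
sock.close()

# Total = Connect + Processing. On a local/LAN run Connect is ~0 and,
# for a small response, Waiting is almost all of Processing.
print(f"connect={t_connect*1000:.0f}ms wait={t_wait*1000:.0f}ms "
      f"processing={t_processing*1000:.0f}ms "
      f"total={(t_connect + t_processing)*1000:.0f}ms")
```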
This question is getting old, but I've run into the same problem, so I might as well contribute an answer.
You might benefit from disabling either the TCP Nagle algorithm on the agent side or delayed ACK on the server side. They can interact badly and cause an unwanted delay; as in my case, that's probably why your minimum time is exactly 200 ms.
I can't confirm this, but my understanding is that the problem is cross-platform, since it's part of the TCP spec. It might only affect quick connections with a small amount of data sent and received, though I've seen reports of issues with larger transfers too. Maybe somebody who knows TCP better can pitch in.
Reference:
http://en.wikipedia.org/wiki/TCP_delayed_acknowledgment#Problems
http://blogs.technet.com/b/nettracer/archive/2013/01/05/tcp-delayed-ack-combined-with-nagle-algorithm-can-badly-impact-communication-performance.aspx
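If you want to test the Nagle side of that interaction, here is a minimal sketch of disabling it on a client socket in Python (hypothetical endpoint); TCP_NODELAY is the standard socket option for this.

```python
import socket

# Hypothetical endpoint; point this at the server under test.
sock = socket.create_connection(("example.com", 80), timeout=30)

# Disable Nagle's algorithm: small writes go out immediately instead of
# being coalesced while waiting for the peer's (possibly delayed) ACK.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print(sock.recv(4096)[:80])
sock.close()
```

If the ~200 ms floor disappears with TCP_NODELAY set, the Nagle/delayed-ACK interaction was the culprit.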

IIS6 HTTP request prioritization

I am submitting POST requests to an external server running IIS6. This is a time-critical request: I want to ensure that my request is processed at a specific time (say 10:00:00 AM), no earlier. And I want to ensure that at that specific time, my request is assigned the highest priority over other requests. Would any of this help:
Sending most of the message a few seconds early and sending the last byte or so a few milliseconds prior to 10:00:00. I'm not sure this will help, as I will be competing with other requests that come in around that time. Will IIS assign a higher priority to my request based on how long I have been connected?
Anything that I can add to the message header to tell the server to queue my request and process only at a specific time?
Any known hacks that I can leverage?
No. HTTP is not a real-time protocol, and it usually runs on top of TCP/IP, which is not a real-time protocol either. While you can get near-real-time behaviour out of such an architecture, it's far from simple; don't take my word for it, go read the source code for xntpd.
Having said that, you give no details of the actual level of precision you require, but your post implies it could be up to a second, which is a very long time for submitting a request to a webserver. On the other hand, scheduling such an event to fire client-side with this level of accuracy is very difficult. I've not tried measuring the accuracy of the scheduler on MS Windows NT, but elsewhere I'd only expect it to be accurate to about 5 minutes. So you'd need to schedule the job to start 5 minutes early, then sleep for 10 milliseconds at a time until the target time rolls around.
But then again, thinking about why you need to run any job with this sort of timing accuracy makes me think you're trying to solve the problem the wrong way.
C.
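A minimal sketch of that coarse-then-fine approach in Python, assuming the request is sent with the third-party requests library and a hypothetical target URL: the OS scheduler only needs to get you close, and a short sleep loop covers the last stretch.

```python
import datetime
import time

import requests  # third-party; any HTTP client would do

TARGET = datetime.datetime(2024, 1, 1, 10, 0, 0)  # hypothetical 10:00:00 target
URL = "http://example.com/submit"                  # hypothetical endpoint

def fire_at(target, url, payload):
    # Coarse wait: rely on the OS scheduler until ~1 s before the target.
    remaining = (target - datetime.datetime.now()).total_seconds()
    if remaining > 1:
        time.sleep(remaining - 1)
    # Fine wait: spin in 10 ms sleeps until the target time rolls around.
    while datetime.datetime.now() < target:
        time.sleep(0.01)
    # Note: this controls when the request is *sent*; network latency and
    # server-side queueing still decide when IIS actually processes it.
    return requests.post(url, data=payload, timeout=30)

# fire_at(TARGET, URL, {"order": "..."})
```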
It sounds like you need a scheduler system rather than trying to do this over HTTP. HTTP is a stateless protocol: you send a request to IIS, you get a response.
What you might want to consider is taking that request and storing the information you require somewhere (a database). Then, using some sort of scheduler (cron jobs, scheduled tasks), you action that information at the desired time, as sketched below.
What you want, you probably can't achieve with IIS; it's not what it is designed to do.
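A minimal sketch of that store-then-act pattern in Python, assuming a SQLite store and a worker invoked by the scheduler; all names here are hypothetical illustrations, not part of the original answer.

```python
import datetime
import sqlite3

DB = "requests.db"  # hypothetical job store

def enqueue(payload, run_at):
    """Called from the HTTP handler: store the work, respond immediately."""
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS jobs "
                "(run_at TEXT, payload TEXT, done INTEGER DEFAULT 0)")
    con.execute("INSERT INTO jobs (run_at, payload) VALUES (?, ?)",
                (run_at.isoformat(), payload))
    con.commit()
    con.close()

def run_due_jobs():
    """Called by the scheduler (cron job / scheduled task), e.g. every minute."""
    now = datetime.datetime.now().isoformat()
    con = sqlite3.connect(DB)
    due = con.execute("SELECT rowid, payload FROM jobs "
                      "WHERE done = 0 AND run_at <= ?", (now,)).fetchall()
    for rowid, payload in due:
        print("processing:", payload)  # the actual time-critical work goes here
        con.execute("UPDATE jobs SET done = 1 WHERE rowid = ?", (rowid,))
    con.commit()
    con.close()
```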
