Hi, is there a way to break an nginx module request handler into phases to simulate async behavior for a third-party library used in a custom module?
Say I have a library that talks to another server, and this library internally manages the connections and request processing with that backend server. nginx won't be able to generate events on the underlying connections. So the approach I can think of is: the nginx handler creates a thread to run the blocking operation, and at that point I want to return control to nginx so it can proceed. Then, when the library API call returns, I want to post an event into the nginx event loop so that it can resume the handler and send the response back to the client.
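To make the intended control flow concrete, here is the same hand-off pattern sketched in Python with asyncio, purely for illustration (the real thing would be C inside the nginx module, and blocking_library_call just stands in for the third-party API):

import asyncio
import concurrent.futures

def blocking_library_call(request):
    # Stand-in for the third-party library's synchronous API: it talks to
    # the backend server and blocks until the reply arrives.
    return "backend reply for %r" % (request,)

async def handle_request(request):
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Run the blocking call on a worker thread; the event loop stays
        # free to service other connections while the thread is busy.
        reply = await loop.run_in_executor(pool, blocking_library_call, request)
    # We resume here only after the worker thread has posted its completion
    # back to the event loop -- the point where the nginx handler would be
    # resumed to send the response to the client.
    return reply

print(asyncio.run(handle_request("GET /")))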
I guess this should be possible? Any pointers on how?
Thanks in advance,
Related
I am creating a web application which has a websocket handler. On each successful connection I append the websocket handler object to a list.
Another handler class called PostResultHandler accepts POST data. This PostResultHandler will be called by a background process which sends JSON data. Once this JSON data is received by PostResultHandler, I want to write it to the list of websockets.
Currently I am just iterating through the list of websockets and writing the JSON data to each one. But I think this may be a blocking call, and the background process which calls the PostResultHandler will be blocked until the result is written to all websockets.
Is there any way to make this piece of code non-blocking so that the background process keeps running without any delay?
The Tornado examples folder includes a chat demo with websockets; it simply does:
for waiter in cls.waiters:
    try:
        waiter.write_message(chat)
    except:
        logging.error("Error sending message", exc_info=True)
This is not a blocking call. The message is buffered on the server immediately and your code continues executing.
The best option is to add a callback on the IOLoop to send the data for each websocket handler instance. You can do something like the following:
tornado.ioloop.IOLoop.instance().add_callback(partial(websocket_handler_instance.write_message, msg))
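For a fuller picture, here is a minimal sketch along those lines (the handler names and the module-level waiters list are illustrative, not taken from your code):

from functools import partial

import tornado.ioloop
import tornado.web
import tornado.websocket

waiters = []  # websocket handler instances, appended on open()

class ResultSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        waiters.append(self)

    def on_close(self):
        waiters.remove(self)

class PostResultHandler(tornado.web.RequestHandler):
    def post(self):
        data = self.request.body.decode("utf-8")
        loop = tornado.ioloop.IOLoop.current()
        # Schedule one callback per waiter; the writes run on later IOLoop
        # iterations, so this POST handler returns to the caller immediately.
        for waiter in waiters:
            loop.add_callback(partial(waiter.write_message, data))
        self.set_status(202)

application = tornado.web.Application([
    (r"/ws", ResultSocket),
    (r"/result", PostResultHandler),
])

if __name__ == "__main__":
    application.listen(8888)
    tornado.ioloop.IOLoop.current().start()

The background process can then POST its JSON to /result and gets its response back without waiting for the websocket writes to complete.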
For example, there is a web app called ababab.com, and for their own purposes they make a few HTTP requests from inside their application.
As an end user I can see what requests/URLs go to ababab.com, but is there any way to know what HTTP calls they are making from inside their application?
Thanks
How can I implement an asynchronous callback scenario in Apigee?
For example, I need to call a host and the host may take some time to process the response. Once the response is ready, it needs to be delivered to the caller/client.
Thanks in Advance
Regards
I can't claim that this is a standard way of doing it, but here is a design:
Assumption: the target host must support registering a callback URL.
When the client calls the Apigee proxy, the proxy in the middle can generate a unique callback URL and send it to the target as a parameter when making the API request. In the meantime it would have to block the client (and start polling an internal store).
The callback URL would itself be a proxy in Apigee that receives the response from the target side and then updates an entry in the Apigee persistence store, which is being polled by the first proxy.
If the callback happens within, say, x seconds, then the Apigee proxy can send the response back to the client. If it does not happen within that time, it can send back an error.
To implement this, you can use a Key Value Map or Caching policy in Apigee for the transient persistence store, and for blocking the client and polling the persistence store you can use Java or JavaScript policies.
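The blocking/polling piece itself would be a Java or JavaScript callout against that KVM/cache; the short Python sketch below is only meant to show the polling logic, with store and correlation_id standing in for whatever keyed lookup the persistence policy gives you (both names are hypothetical):

import time

def wait_for_callback(store, correlation_id, timeout_s=30.0, poll_interval_s=0.5):
    # Poll the transient store until the callback proxy has written the
    # target's response under this correlation id, or give up on timeout.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        response = store.get(correlation_id)
        if response is not None:
            return response        # forward this to the waiting client
        time.sleep(poll_interval_s)
    return None                    # the caller maps this to an error response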
Take a look at https://github.com/apigee/api-platform-samples/tree/master/sample-proxies/async-callout and see if that helps. This sample makes the requests to the target, stores the response handles in the JS "session", goes away to do other things, and then retrieves the handles from the "session" and checks the responses.
Is it possible to call (POST to) a method in an ASPX (code-behind) page via an HTTP endpoint in SQL Server 2008/2005?
First of all, HTTP endpoints are deprecated; no new development should rely on them. Second, they have been, even during their brief lifetime, exclusively for incoming HTTP requests: HTTP endpoints can only serve a SOAP response and can never make an outbound call (GET or POST, it matters not). And last, all the points @gbn already made: never block a transaction on an outbound call. Do the business validation from the calling process, before the insert into the DB.
At worst, if no validation is possible before the insert, queue up a request for validation and place the data in a 'pending' state in the trigger, then commit. Then an external process can scan that queue and service the validation requests. You can Use Tables as Queues.
And no, CLR web calls from triggers are not a solution (I'm sure someone will mention them)...
We have various web-services which accept requests, but there are periods in the day for some of the services where we do not want to accept requests. (I won't go into why at the moment but it is a requirement).
In these cases I'd like to abort the request as it is being received, and I'm wondering about the best way to do this and where the most appropriate place in the ASP.NET pipeline is.
I am currently returning a status of ServiceUnavailable from BeginRequest in Global.asax.
Is this the correct way to implement this type of functionality, or is there a better alternative? (_webState is an internal variable we use for the current state of the service.)
if (_webState != WebStates.Runnable)
{
    Response.StatusCode = (int)System.Net.HttpStatusCode.ServiceUnavailable;
    Response.Write("<p>The Service is not available to accept requests</p>");
    Response.End();
}
I think the correct way is to create a custom HTTP module as an extension of the ASP.NET request-handling pipeline. In this module you will be able to refuse connections according to whatever strategy you need. ServiceUnavailable is a standard way to go; we've done it the same way.
But there is one very big "but" =). Going this way you can only refuse requests that are targeted at the ASP.NET application; if you need to refuse all requests made to a particular application, you will need to write a custom tool that stops the corresponding application pool.