Spring Integration - POST call - how do I deal with timeout - http

WHAT I HAVE:
My code flow is like this:
(1) construct request
(2) POST to URL
(3) Write results to output directory
@Bean
public IntegrationFlow validateRequest() {
    return IntegrationFlows.from("REQUEST_CHANNEL")
            .channel(c -> c.executor(new SimpleAsyncTaskExecutor()))
            .handle(requestModifier, "constructRequest")
            .handle(Http.outboundGateway("POST_URL", restTemplate)
                    .httpMethod(HttpMethod.POST)
                    .mappedRequestHeaders("ab*", "TraceabilityID", authenticator.getToken())
                    .charset("UTF-8")
                    .expectedResponseType(Response.class))
            .handle(outputWriter, "writeToDir")
            .get();
}
WHAT I NEED:
The timeout for the POST_URL is 20000 ms.
My code tries to write the response before the POST has completed or timed out, and it fails with a NullPointerException.
Which of the approaches below should I use?
-> Add a wait() to Http.outboundGateway so that the thread waits at least 20 s for a response.
-> Make the whole thread sleep for 20 s. Can you please give me an example of this?
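For reference, neither a wait() nor a sleep is normally needed: Http.outboundGateway backed by a RestTemplate is a blocking call, so the usual approach is to give the injected restTemplate explicit connect/read timeouts and let the gateway wait for the response (or the timeout) before the next handler runs. A minimal sketch, not from the original post; the bean name, the configuration class, and the 5 000 ms connect limit are assumptions, and the read timeout simply mirrors the 20 000 ms limit mentioned above:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestTemplateConfig {

    // Hypothetical bean; wire it into Http.outboundGateway("POST_URL", restTemplate).
    @Bean
    public RestTemplate restTemplate() {
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setConnectTimeout(5_000);   // assumed connect limit, in ms
        factory.setReadTimeout(20_000);     // mirrors the 20 000 ms POST_URL timeout
        return new RestTemplate(factory);
    }
}

With timeouts configured this way, the writeToDir handler should only run once a response has come back (or the call has failed with a timeout exception), rather than relying on an explicit sleep.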

Related

Async server does not process requests while a request is stuck

I am new to gRPC, so please let me know if I am doing something wrong here. I am looking at the greeter_async_server.cc example code. This seems to work fine for normal requests, but I wanted to simulate a request getting stuck on the server, so I added a sleep in the processing loop. I added it right before Finish is called on the responder so that it sits in the actual processing logic of the request. While the server thread is sleeping, it will not accept any new requests until the thread is free. I attempted to create another client request while the original request on the server was sleeping, but the gRPC server would not process it. The client seemed to be stuck until the server came out of the sleep.
I also broke into the process with a debugger, but the only request I saw was the one that was sleeping. The other threads were waiting on the completion queue.
I am new to gRPC, so if I am doing this wrong, please let me know what I need to do to handle requests while another request is stuck.
void Proceed() {
  if (status_ == CREATE) {
    // Make this instance progress to the PROCESS state.
    status_ = PROCESS;

    // As part of the initial CREATE state, we *request* that the system
    // start processing SayHello requests. In this request, "this" acts as
    // the tag uniquely identifying the request (so that different CallData
    // instances can serve different requests concurrently), in this case
    // the memory address of this CallData instance.
    service_->RequestSayHello(&ctx_, &request_, &responder_, cq_, cq_, this);
  } else if (status_ == PROCESS) {
    // Spawn a new CallData instance to serve new clients while we process
    // the one for this CallData. The instance will deallocate itself as
    // part of its FINISH state.
    new CallData(service_, cq_);

    // The actual processing.
    std::string prefix("Hello ");
    reply_.set_message(prefix + request_.name());

    // Sleep added (by me) to simulate a request getting stuck in its processing logic.
    Sleep((DWORD)-1);

    // And we are done! Let the gRPC runtime know we've finished, using the
    // memory address of this instance as the uniquely identifying tag for
    // the event.
    status_ = FINISH;
    responder_.Finish(reply_, Status::OK, this);
  } else {
    GPR_ASSERT(status_ == FINISH);
    // Once in the FINISH state, deallocate ourselves (CallData).
    delete this;
  }
}

What does the BLE-status code "-402" mean?

I have a Garmin Connect IQ project in which I make a web request. Since yesterday I have been getting the error code -402.
According to https://developer.garmin.com/downloads/connect-iq/monkey-c/doc/Toybox/Communications/OAuthMessage.html#responseCode-instance_method, negative values stand for BLE responses and positive values are the HTTP response codes. Does anybody know what -402 stands for?
I am using the Connect IQ SDK 3.0.10.
I tried to find out what the error code means, but I haven't found a list containing the code "-402" or "402".
Below are the two code snippets used for the request. The url argument is our API URL, which works fine in a browser.
// This function makes the request
function makeRequest(url) {
    jsonFile = Communications.makeJsonRequest(url, {}, {}, method(:onReceive));
}

// This is the callback method that is called when data has arrived
function onReceive(responseCode, data) {
    if (responseCode == 200) {
        notify.invoke(1, data);
    } else {
        System.println(responseCode);
        notify.invoke(0, "Failed to load\nError: " + responseCode.toString());
    }
}
If you look at the API docs for the Communications module, you will see that -402 is the error code returned when the results sent back from your request were too large.
NETWORK_RESPONSE_TOO_LARGE = -402
Most devices have a very limited amount of memory, so you may need to run your request through some sort of proxy server that makes the request and trims the results down to only what you require before sending the data to your device.

Why doesn't the main thread return the response immediately when I call an async method?

I have written some test code in a new web application, as below:
public ActionResult Index()
{
    Logger.Write("start Index,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
    MyMethodAsync(System.Web.HttpContext.Current.Request); // no await, and has a warning
    Logger.Write("end Index,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
    return View();
}

private async Task MyMethodAsync(HttpRequest request)
{
    Logger.Write("start MyMethodAsync,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
    await SomeMethodAsync(request);
    Logger.Write("end MyMethodAsync,threadId:" + System.Threading.Thread.CurrentThread.ManagedThreadId);
}
And here is the log:
2017-11-15 19:55:31.904 start Index,threadId:35
2017-11-15 19:55:31.919 start MyMethodAsync,threadId:35
2017-11-15 19:55:31.919 end Index,threadId:35
2017-11-15 19:55:53.324 end MyMethodAsync,threadId:46
The client browser receives the response at about 2017-11-15 19:55:32, which matches my understanding. In my actual production environment it writes the same log as above; however, the client browser received the response about 22 seconds later, at about 2017-11-15 19:55:54. It seems that even though the main thread has completed its work, the response is not returned until the new thread completes its work.
I have been debugging this problem for several days. Could you please help me?
async-await does not change the HTTP protocol. The request goes to the server, the server produces a response and sends it to the client.
It only changes how the request is processed inside ASP.NET.
And it doesn't make the request handling faster. Quite the contrary.
But it does use fewer thread pool threads, which makes the server more responsive under heavy load.

Sending TCP data without receiving (boost asio)

I'm working my way through Boost's Asio tutorial, looking into their chat example. More specifically, I'm trying to split their chat client from a combined sender+receiver into just a sender and just a receiver, but I'm seeing some behaviour that I can't explain.
The setup consists of:
boost::asio::io_service io_service;
tcp::resolver::iterator endpoint = resolver.resolve(...);
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
boost::asio::async_connect(socket, endpoint, bind(handle_connect, ... ));
The sending portion effectively consists of:
while (std::cin.getline(str))
    io_service.post(bind(do_write, str));
and
void do_write(string str)
{
    boost::asio::async_write(socket, str, bind(handle_write, ...));
}
The receive section consists of
void handle_connect(...)
{
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}

void handle_read(...)
{
    std::cout << read_msg_;
    boost::asio::async_read(socket, read_msg_, bind(handle_read, ...));
}
If I comment out the content of handle_connect to isolate the send portion, my other client (compiled using the original code) does not receive anything. If I revert, then comment out the content of handle_read, my other client only receives the first message.
Why is it necessary to call async_read() in order to be able to post() an async_write()?
The full unmodified code is linked above.
The problem here is that your io_service is running out of work and stops processing requests even before you start sending your chat messages.
If you comment out the body of handle_connect, then the only work it had to do was to dispatch the handle_connect handler and then execute it once the connection was done.
std::size_t scheduler::run(asio::error_code& ec)
{
  .....
  mutex::scoped_lock lock(mutex_);

  std::size_t n = 0;
  for (; do_run_one(lock, this_thread, ec); lock.lock())
    if (n != (std::numeric_limits<std::size_t>::max)())
      ++n;
  return n;
}
So you have to provide it with something in its operation queue. This was done by the handle_read_header handler in the original code, as that handler would always be in need of servicing until the client received something from the server.
You can do what you want to do by providing work to the io_service.
asio::io_context io_context;
asio::io_context::work wrk(io_context); // make `run` run forever
tcp::resolver resolver(io_context);
tcp::resolver::results_type endpoints = resolver.resolve(argv[1], argv[2]);
chat_client c(io_context, endpoints);
asio::thread t(boost::bind(&asio::io_context::run, &io_context));

ASP.Net httpruntime executionTimeout not working (and yes debug=false)

We just recently noticed that executionTimeout has stopped working on our website. It was definitely working ~last year ... hard to say when it stopped.
We are currently running on:
Windows-2008x64
IIS7
32bit binaries
Managed Pipeline Mode = classic
Framework version = v2.0
Web.Config has
<compilation defaultLanguage="vb" debug="false" batch="true">
<httpRuntime executionTimeout="90" />
Any hints on why we are seeing time-taken all the way up to ~20 minutes? Would the compilation options for DebugType (full vs. pdbonly) have any effect?
datetime        timetaken  httpmethod  Status  Sent    Received
12/19/10 0:10   901338     POST        302     456     24273
12/19/10 0:18   1817446    POST        302     0       114236
12/19/10 0:16   246923     POST        400     0       28512
12/19/10 0:12   220450     POST        302     0       65227
12/19/10 0:22   400150     GET         200     180835  416
12/19/10 0:20   335455     POST        400     0       36135
12/19/10 0:57   213210     POST        302     0       51558
12/19/10 0:48   352742     POST        302     438     25802
12/19/10 0:37   958660     POST        400     0       24558
12/19/10 0:06   202025     POST        302     0       58349
Execution timeout and time-taken are two different things, although the size of the discrepancy here is troubling.
time-taken includes all of the network time in the request/response (under certain conditions). The network transfer time easily outstrips the amount of time a request really takes, though normally I'm used to just seconds of difference, not minutes.
Execution timeout refers only to the amount of time the worker process spent processing the request, which is just a subset of time-taken. It only applies if the debug attribute is set to false, which it looks like you have.
Of course, assuming the first request you listed took the full 90 seconds of allowed time out, that still leaves 13.5 minutes left in the time-taken window to transfer essentially 24k of data. That sounds like a serious network issue.
So, either you have a serious transport issue or there is another web.config file somewhere in the tree where the requests are being processed that either sets debug to true or increases the execution timeout to something astronomical.
Another possibility is that the page itself has either the debug attribute set or its own timeout values.
I have a theory but I'm not sure how to prove it. I've done something similar to cbcolin and logged the time when the request starts from within the BeginRequest event handler. Then when the request times out (1 hour later in our case) it is logged in the database and a timestamp recorded.
So here is the theory: ASP.NET only counts time that the thread is actually executing, not time that it is asleep.
So after BeginRequest the thread goes to sleep until the entire POST body is received by IIS. Then the thread is woken up to do work and the executionTimeout clock starts running. So time spent in the network transmission phase is not counted against executionTimeout. Eventually the site-wide connection timeout is hit and IIS closes the connection, resulting in an exception in ASP.NET.
BeginRequest and even PreRequestHandlerExecute both get called before the POST body is transferred to the web server. Then there is a long gap before the request handler is called. So it may look like .NET had the request for 30 minutes, but the thread wasn't running that long.
I'm going to start logging the time that the request handler actually starts running and see if it ever goes over the limit I set.
Now, as to how to control how long a request can stay in this transmission phase on a per-URL basis, I have no idea. On a global level we can set minBytesPerSecond in webLimits for the application (see the sketch below); there is no UI for it that I can find. This should kick off ultra-slow clients in the transmission phase.
That still won't solve the problem for DoS attacks that actually send data.
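For reference, minBytesPerSecond is set in the system.applicationHost/webLimits section of applicationHost.config rather than through the IIS Manager UI. A minimal sketch; the 250-byte threshold below is only an illustrative value, not a recommendation:

<system.applicationHost>
    <webLimits minBytesPerSecond="250" />
</system.applicationHost>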
I came across this article two days ago when I had the same problem. I tried everything; it worked on my local machine but did not work on the production server. Today I have a workaround that fixes the problem and would like to share it. Microsoft seems not to apply the timeout to IHttpAsyncHandler, and I take advantage of that. On my system I only have one handler that is time-consuming, so this solution works for me.
My handler code looks like this:
public class Handler1 : IHttpAsyncHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    { }

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        // My business logic is here
        AsynchOperation asynch = new AsynchOperation(cb, context, extraData);
        asynch.StartAsyncWork();
        return asynch;
    }

    public void EndProcessRequest(IAsyncResult result)
    { }
}
And my helper class:
class AsynchOperation : IAsyncResult
{
    private bool _completed;
    private Object _state;
    private AsyncCallback _callback;
    private HttpContext _context;

    bool IAsyncResult.IsCompleted { get { return _completed; } }
    WaitHandle IAsyncResult.AsyncWaitHandle { get { return null; } }
    Object IAsyncResult.AsyncState { get { return _state; } }
    bool IAsyncResult.CompletedSynchronously { get { return false; } }

    public AsynchOperation(AsyncCallback callback, HttpContext context, Object state)
    {
        _callback = callback;
        _context = context;
        _state = state;
        _completed = false;
    }

    public void StartAsyncWork()
    {
        _completed = true;
        _callback(this);
    }
}
In this approach we do not actually do anything asynchronously; AsynchOperation is just a fake async task. All of my business logic is still executed on the main thread, which does not change any behavior of the existing code.
