Play Framework running Asynchronous Web Calls times out

I have to perform three HTTP requests from a Play Application. These calls are directed towards three subprojects of the main application. The architecture looks like this:
main app
  |
  +-- modules
        |
        +-- component1
        |
        +-- component2
        |
        +-- component3
Each component* is an individual sbt subproject.
The main class in the main app runs this Action:
def service = Action.async(BodyParsers.parse.json) { implicit request =>
  val query = request.body
  val url1 = "http://localhost:9000/one"
  val url2 = "http://localhost:9000/two"
  val url3 = "http://localhost:9000/three"
  val sync_calls = for {
    a <- ws.url(url1).withRequestTimeout(Duration.Inf).withHeaders("Content-Type" -> "application/json")
      .withBody(query).get()
    b <- ws.url(url2).withRequestTimeout(Duration.Inf).withHeaders("Content-Type" -> "application/json")
      .withBody(a.body).get()
    c <- ws.url(url3).withRequestTimeout(Duration.Inf).withHeaders("Content-Type" -> "application/json")
      .withBody(b.body).get()
  } yield c.body
  sync_calls.map(x => Ok(x))
}
The components need to be activated one after the other, so the calls must run in sequence. Each of them runs a Spark job. However, when I call the service action I get this error:
[error] application -
! #71og96ol6 - Internal server error, for (GET) [/automatic] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]] [TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:280)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:206)
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:160)
at play.api.DefaultGlobal$.onError(GlobalSettings.scala:188)
at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:98)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:100)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:99)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
Caused by: java.util.concurrent.TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms
at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
at org.asynchttpclient.netty.timeout.ReadTimeoutTimerTask.run(ReadTimeoutTimerTask.java:54)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
at java.lang.Thread.run(Thread.java:745)
and
HTTP/1.1 500 Internal Server Error
Content-Length: 4931
Content-Type: text/html; charset=utf-8
Date: Tue, 25 Oct 2016 21:06:17 GMT
<body id="play-error-page">
<p id="detail" class="pre">[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]</p>
</body>
I specifically set the timeout for each call to Duration.Inf for the purpose of avoiding timeouts. Why is this happening, and how do I fix it?
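For reference, the 120000 ms read timeout in the stack trace appears to be the WS client's global default (2 minutes), which is configured separately from any per-request setting and, depending on the Play version, may still apply when Duration.Inf is requested. A hedged sketch of raising those defaults in application.conf (assuming Play 2.5+ play.ws.timeout.* keys; the values are illustrative, not a recommendation):

# application.conf -- a sketch, assuming Play 2.5+ WS configuration keys
play.ws.timeout.connection = 2 minutes
play.ws.timeout.idle = 10 minutes     # read timeout between consecutive bytes
play.ws.timeout.request = 10 minutes  # overall time limit per request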

Related

Karate fails to close earlier started Chrome processes

In my project I use Karate 0.9.5 and Oracle JDK 8.
From time to time our pipeline hits a problem: Karate fails to close Chrome processes it started earlier. Is there any solution to this problem?
I tried to solve it by explicitly calling close() and quit(), but it didn't help. After the run finished I still found a couple of active Chrome processes on the server.
Here's a log:
12:51:45.227 preferred port 9222 not available, will use: 52436
12:51:47.524 request:
1 > GET http://localhost:52436/json
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Host: localhost:52436
1 > User-Agent: Apache-HttpClient/4.5.11 (Java/1.8.0_181)
12:51:47.584 response time in milliseconds: 58,31
1 < 200
1 < Content-Length: 361
1 < Content-Type: application/json; charset=UTF-8
[ {
"description": "",
"devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:52436/devtools/page/4BD6A5C19E01B01D88995CD69367F81F",
"id": "4BD6A5C19E01B01D88995CD69367F81F",
"title": "",
"type": "page",
"url": "about:blank",
"webSocketDebuggerUrl": "ws://localhost:52436/devtools/page/4BD6A5C19E01B01D88995CD69367F81F"
} ]
12:51:47.584 root frame id: 4BD6A5C19E01B01D88995CD69367F81F
12:51:47.632 >> {"method":"Target.activateTarget","params":{"targetId":"4BD6A5C19E01B01D88995CD69367F81F"},"id":1}
12:51:47.638 << {"id":1,"result":{}}
12:51:47.639 >> {"method":"Page.enable","id":2}
12:51:47.883 << {"id":2,"result":{}}
12:51:47.885 >> {"method":"Runtime.enable","id":3}
12:51:47.889 << {"method":"Runtime.executionContextCreated","params":{"context":{"id":1,"origin":"://","name":"","auxData":{"isDefault":true,"type":"default","frameId":"4BD6A5C19E01B01D88995CD69367F81F"}}}}
12:51:47.890 << {"id":3,"result":{}}
12:51:47.890 >> {"method":"Target.setAutoAttach","params":{"autoAttach":true,"waitForDebuggerOnStart":false,"flatten":true},"id":4}
12:51:47.892 << {"id":4,"result":{}}
12:52:02.894 << timed out after milliseconds: 15000 - [id: 4, method: Target.setAutoAttach, params: {autoAttach=true, waitForDebuggerOnStart=false, flatten=true}]
12:52:02.917 driver config / start failed: failed to get reply for: [id: 4, method: Target.setAutoAttach, params: {autoAttach=true, waitForDebuggerOnStart=false, flatten=true}], options: {type=chrome, showDriverLog=true, httpConfig={readTimeout=60000}, headless=true, target=null}
I think the easiest way to solve it is to extend the root web driver interface com.intuit.karate.driver.Driver with an additional method like getPID(), which would return the PID of the launched process.
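A minimal sketch of that proposal (this getPID() method is hypothetical and does not exist in Karate's actual API):

// Hypothetical addition to com.intuit.karate.driver.Driver -- a sketch of
// the proposal above, not real Karate code.
public interface Driver {
    // ... existing driver methods ...

    // hypothetical: return the OS process id of the launched browser,
    // so a teardown hook could force-kill it if quit() leaves it running
    long getPID();
}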
Can you log a bug and also provide some information on what happened before this? Are you using the Docker image, etc.? It looks like the previous Scenario did not shut down cleanly.
But before that, do you mind upgrading to 0.9.6.RC3 (just released) and trying that? There have been a couple of tweaks to the life-cycle, especially around closing Chrome without waiting for a response. It may also be worth making a couple of changes: a) send Target.setAutoAttach without waiting, and b) add a configure option to not start Chrome if the port is in use.
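For context, the driver options shown in the log above are the ones set per feature with configure driver, e.g. (option names taken from the log line; the exact set to use is an assumption):

* configure driver = { type: 'chrome', headless: true, showDriverLog: true }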

Transfer Encoding chunked with okhttp only delivers full result

I am trying to get some insights on a chunked endpoint and therefore planned to print what the server sends me chunk by chunk. I failed to do so, so I wrote a test to see whether OkHttp/Retrofit behave as I expect.
The following test should deliver some chunks to the console but all I get is the full response.
I am a bit lost as to what I am missing to even make OkHttp3's MockWebServer send me chunks.
I found this retrofit issue entry but the answer is a bit ambiguous for me: Chunked Transfer Encoding Response
class ChunkTest {

    @Rule
    @JvmField
    val rule = RxImmediateSchedulerRule() // custom rule to run Rx synchronously

    @Test
    fun `test Chunked Response`() {
        val mockWebServer = MockWebServer()
        mockWebServer.enqueue(getMockChunkedResponse())
        val retrofit = Retrofit.Builder()
            .baseUrl(mockWebServer.url("/"))
            .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
            .client(OkHttpClient.Builder().build())
            .build()
        val chunkedApi = retrofit.create(ChunkedApi::class.java)

        chunkedApi.getChunked()
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe({
                System.out.println(it.string())
            }, {
                System.out.println(it.message)
            })

        mockWebServer.shutdown()
    }

    private fun getMockChunkedResponse(): MockResponse {
        val mockResponse = MockResponse()
        mockResponse.setHeader("Transfer-Encoding", "chunked")
        mockResponse.setChunkedBody("THIS IS A CHUNKED RESPONSE!", 5)
        return mockResponse
    }
}

interface ChunkedApi {
    @Streaming
    @GET("/")
    fun getChunked(): Flowable<ResponseBody>
}
Test console output:
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS IS A CHUNKED RESPONSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed
I expected it to be more like this (body "cut" every 5 bytes):
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS
IS A
CHUNKE
D RESPO
NSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed
The OkHttp MockWebServer does chunk the data; however, it looks like the LoggingInterceptor waits until the whole chunk buffer is full and then displays it.
From this nice summary about HTTP streaming:
The use of Transfer-Encoding: chunked is what allows streaming within a single request or response. This means that the data is transmitted in a chunked manner, and does not impact the representation of the content.
With that in mind, we are dealing with one "request / response", which means we have to retrieve the chunks before the entire response completes, pushing each chunk into our own buffer, all of it inside an OkHttp network interceptor.
Here is an example of said NetworkInterceptor:
class ChunksInterceptor : Interceptor {

    val Utf8Charset = Charset.forName("UTF-8")

    override fun intercept(chain: Interceptor.Chain): Response {
        val originalResponse = chain.proceed(chain.request())
        val responseBody = originalResponse.body()
        val source = responseBody!!.source()
        val buffer = Buffer() // we create our own buffer

        // exhausted() returns true once there are no more bytes in this source
        while (!source.exhausted()) {
            val readBytes = source.read(buffer, Long.MAX_VALUE) // read as many bytes as are currently available
            val data = buffer.readString(Utf8Charset)

            println("Read: $readBytes bytes")
            println("Content: \n $data \n")
        }

        return originalResponse
    }
}
Then of course we register this Network Interceptor on the OkHttp client.
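A minimal sketch of that registration (addNetworkInterceptor is the relevant OkHttp builder method; the Retrofit wiring simply mirrors the test above):

val client = OkHttpClient.Builder()
    .addNetworkInterceptor(ChunksInterceptor()) // network interceptors see the raw wire stream
    .build()

val retrofit = Retrofit.Builder()
    .baseUrl(mockWebServer.url("/"))
    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
    .client(client)
    .build()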

Creating a rvest::html_session to a servr::httd http server

I'm working on a project that requires accessing webpages, and I do this via rvest::html_session(). For documentation and training I would like to set up a reproducible example, and have considered the following:

1. Use servr::httd(system.file("egwebsite", package = "<pkgname>"), daemon = TRUE, browser = FALSE) to set up a simple HTTP server.
2. Use rvest::html_session("http://127.0.0.1:4321") to set up the html session.

However, the following simple example behaves differently on Linux (Debian 9) and Windows 10. (I do not have easy access to OSX and have not tested on that OS.)
# On Windows
servr::httd(daemon = TRUE, browser = FALSE, port = 4321)
## Serving the directory /home/dewittpe/so/my-servr-question at http://127.0.0.1:4321
## To stop the server, run servr::daemon_stop("94019719908480") or restart your R session
R.utils::withTimeout(
  {
    s <- rvest::html_session("http://127.0.0.1:4321")
  },
  timeout = 3,
  onTimeout = "error")
s
## <session> http://127.0.0.1:4321/
## Status: 200
## Type: text/html
## Size: 2352
servr::daemon_stop()
However, on my Linux box (Debian 9) I get the following
servr::httd(daemon = TRUE, browser = FALSE, port = 4321)
## Serving the directory /home/dewittpe/so/my-servr-question at http://127.0.0.1:4321
## To stop the server, run servr::daemon_stop("94019719908480") or restart your R session
R.utils::withTimeout(
  {
    s <- rvest::html_session("http://127.0.0.1:4321")
  },
  timeout = 3,
  onTimeout = "error")
## Error: reached elapsed time limit
## Error in curl::curl_fetch_memory(url, handle = handle) :
## Operation was aborted by an application callback
That is, I am unable to create an html_session in the same interactive R session that spawned the HTTP server. If, however, I start a second R session while leaving the initial session running, I am able to create the html_session without error.
What can I do so that I can create an html_session based on a servr::httd
HTTP server within the same R session on Linux?
Edit 1
If I add httr::verbose() to the html_session call I get the following when the session is created successfully. When the process hangs and fails to create the session the output stops on the last -> and none of the lines with <- are shown.
> s <- html_session("http://127.0.0.1:4321", httr::verbose())
-> GET / HTTP/1.1
-> Host: 127.0.0.1:4321
-> User-Agent: libcurl/7.52.1 r-curl/3.1 httr/1.3.1
-> Accept-Encoding: gzip, deflate
-> Accept: application/json, text/xml, application/xml, */*
->
<- HTTP/1.1 200 OK
<- Content-Type: text/html
<- Content-Length: 61303
<-
I have found a solution to my problem: run servr::httd in a subprocess. (This is consistent with the verbose output above: the daemonized server apparently cannot answer a request while the same R process is blocked inside curl, which would also explain why a second R session works.) This solution requires the subprocess package.
First, a helper function, R_binary, returns the file path for the R binary on Windows or a Unix-based OS.
R_binary <- function() {
  R_exe <- ifelse(tolower(.Platform$OS.type) == "windows", "R.exe", "R")
  return(file.path(R.home("bin"), R_exe))
}
Next, start R vanilla as a subprocess.
subR <- subprocess::spawn_process(R_binary(), c("--vanilla"))
Then start the HTTP server in the subprocess
subprocess::process_write(subR, 'servr::httd(".", browser = FALSE, port = 4321)\n')
## [1] 47
subprocess::process_read(subR)$stderr
## [1] "Serving the directory /home/dewittpe/so/my-servr-question at http://127.0.0.1:4321"
A quick test to show that there is communication between the active R session and the HTTP server:
session <- rvest::html_session("http://127.0.0.1:4321")
session
## <session> http://127.0.0.1:4321/
## Status: 200
## Type: text/html
## Size: 1054
And finally, kill the subprocess
subprocess::process_kill(subR)

Riak db returns 500 Internal server error

I seem to be having an issue with our riak-kv db v2.2.3. We are currently running a 5-node cluster, and out of nowhere last week requests to our db suddenly started returning 500 Internal Server Error for almost every request (it might also be worth mentioning that our db has nearly doubled in size over the past 2 weeks). At first we thought the issue was in the code making the requests; however, after sshing into one of the nodes in the cluster and attempting a simple request to list buckets, we saw this:
Command:
curl -i http://localhost:8098/buckets?buckets=true
Response:
HTTP/1.1 500 Internal Server Error
Vary: Accept-Encoding
Server: MochiWeb/1.1 WebMachine/1.10.9 (cafe not found)
Date: Sun, 10 Dec 2017 06:21:20 GMT
Content-Type: text/html
Content-Length: 1193
<html><head><title>500 Internal Server Error</title></head><body>
<h1>Internal Server Error</h1>The server encountered an error while
processing this request:<br><pre>{error,
{error,
{badmatch,{error,mailbox_overload}},
[{riak_kv_wm_buckets,produce_bucket_list,2,
[{file,"src/riak_kv_wm_buckets.erl"},{line,225}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,186}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,142}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,562}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,2,
[{file,"src/webmachine_mochiweb.erl"},{line,72}]},
{mochiweb_http,headers,5,
[{file,"src/mochiweb_http.erl"},{line,105}]}]}}</pre><P><HR>
<ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
After some more investigation into the issue, I pulled the logs on one of the riak nodes and saw this:
2017-12-10 03:54:58.654 [error]
<0.24342.271>@yz_solrq_helper:send_solr_ops_for_entries:301 Updating a
batch of Solr operations failed for index <<"attachment">> with error
{error,
{error,{other,{ok,"500",[{"Content-Type","application/json;
charset=UTF-8"},{"Transfer-Encoding","chunked"}],<<"
{\"responseHeader\":{\"status\":500,\"QTime\":1},
"error": {
"msg": "Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.",
"trace": "org.apache.solr.common.SolrException: Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:169)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:952)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:692)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:141)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:106)
at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:68)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclip..." >>
}
}
}
After some googling, some people said this error is:
Caused by: java.lang.OutOfMemoryError: Java heap space
So I modified my riak config from:
search.solr.jvm_options = -d64 -Xms4g -Xmx4g -XX:+UseStringCache -XX:+UseCompressedOops
To:
search.solr.jvm_options = -d64 -Xms8g -Xmx8g -XX:+UseStringCache -XX:+UseCompressedOops
However, I am still getting the same error. Any help is much appreciated.

Why is do_GET much faster than do_POST

Python's BaseHTTPRequestHandler has an issue with forms sent through post!
I have seen other people asking the same question (Why GET method is faster than POST?), but the time difference in my case is far too large (1 second).
Python server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import datetime

def get_ms_since_start(start=False):
    global start_ms
    cur_time = datetime.datetime.now()
    # I made sure to stay within hour boundaries while making requests
    ms = cur_time.minute*60000 + cur_time.second*1000 + int(cur_time.microsecond/1000)
    if start:
        start_ms = ms
        return 0
    else:
        return ms - start_ms

class MyServer(BaseHTTPRequestHandler, object):
    def do_GET(self):
        print "Start get method at %d ms" % get_ms_since_start(True)
        field_data = self.path
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(field_data))
        print "Sent response at %d ms" % get_ms_since_start()
        return

    def do_POST(self):
        print "Start post method at %d ms" % get_ms_since_start(True)
        length = int(self.headers.getheader('content-length'))
        print "Length to read is %d at %d ms" % (length, get_ms_since_start())
        field_data = self.rfile.read(length)
        print "Reading rfile completed at %d ms" % get_ms_since_start()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(field_data))
        print "Sent response at %d ms" % get_ms_since_start()
        return

if __name__ == '__main__':
    server = HTTPServer(('0.0.0.0', 8082), MyServer)
    print 'Starting server, use <Ctrl-C> to stop'
    server.serve_forever()
Get request with python server is very fast
time curl -i http://0.0.0.0:8082\?one\=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 07:13:47 GMT
/?one=1curl http://0.0.0.0:8082\?one\=1 0.00s user 0.00s system 45% cpu 0.012 total
and on the server side:
Start get method at 0 ms
127.0.0.1 - - [18/Sep/2016 00:26:30] "GET /?one=1 HTTP/1.1" 200 -
Sent response at 0 ms
Instantaneous!
Post request when sending form to python server is very slow
time curl http://0.0.0.0:8082 -F one=1
prints
--------------------------2b10061ae9d79733
Content-Disposition: form-data; name="one"
1
--------------------------2b10061ae9d79733--
curl http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 0% cpu 1.015 total
and on the server side:
Start post method at 0 ms
Length to read is 139 at 0 ms
Reading rfile completed at 1002 ms
127.0.0.1 - - [18/Sep/2016 00:27:16] "POST / HTTP/1.1" 200 -
Sent response at 1002 ms
Specifically, self.rfile.read(length) is taking 1 second for very little form data
Post request when sending data (not form) to python server is very fast
time curl -i http://0.0.0.0:8082 -d one=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 09:09:25 GMT
one=1curl -i http://0.0.0.0:8082 -d one=1 0.00s user 0.00s system 32% cpu 0.022 total
and on the server side:
Start post method at 0
Length to read is 5 at 0
Reading rfile completed at 0
127.0.0.1 - - [18/Sep/2016 02:10:18] "POST / HTTP/1.1" 200 -
Sent response at 0
node.js server:
var http = require('http');
var qs = require('querystring');

http.createServer(function(req, res) {
    if (req.method == 'POST') {
        whole = ''
        req.on('data', function(chunk) {
            whole += chunk.toString()
        })
        req.on('end', function() {
            console.log(whole)
            res.writeHead(200, 'OK', {'Content-Type': 'text/html'})
            res.end('Data received.')
        })
    }
}).listen(8082)
Post request when sending form to node.js server is very fast
time curl -i http://0.0.0.0:8082 -F one=1
prints:
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Type: text/html
Date: Sun, 18 Sep 2016 10:31:38 GMT
Connection: keep-alive
Transfer-Encoding: chunked
Data received.curl -i http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 42% cpu 0.013 total
I think this is the answer to your problem: libcurl delays for 1 second before uploading data, command-line curl does not
libcurl sends the Expect: 100-continue header and waits 1 second for a response before sending the form data (in the case of the -F option).
In the case of -d, it does not send the 100-continue header, for whatever reason.
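If that is the cause, suppressing the header on the client side should make the -F case as fast as -d; giving curl a header with an empty value tells it to omit that header entirely:

time curl -i http://0.0.0.0:8082 -F one=1 -H 'Expect:'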
