Does Heroku support chunked HTTP POST data? - http

I have an application in development and I would like to send an arbitrary amount of data to the server using a chunked HTTP/1.1 POST request.
All I can see so far is that nothing seems to reach the server after the initial headers:
% cat ~/file.mp3 | curl -T - -X POST http://foo.com/source -v
* About to connect() to foo.com port 80 (#0)
* Trying xx.yy.zz.tt...
* Connected to foo.com (xx.yy.zz.tt) port 80 (#0)
> POST /source HTTP/1.1
> User-Agent: curl/7.30.0
> Host: foo.com
> Accept: */*
> Transfer-Encoding: chunked
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Date: Tue, 14 Jan 2014 14:57:19 GMT
< X-Powered-By: Express
< Content-Length: 13
< Connection: keep-alive
<
* Connection #0 to host foo.com left intact
Thanks, brah!
In the application (Node/Express), if I log the incoming request's "data" events, I see nothing other than:
2014-01-14T14:57:19.977995+00:00 heroku[router]: at=info method=POST path=/source host=foo.com fwd="xx.yy.zz.tt" dyno=web.1 connect=6ms service=73ms status=200 bytes=13
However, locally, the same logging gives:
...
[request] Got 360 bytes of data
[request] Got 16372 bytes of data
[request] Got 16372 bytes of data
[request] Got 16372 bytes of data
[request] Got 48 bytes of data
[request] Got 15974 bytes of data
[request] Got 398 bytes of data
...
Is there anything that I could be missing?
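For what it's worth, the chunked upload above is nothing curl-specific: any HTTP/1.1 client that streams a body of unknown length produces the same kind of chunked POST. A rough Go sketch of the cat ~/file.mp3 | curl -T - pipeline, reusing the placeholder URL from the question:
package main

import (
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    // Stream stdin as the request body. Because the body length is unknown
    // (it is not a bytes/strings reader), the transport sends it with
    // Transfer-Encoding: chunked, just like curl -T - does.
    req, err := http.NewRequest("POST", "http://foo.com/source", os.Stdin) // placeholder URL from the question
    if err != nil {
        log.Fatal(err)
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
}
Run it as cat ~/file.mp3 | go run main.go; the request goes out chunked exactly like the curl transcript above, which makes it a convenient second client for testing whether the router passes the chunks through.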

Related

How to respond to the 'Expect: 100-continue' header in a shelf server

I am using the curl CLI to upload a file to my shelf server. The request contains the header Expect: 100-continue, and curl aborts with curl: (52) Empty reply from server. It seems the correct behavior is for the server to send two status codes (100 and 200):
> POST /upload HTTP/1.1
> Host: 0.0.0.0:8000
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Length: 26252
> Content-Type: multipart/form-data; boundary=------------------------c9c343811f9aee88
> Expect: 100-continue
>
* Expire in 1000 ms for 0 (transfer 0x2f156ed0)
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Date: Thu, 22 Sep 2022 01:51:25 GMT
Rather than making the client drop the Expect: header to avoid this case, what should I do to implement this two-status response on the shelf server end? Thanks!
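shelf is Dart, so the sketch below is not shelf code, but it shows the wire behavior being asked for: with Go's net/http, the interim HTTP/1.1 100 Continue line is written automatically as soon as the handler starts reading the request body, and the final status follows as usual. A rough sketch using the /upload path and port 8000 from the transcript:
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
        // When the request carries "Expect: 100-continue", net/http emits the
        // interim "HTTP/1.1 100 Continue" line the first time the handler
        // reads from r.Body; the final status code is then sent as usual.
        n, err := io.Copy(io.Discard, r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        fmt.Fprintf(w, "received %d bytes\n", n)
    })
    log.Fatal(http.ListenAndServe(":8000", nil))
}
Whatever the framework, the pattern is the same: send the 100 Continue interim response before consuming the body, then send the real status code once the upload has been handled.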

Converting cURL request to http.Request

This cURL command works as it should:
curl -i -v http://localhost:81/hallo
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 81 (#0)
> GET /hallo HTTP/1.1
> Host: localhost:81
> User-Agent: curl/7.55.1
> Accept: */*
>
1 -1 0* Connection #0 to host localhost left intact
Now I tried to do the same HTTP request in my Go service like this:
request, err := http.NewRequest("GET", "http://localhost:81/"+url.QueryEscape("Hallo"), nil)
if err != nil {
    log.Fatal(err)
}
client := &http.Client{}
resp, err := client.Do(request)
If I run the Go code (I tried it with a test), I only get the error net/http: HTTP/1.x transport connection broken: malformed HTTP status code "-1".
(I initially tried http.Get(myurl); it produces the same HTTP request. The current code was generated by https://mholt.github.io/curl-to-go/.)
Can anyone help me understand why these two requests produce different results?
Sample request and response against a real server:
[#xxxx ~]# curl -v -i 10.103.118.178:40000/ready
* About to connect() to 10.103.118.178 port 40000 (#0)
* Trying 10.103.118.178...
* Connected to 10.103.118.178 (10.103.118.178) port 40000 (#0)
> GET /ready HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.103.118.178:40000
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< date: Fri, 25 Dec 2020 01:43:05 GMT
date: Fri, 25 Dec 2020 01:43:05 GMT
< content-length: 2
content-length: 2
< content-type: text/plain; charset=utf-8
content-type: text/plain; charset=utf-8
<
* Connection #0 to host 10.103.118.178 left intact
ok
You use -i to tell curl to include headers in the output, yet there is no valid HTTP response header anywhere in your curl output. Most likely your server is malfunctioning and is not producing a valid HTTP response, and the Go program correctly reports that it cannot parse one. (It tried to interpret the reply as an HTTP response, but where a status code should appear it found -1, which is not a valid HTTP status code.)
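One way to see exactly what the server writes back is to skip the HTTP parser entirely and dump the raw bytes. A rough sketch along those lines, reusing the localhost:81/hallo address from the question:
package main

import (
    "fmt"
    "io"
    "log"
    "net"
)

func main() {
    // Talk to the server over plain TCP and print whatever it sends back,
    // so the (malformed) status line can be inspected byte for byte.
    conn, err := net.Dial("tcp", "localhost:81")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    fmt.Fprint(conn, "GET /hallo HTTP/1.1\r\nHost: localhost:81\r\nConnection: close\r\n\r\n")
    raw, err := io.ReadAll(conn)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%q\n", raw)
}
If the dump does not begin with something like HTTP/1.1 200 OK, the server is not speaking HTTP at all, and both curl -i and the Go client are right to reject it.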

Meaning of curl response

I have downloaded the official consul image and I am running it behind an nginx load balancer.
When I send any HTTP request using curl, for example
curl my-consul-http-endpoint:8500/v1/catalog/nodes
I get the following back:
* Trying 172.29.225.62...
* Connected to my-consul-http-endpoint.com (172.29.225.62) port 80 (#0)
> GET /v1/session/list HTTP/1.1
> Host: my-consul-http-endoint
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Server: nginx
< Date: Tue, 19 Jul 2016 15:32:55 GMT
< Content-Type: text/html
< Content-Length: 18633
< Connection: keep-alive
< ETag: "56f72a9b-48c9"
Connection #0 to host my-consul-http-endpoint left intact
What does the response suggest? Did I get connected to the consul server? Did the server return an error, and did the nginx load balancer in turn return a 502?
The Server: nginx header and the 502 mean your request reached nginx, but nginx could not get a valid response from its consul upstream. Deploy the consul client (agent) on the host and connect to it at http://localhost:8500.
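To check whether consul itself is healthy, it can help to query a consul agent directly, bypassing nginx. A rough Go sketch, assuming an agent listening on its default HTTP port 8500 on localhost:
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Hit the local consul agent's catalog endpoint directly, skipping the
    // nginx load balancer, to check whether consul itself is healthy.
    resp, err := http.Get("http://localhost:8500/v1/catalog/nodes")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Status)
    fmt.Println(string(body))
}
If this returns 200 with a node list while the nginx endpoint returns 502, the problem is in the nginx upstream configuration rather than in consul itself.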

POST from REST console perceived as GET by server

I'm using both the Chrome REST API console and Postman to send a POST request to my server (running nginx and Symfony2).
It's a very simple request: just a POST to a URL with an empty body. If the request is made from another server, it registers as POST. POSTing from the API consoles registers as GET in my nginx access logs and returns a 405 Method Not Allowed.
If I use curl, I initially get a 301 Moved Permanently, so I have to use -L to follow redirects. I'm not sure whether this is standard Symfony behavior or whether it is affecting the request.
I've found some problems with the curl request, but am unsure how to resolve them.
$ curl -v -L -d "1EepG1a63X" xxx.io/api/convert_mov/
* Trying xx.76.9.82...
* Connected to xxx.io (xx.76.9.82) port 80 (#0)
> POST /api/convert_mov/ HTTP/1.1
> Host: xxx.io
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.6.2
< Date: Tue, 15 Sep 2015 09:00:43 GMT
< Content-Type: text/html
< Content-Length: 184
< Connection: keep-alive
< Location: https://xxx.io/api/convert_mov/
<
* Ignoring the response-body
* Connection #0 to host xxx.io left intact
* Issue another request to this URL: 'https://xxx.io/api/convert_mov/'
* Switch from POST to GET
* Found bundle for host xxx.io: 0x7fcad9c14e70
* Trying xx.76.9.82...
* Connected to xxx.io (xx.76.9.82) port 443 (#1)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: xxx.io
* Server certificate: DigiCert SHA2 Secure Server CA
* Server certificate: DigiCert Global Root CA
> GET /api/convert_mov/ HTTP/1.1
> Host: xxx.io
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.6.2
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: PHP/5.5.25
< Cache-Control: no-cache
< Date: Tue, 15 Sep 2015 09:00:43 GMT
If you look closely, you will see your request is made over plain HTTP. Your server then sends a redirect to your HTTPS site, and, as the 'Switch from POST to GET' line shows, the 301 redirect does not preserve the request method. You MUST issue all your requests against HTTPS directly.
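In practice that means pointing curl straight at the HTTPS URL, e.g. curl -v -d "1EepG1a63X" https://xxx.io/api/convert_mov/, so the 301 never happens. The same idea as a rough Go sketch, with the URL and body taken from the question:
package main

import (
    "fmt"
    "log"
    "net/http"
    "strings"
)

func main() {
    // POST directly to the HTTPS endpoint so there is no 301 redirect and
    // therefore no POST-to-GET downgrade along the way.
    resp, err := http.Post(
        "https://xxx.io/api/convert_mov/",
        "application/x-www-form-urlencoded",
        strings.NewReader("1EepG1a63X"),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status)
}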

cURL receives empty body response from Nginx server

I try to fetch HTTP content with cURL, but I only get an empty body in the reply:
[root@www ~]# curl -v http://www.existingdomain.com/
* About to connect() to www.existingdomain.com port 80 (#0)
* Trying 95.211.256.257... connected
* Connected to www.existingdomain.com (95.211.256.257) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.21.0 (x86_64-redhat-linux-gnu) libcurl/7.21.0 NSS/3.12.8.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.4
> Host: www.existingdomain.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/0.8.53
< Date: Sat, 28 May 2011 15:56:23 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
< X-Powered-By: PHP/5.3.3-0.dotdeb.1
<
* Connection #0 to host www.existingdomain.com left intact
* Closing connection #0
If I change the URL to another domain, like www.google.com, I get the content.
How is this possible? And how can I fetch the content?
The server is free to send the client whatever it likes, including nothing. While this is not exactly nice, there is little the client can do about it. You could
check the server logs to see whether some problem is making it so quiet (given the server is under your control), or
try another client to see whether the server simply refuses to talk to curl. You can then configure curl to mimic a regular web browser, if that helps.
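As a quick test of the second suggestion, the sketch below fetches the page with a browser-like User-Agent (the domain is the placeholder from the question); curl -A "Mozilla/5.0" http://www.existingdomain.com/ does the same from the command line:
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Fetch the page while pretending to be a regular browser, to test the
    // theory that the server serves an empty body to curl's default User-Agent.
    req, err := http.NewRequest("GET", "http://www.existingdomain.com/", nil) // placeholder domain from the question
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0")
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s, %d body bytes\n", resp.Status, len(body))
}
If the body is still empty with a browser User-Agent, the problem is more likely on the server side, and the server logs are the place to look.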
