cURL not returning after receiving 404 response

I have a web server I made in C (a school project).
It seems to handle 200 responses fine. However, when I try to use cURL to test a 404, curl doesn't return/exit.
For example:
$ curl localhost:5555/q.txt
...
If I use verbose mode:
$ curl -v localhost:5555/q.txt
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5555 (#0)
> GET /q.txt HTTP/1.1
> User-Agent: curl/7.40.0
> Host: localhost:5555
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 404 Not Found
< Date: Fri, 14 Aug 2015 11:16:32 AEST
< Connection: close
< Server: myserver/jnd
<
The full message I am sending back is:
HTTP/1.0 404 Not Found\r\nDate: Fri, 14 Aug 2015 11:18:10 AEST\r\nConnection: close\r\nServer: myserver/jnd\r\n\r\n
Am I missing something here? A newline or other formatting?

It seems cURL will just wait until you close the connection.
So if I call close(fd) after sending the response, it works.
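This is expected behaviour rather than a curl bug, and curl's own verbose line "* HTTP 1.0, assume close after body" gives the hint: in HTTP/1.0, a response with no Content-Length header is delimited by the server closing the connection, so curl keeps reading until EOF. Closing the socket after writing is one fix; a sketch of the other (the Content-Length: 0 header is my addition, not in the original message) is to declare the empty body explicitly so the client knows nothing more is coming:
HTTP/1.0 404 Not Found\r\nDate: Fri, 14 Aug 2015 11:18:10 AEST\r\nContent-Length: 0\r\nConnection: close\r\nServer: myserver/jnd\r\n\r\n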


Converting cURL request to http.Request

This cURL command works as it should:
curl -i -v http://localhost:81/hallo
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 81 (#0)
> GET /hallo HTTP/1.1
> Host: localhost:81
> User-Agent: curl/7.55.1
> Accept: */*
>
1 -1 0* Connection #0 to host localhost left intact
Now I tried to make the same HTTP request in my Go service like this:
request, err := http.NewRequest("GET", "http://localhost:81/" + url.QueryEscape("Hallo"), nil)
client := &http.Client{}
resp, err := client.Do(request)
If I run the Go code (I tried it with a test), it only produces this error: net/http: HTTP/1.x transport connection broken: malformed HTTP status code "-1".
(I initially tried http.Get(myurl); it produces the same HTTP request. The current code was generated by https://mholt.github.io/curl-to-go/.)
Can anyone help me understand why these two requests produce different results?
Sample request and response against a real server:
[#xxxx ~]# curl -v -i 10.103.118.178:40000/ready
* About to connect() to 10.103.118.178 port 40000 (#0)
* Trying 10.103.118.178...
* Connected to 10.103.118.178 (10.103.118.178) port 40000 (#0)
> GET /ready HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.103.118.178:40000
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< date: Fri, 25 Dec 2020 01:43:05 GMT
date: Fri, 25 Dec 2020 01:43:05 GMT
< content-length: 2
content-length: 2
< content-type: text/plain; charset=utf-8
content-type: text/plain; charset=utf-8
<
* Connection #0 to host 10.103.118.178 left intact
ok
You use -i to tell curl to include response headers in the output. However, there is no valid HTTP response header anywhere in your curl output, so your server is probably malfunctioning and did not send a valid HTTP response. The Go program correctly reports this: it tried to interpret the reply as an HTTP response, but where a status code should appear it found -1, which is not a valid HTTP status code.
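If it helps, here is a minimal, self-contained Go sketch that reproduces the error. It is an illustration, not the asker's setup: the listener and the reply bytes are assumptions modelled on the "1 -1 0" output above. curl will happily print such raw bytes, but net/http insists on a valid status line:
package main

import (
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Toy server whose reply is not a valid HTTP status line.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	go func() {
		conn, _ := ln.Accept()
		conn.Write([]byte("1 -1 0\n")) // mimics the body curl printed above
		conn.Close()
	}()

	// net/http tries to parse "1 -1 0" as a status line and fails with:
	// net/http: HTTP/1.x transport connection broken: malformed HTTP status code "-1"
	_, err = http.Get("http://" + ln.Addr().String() + "/hallo")
	fmt.Println(err)
}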

Prevent squid from changing http header

I have a problem with Squid version 3.5.21 and I hope someone can help me. Unfortunately, I wasn't able to find a proper solution.
When I access http://www.google.de via curl, without Squid, the HTTP exchange looks like this:
curl -v http://www.google.de
* Rebuilt URL to: http://www.google.de/
* Hostname was NOT found in DNS cache
* Trying 172.217.17.67...
* Connected to www.google.de (172.217.17.67) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.37.0
> Host: www.google.de
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Mon, 17 Sep 2018 12:32:19 GMT
...
When using squid it looks like this:
curl -x http://127.0.0.1:3128 -v http://www.google.de
* Rebuilt URL to: http://www.google.de/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0)
> GET http://www.google.de/ HTTP/1.1
> User-Agent: curl/7.37.0
> Host: www.google.de
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 OK
< Date: Mon, 17 Sep 2018 12:19:22 GMT
...
The GET request line seems to be rewritten. Some services on the internet block requests that carry the full URL in the GET request line.
How can I configure Squid (if that is possible) to leave the GET line untouched?
Thanks in advance!!
Regards, Matthias
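One observation from the two traces above, as protocol background rather than a Squid setting: the > lines are what curl itself sends, and an HTTP client is required to use the absolute-form request line when talking to a proxy, versus the origin-form when talking to the origin server directly:
GET / HTTP/1.1                        (origin-form, sent straight to www.google.de)
GET http://www.google.de/ HTTP/1.1    (absolute-form, sent to the proxy)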

Meaning of curl response

I have downloaded the official Consul image and I am running it behind an nginx load balancer.
When I send any HTTP request using curl, for example
curl my-consul-http-endpoint:8500/v1/catalog/nodes
I get the following back:
* Trying 172.29.225.62...
* Connected to my-consul-http-endpoint.com (172.29.225.62) port 80 (#0)
> GET /v1/session/list HTTP/1.1
> Host: my-consul-http-endpoint
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< Server: nginx
< Date: Tue, 19 Jul 2016 15:32:55 GMT
< Content-Type: text/html
< Content-Length: 18633
< Connection: keep-alive
< ETag: "56f72a9b-48c9"
* Connection #0 to host my-consul-http-endpoint left intact
What does the response suggest? Did I get connected to the Consul server? Or did the server return an error, and in turn the nginx load balancer returned a 502?
Deploy the Consul client and connect to it at http://localhost:8500.
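A 502 with Server: nginx and an HTML error body is almost certainly nginx's own error page: nginx accepted your request but could not get a valid response from the upstream, so the reply never came from Consul. With a local agent deployed as suggested, you can query it past the load balancer (this assumes an agent listening on the default port):
curl http://localhost:8500/v1/catalog/nodes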

POST from REST console perceived as GET by server

I'm using both the Chrome REST API console and Postman to send a POST request to my server (running nginx and Symfony2).
It's a very simple request: just a POST to a URL with an empty body. If the request is made from another server via HTTP, it registers as POST. POSTing from the API consoles registers as GET in my nginx access logs and returns a 405 Method Not Allowed.
If I use curl, I initially get a 301 Moved Permanently, so I have to use -L to follow redirects. I'm not sure if this is standard Symfony behaviour or if it is affecting the request.
I've found some problems with the curl request, but am unsure how to resolve them.
$ curl -v -L -d "1EepG1a63X" xxx.io/api/convert_mov/
* Trying xx.76.9.82...
* Connected to xxx.io (xx.76.9.82) port 80 (#0)
> POST /api/convert_mov/ HTTP/1.1
> Host: xxx.io
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 301 Moved Permanently
< Server: nginx/1.6.2
< Date: Tue, 15 Sep 2015 09:00:43 GMT
< Content-Type: text/html
< Content-Length: 184
< Connection: keep-alive
< Location: https://xxx.io/api/convert_mov/
<
* Ignoring the response-body
* Connection #0 to host xxx.io left intact
* Issue another request to this URL: 'https://xxx.io/api/convert_mov/'
* Switch from POST to GET
* Found bundle for host xxx.io: 0x7fcad9c14e70
* Trying xx.76.9.82...
* Connected to xxx.io (xx.76.9.82) port 443 (#1)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: xxx.io
* Server certificate: DigiCert SHA2 Secure Server CA
* Server certificate: DigiCert Global Root CA
> GET /api/convert_mov/ HTTP/1.1
> Host: xxx.io
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.6.2
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-Powered-By: PHP/5.5.25
< Cache-Control: no-cache
< Date: Tue, 15 Sep 2015 09:00:43 GMT
If you look closer, you will see your request goes out over plain HTTP. Your server then sends a redirect to your HTTPS site, and a 301 redirect does not preserve the request method; curl even says so itself ("Switch from POST to GET"). You MUST issue all your requests against HTTPS directly.
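Two ways to verify this from the command line (using the redacted URL from the question): request HTTPS directly so no redirect happens, or use curl's --post301 option to keep the POST method when following a 301:
$ curl -v -d "1EepG1a63X" https://xxx.io/api/convert_mov/
$ curl -v -L --post301 -d "1EepG1a63X" http://xxx.io/api/convert_mov/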

cURL receives empty body response from Nginx server

I'm trying to fetch HTTP content with cURL, but I only get an empty body in the reply:
[root@www ~]# curl -v http://www.existingdomain.com/
* About to connect() to www.existingdomain.com port 80 (#0)
* Trying 95.211.256.257... connected
* Connected to www.existingdomain.com (95.211.256.257) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.21.0 (x86_64-redhat-linux-gnu) libcurl/7.21.0 NSS/3.12.8.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.4
> Host: www.existingdomain.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/0.8.53
< Date: Sat, 28 May 2011 15:56:23 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
< X-Powered-By: PHP/5.3.3-0.dotdeb.1
<
* Connection #0 to host www.existingdomain.com left intact
* Closing connection #0
If I change the URL to another domain, like www.google.com, I get the content.
How is this possible? And how can I fetch the content?
The server is free to send the client whatever it likes, including nothing. While this is not exactly nice, there's little the client can do about it. You could
check the server logs to see if there is some problem that makes the server so quiet (given the server is under your control), or
try another client to see whether the server simply does not like talking to curl. You can then configure curl to mimic a regular web browser, if that helps.
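A quick way to test that last suggestion (the User-Agent string below is only an example): resend the request with curl's -A option carrying a browser-like identity and see whether the body appears:
curl -v -A "Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0" http://www.existingdomain.com/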
