lighttpd not sending via HTTP/2 - http

I'm on Raspbian Stable on a Raspberry Pi 2, and I've compiled lighttpd 1.4.59 myself, which is supposed to have HTTP/2 enabled by default.
After installing it, everything seems ok:
pi@Raspi:~ $ lighttpd -V
lighttpd/1.4.59 - a light and fast webserver
Event Handlers:
+ select (generic)
+ poll (Unix)
+ epoll (Linux)
- /dev/poll (Solaris)
- eventports (Solaris)
- kqueue (FreeBSD)
- libev (generic)
Network handler:
+ linux-sendfile
- freebsd-sendfile
- darwin-sendfile
- solaris-sendfilev
+ writev
+ write
- mmap support
Features:
+ IPv6 support
+ zlib support
- zstd support
- bzip2 support
- brotli support
+ crypt support
- OpenSSL support
- mbedTLS support
- NSS crypto support
- GnuTLS support
- WolfSSL support
- Nettle support
+ PCRE support
- MySQL support
- PgSQL support
- DBI support
- Kerberos support
- LDAP support
- PAM support
- memcached support
- FAM support
- LUA support
- xml support
- SQLite support
- GDBM support
But it seems that the pages are still transmitted via "http/1.1". I was expecting "h2" when getting a simple PHP page from the server:
HTTP/1.1 200 OK
Content-type: text/html; charset=UTF-8
Content-Length: 99966
Date: Tue, 09 Feb 2021 23:51:52 GMT
Server: lighttpd/1.4.59

@TheUnexpected: if your client makes an HTTP/1.1 request, then lighttpd will handle it as an HTTP/1.1 request. To see whether your client is actually making HTTP/1.1 requests or whether HTTP/2 has been negotiated, you can use mod_accesslog to log requests to an access log, or set debug.log-request-header = "enable" to have lighttpd log them to the error log.
See man curl, specifically the --http2 and --http2-prior-knowledge command-line options, which apply even when your target is http rather than https.
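For example, a quick check from the command line (a minimal sketch; the hostname raspi.local is an assumption, substitute your own server):
# --http2 starts as HTTP/1.1 and offers an upgrade to h2c;
# --http2-prior-knowledge speaks HTTP/2 from the first byte, which is what you want on plain http.
curl -sS -o /dev/null -w '%{http_version}\n' --http2 http://raspi.local/
curl -sS -o /dev/null -w '%{http_version}\n' --http2-prior-knowledge http://raspi.local/
# Prints "2" if HTTP/2 was actually used, "1.1" if the request stayed on HTTP/1.1.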

Related

HTTP2 conflicting logs between puma and nginx

I'm confused by the different logs: one reports HTTP/2, the other HTTP/1.0.
I'm not sure which config file to cite, or whether it's normal for Puma's stdout log to show 1.0 as the HTTP version. Thank you.
nginx
==> /var/log/nginx/access.log <==
[10/Oct/2021:05:45:15 +0000] "GET /users/Ovbzv/quickrates/o5l05/payment/YabQ0/pending HTTP/2.0" 200
puma
==> app/log/stdout.log <==
[5626] 2604:a880:800:10::637:b005 - - [10/Oct/2021:05:45:15 +0000] "GET /users/Ovbzv/quickrates/o5l05/payment/YabQ0/pending HTTP/1.0" 200 - 0.0826
You are looking at two different connections:
the connection between the client and nginx (the reverse proxy); and
the connection between nginx and Puma;
In this specific case, each of these connections is using a different HTTP version, as indicated by the logs.
This is possible because HTTP/2 was specifically designed with some backwards compatibility in mind, allowing HTTP/2 to be converted to HTTP/1 when needed (and the same goes for converting HTTP/1 to HTTP/2).
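You can verify the client-side leg independently with curl (a sketch, assuming the site is served over TLS at example.com; %{http_version} prints what was actually negotiated):
curl -sS -o /dev/null -w 'client leg: HTTP/%{http_version}\n' --http2 https://example.com/
The nginx-to-Puma leg is a separate connection: by default nginx speaks HTTP/1.0 to its upstream (HTTP/1.1 only if proxy_http_version 1.1 is set in its config), which matches the version in Puma's log line.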

Getting 404 error if requesting a page through proxy, but 200 if connecting directly

I am developing an HTTP proxy in Java. I resend all the data from client to server without touching it, but for some URLs (for example this) the server returns a 404 error when I connect through my proxy.
The requested URL uses Varnish caching, so that might be the root of the problem. I cannot reconfigure it - it is not mine.
If I request that URL directly with a browser, the server returns 200 and the image is shown correctly.
I am stuck because I don't even know what to read or how to phrase a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or a broken one (or only an X-Host header exists). Also note that the proxy application performs its own DNS lookup, which might yield a different IP address than the one your local computer (where you issued the original request) resolved.
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
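If you want to see exactly what your proxy puts on the wire, one simple trick is to point it at a dummy listener and dump the request it emits (a sketch; the port is arbitrary, and some netcat variants need nc -l -p 8080 instead):
nc -l 8080
Then issue a request through the proxy and inspect the Host: line in what gets printed.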

Will I be able to use CURL to get HTTP/2 headers?

Right now I use curl -I to retrieve headers.
With the upcoming adoption of HTTP/2 by browsers, will sites serve headers differently (using HPACK) in a way that renders my use of the curl command ineffective?
Yes, you can use curl to see and send HTTP headers with HTTP/2 just as you do with HTTP/1.
curl supports HTTP/2, and it is implemented as a sort of translation layer: curl shows, and "pretends", that headers work HTTP/1.1 style. It shows headers as text and delivers headers in callbacks just as it does for HTTP/1.1. We made it this way to give scripts and applications a very smooth and basically invisible transition path to HTTP/2 with curl.
Internally that is of course done by decompressing received headers before showing them, and by showing outgoing headers as text before compressing them for sending.
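In other words, the familiar invocation keeps working against an HTTP/2 server (nghttp2.org is used here simply because it is known to serve HTTP/2):
curl -sI --http2 https://nghttp2.org/
The response headers print as plain text, exactly as they would for an HTTP/1.1 server, even though they travelled HPACK-compressed on the wire.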
I believe it depends on the curl version. HTTP/2 support was added in curl 7.36.x IIRC? Not all distros will have that version.
This is with curl 7.41.0 over HTTP/2 against https://google.com
curl --http2 -I -v https://google.com
* Rebuilt URL to: https://google.com/
* Trying 173.194.123.1...
* Connected to google.com (173.194.123.1) port 443 (#0)
* ALPN, offering h2-14, http/1.1
* ALPN, server accepted to use h2-14
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.google.com
* start date: 2015-03-11 16:13:43 GMT
* expire date: 2015-06-09 00:00:00 GMT
* subjectAltName: google.com matched
* issuer: C=US; O=Google Inc; CN=Google Internet Authority G2
* SSL certificate verify ok.
* Using HTTP2
Edit: correction, curl --http2 needs to be built against nghttp2 for it to work: https://nghttp2.org/
curl --version
curl 7.41.0 (x86_64-unknown-linux-gnu) libcurl/7.41.0 OpenSSL/1.0.2b zlib/1.2.8 nghttp2/0.7.8-DEV
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets
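A quick way to check whether your own curl build has the feature:
curl --version | grep -o HTTP2
# Prints "HTTP2" if the build supports it, nothing otherwise.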

How to differentiate request coming from command-line and browsers?

To check whether it is a CLI or HTTP request, in PHP the method php_sapi_name can be used; take a look here. I am trying to replicate that in the Apache conf file. The underlying idea is: if the request comes from the CLI, minimal info is served; if the request comes from a web browser, the user is redirected to a different location. Is this possible?
MY PSEUDO CODE:
IF (REQUEST_COMING_FROM_CLI) {
    ProxyPass / http://${IP_ADDR}:5000/
    ProxyPassReverse / http://${IP_ADDR}:5000/
} ELSE IF (REQUEST_COMING_FROM_WEB_BROWSERS) {
    ProxyPass / http://${IP_ADDR}:8585/welcome/
    ProxyPassReverse / http://${IP_ADDR}:8585/welcome/
}
Addition: cURL supports a host of different protocols, including HTTP, FTP & Telnet. Can Apache figure out whether the request is from a CLI tool or a browser?
As far as I know, there is no way to find the difference using Apache.
If a request from the command line is set up properly, Apache cannot tell the difference between command line and browser.
When you check it in PHP (using php_sapi_name, as you suggested), it only reports how PHP itself was invoked (CLI, Apache module, etc.), not where the HTTP request came from; see the one-liner below.
Using telnet on the command line, you can connect to Apache, set the required HTTP headers and send the request as if you were using a browser (only, the browser sets the headers for you).
So I do not think Apache can differentiate between console and browser.
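You can see the php_sapi_name behaviour for yourself from a shell (assuming the PHP CLI is installed):
php -r 'echo php_sapi_name(), PHP_EOL;'
# Always prints "cli" when run from a shell, regardless of any HTTP traffic.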
The only way to do this is to test the User-Agent sent in the header of the request, but this information can easily be changed.
By default, every PHP HTTP request looks like this to the Apache server:
192.168.1.15 - - [01/Oct/2008:21:52:43 +1300] "GET / HTTP/1.0" 200 5194 "-" "-"
This information can easily be changed to look like a browser, for example using this:
ini_set('user_agent',
'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3');
The HTTP request will then look like this:
192.168.1.15 - - [01/Oct/2008:21:54:29 +1300] "GET / HTTP/1.0" 200 5193
"-" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3"
At this point, Apache will think the connection came from Firefox 3.0.3 on Windows.
So there is no exact way to get this information.
You can use a BrowserMatch directive if the CLI requests are not spoofing a real browser in the User-Agent header. Otherwise, as everyone else has said, there is no way to tell the difference.
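Here is a minimal sketch of that BrowserMatch route (the User-Agent pattern, config path and backend ports are assumptions carried over from the question's pseudo-code; mod_setenvif, mod_rewrite and mod_proxy_http must be enabled):
cat > /etc/apache2/conf-available/cli-split.conf <<'EOF'
# Tag requests whose User-Agent looks like a command-line tool
BrowserMatch "^(curl|Wget)" cli_client
RewriteEngine On
# Tagged requests go to the minimal backend ...
RewriteCond %{ENV:cli_client} =1
RewriteRule "^/(.*)$" "http://127.0.0.1:5000/$1" [P,L]
# ... everyone else to the welcome app
RewriteRule "^/(.*)$" "http://127.0.0.1:8585/welcome/$1" [P,L]
EOF
And the caveat in one line: any client can defeat the test, e.g. curl -A "Mozilla/5.0 ..." will be routed as a browser.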

HTTP streaming / chunked responses on Heroku with clojure

I'm making a clojure web app that streams data to clients using chunked HTTP responses. This works great when I run it locally using foreman, but doesn't work properly when I deploy it to Heroku.
A minimal example exhibiting this behaviour can be found on my github here. The frontend (in resources/index.html) performs an AJAX GET request and prints the response chunks as they arrive. The server uses http-kit to send a new chunk to connected clients every second. By design, the HTTP request never completes.
When the same code is deployed to Heroku, the HTTP connection is closed by the server immediately after the first chunk is sent. It seems to be Heroku's routing mesh that causes the disconnection.
This can also be seen by performing the GET request using curl:
$ curl -v http://arcane-headland-2284.herokuapp.com/stream
* About to connect() to arcane-headland-2284.herokuapp.com port 80 (#0)
* Trying 54.243.166.168...
* Adding handle: conn: 0x6c3be0
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x6c3be0) send_pipe: 1, recv_pipe: 0
* Connected to arcane-headland-2284.herokuapp.com (54.243.166.168) port 80 (#0)
> GET /stream HTTP/1.1
> User-Agent: curl/7.31.0
> Host: arcane-headland-2284.herokuapp.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Date: Sat, 17 Aug 2013 16:57:24 GMT
* Server http-kit is not blacklisted
< Server: http-kit
< transfer-encoding: chunked
< Connection: keep-alive
<
* transfer closed with outstanding read data remaining
* Closing connection 0
curl: (18) transfer closed with outstanding read data remaining
The time is currently Sat Aug 17 16:57:24 UTC 2013 <-- this is the first chunk
Can anybody suggest why this is happening? HTTP streaming is supposed to be supported on Heroku's Cedar stack. The fact that the code runs correctly under foreman suggests that something in Heroku's routing mesh is breaking it.
Live demo of the failing project: http://arcane-headland-2284.herokuapp.com/
This was due to a bug in http-kit which will be fixed shortly.
https://devcenter.heroku.com/articles/request-timeout may be relevant: "long-polling" requests like yours have to send data every 55 seconds or be terminated.
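To watch where the cut-off happens, you can timestamp the chunks as they arrive (the URL is the demo from the question; -N disables curl's output buffering so each chunk prints on arrival):
curl -N -sS http://arcane-headland-2284.herokuapp.com/stream | while read -r line; do echo "$(date +%T) $line"; done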
