HTTP/2 conflicting logs between Puma and nginx

I'm confused by the two different logs: one reports HTTP/2, the other HTTP/1.0.
I'm not sure which config file to cite, or whether it's normal for Puma's stdout log to show 1.0 as the HTTP version. Thank you.
nginx
==> /var/log/nginx/access.log <==
[10/Oct/2021:05:45:15 +0000] "GET /users/Ovbzv/quickrates/o5l05/payment/YabQ0/pending HTTP/2.0" 200
puma
==> app/log/stdout.log <==
[5626] 2604:a880:800:10::637:b005 - - [10/Oct/2021:05:45:15 +0000] "GET /users/Ovbzv/quickrates/o5l05/payment/YabQ0/pending HTTP/1.0" 200 - 0.0826

You are looking at two different connections:
the connection between the client and nginx (the reverse proxy); and
the connection between nginx and Puma.
In this specific case, each of these connections is using a different HTTP version, as the logs indicate.
This is easily possible because HTTP/2 was specifically designed with backwards compatibility in mind, allowing a proxy to translate HTTP/2 to HTTP/1.x when needed (and HTTP/1.x back to HTTP/2).
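For illustration, here is a minimal sketch of the kind of nginx server block that produces exactly this split (the upstream address, port, and server name are assumptions, not taken from your config). nginx negotiates HTTP/2 with the browser on the listening socket, but proxy_pass talks HTTP/1.0 to the upstream by default unless proxy_http_version is raised, which is why Puma logs HTTP/1.0:
server {
    listen 443 ssl http2;                  # client <-> nginx: HTTP/2 negotiated here, logged by nginx
    server_name example.com;               # placeholder

    location / {
        proxy_pass http://127.0.0.1:3000;  # nginx <-> Puma: proxied as HTTP/1.0 by default
        # proxy_http_version 1.1;          # uncomment and Puma would log HTTP/1.1 instead
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Either way, both log lines are correct; they simply describe different hops.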

Related

Private Azure Load Balancer Returning 400 Response Using NGINX

I have a brand new Azure Load Balancer configured in private mode and VMSS (Single Server) configured with nginx and the default site. Any time I try to use the load balancer nginx returns a 400 response but if I use the server directly I get a 200 response.
Looking further at the access logs, I see this:
xxx.xxx.xxx.xxx - - [30/Jun/2021:17:51:48 +0000] "\x00" 400 166 "-" "-"
xxx.xxx.xxx.xxx - - [30/Jun/2021:17:51:51 +0000] "GET / HTTP/1.1" 304 0 "-" "{Browser Info ...}"
When using the load balancer, the path is \x00 instead of / - I'm not sure what is going on here or where to look.
This was caused by a Private Link service on the Load Balancer that was configured for Proxy Protocol v2 (TCP proxy v2); nginx was receiving the binary proxy protocol preamble and logging it as the \x00 request.
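If disabling that is not an option and you want to keep Proxy Protocol v2 on the Private Link service, nginx can instead be told to expect the preamble on the listening socket. A sketch, assuming the default site config (the CIDR and document root are placeholders):
server {
    listen 80 proxy_protocol;        # expect the PROXY v1/v2 preamble before the HTTP request
    set_real_ip_from 10.0.0.0/8;     # placeholder: subnet the load balancer / Private Link connects from
    real_ip_header proxy_protocol;   # restore the original client address in the access log
    location / {
        root /var/www/html;          # placeholder: the default site content
    }
}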

Unable to get nginx-vod-module plugin to work

This is my first time trying nginx-vod-module, or any video streaming for that matter.
I just want to play static mp4 videos that I place on the server, but via HLS instead of direct mp4 access. No actual live streaming.
Q1. Am I right in understanding that an mp4 video which I place locally on my server will automatically get broken down into segments for HLS?
My nginx installation is here: /opt/kaltura/nginx
The mp4 file is placed at /opt/kaltura/nginx/test/vid.mp4
In ../nginx/conf/server.conf, I have this:
location /hls/ {
    alias test/;
    vod hls;
    vod_bootstrap_segment_durations 2000;
    vod_bootstrap_segment_durations 2000;
    vod_bootstrap_segment_durations 2000;
    vod_bootstrap_segment_durations 4000;
    include /opt/kaltura/nginx/conf/cors.conf;
}
location / {
    root html;
}
Now, I am able to access the m3u8 file:
curl http://104.167xxxxx/hls/vid.mp4/index.m3u8
But when I try to open this file via VLC, I see these errors in errors.log:
2020/10/31 15:00:08 [error] 12749#0: *60 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-1-v1.ts HTTP/1.1", host: "104.167. ..."
2020/10/31 15:00:08 [error] 12752#0: *61 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-2-v1.ts HTTP/1.1", host: "104.167. ..."
2020/10/31 15:00:09 [error] 12749#0: *62 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-3-v1.ts HTTP/1.1", host: "104.167. ..."
2020/10/31 15:00:10 [error] 12751#0: *63 mp4_parser_validate_stsc_atom: zero entries, client: 49.207 ..., server: ubuntu, request: "GET /hls/vid.mp4/seg-4-v1.ts HTTP/1.1", host: "104.167. ..."
Q2: Is HTTPS a must for this to work?
Q3: I don't see any /hls/vid.mp4 folder created anywhere on the server. Do I have to run ffmpeg separately to create the HLS segments myself?
What am I doing wrong?
I'm no Kaltura expert, but hopefully this will help narrow down some of your issues:
A1: Yes, Kaltura will package a regular mp4 into transport-stream segments for HLS.
A2: No, this works over plain HTTP; I've run many tests over HTTP myself, and it does not require HTTPS.
A3: No, you don't need to run ffmpeg manually. I believe ffmpeg is a prerequisite, so it should be installed, but you do not need to chunk the mp4 yourself; the Kaltura plugin does this on the fly, which is also why no /hls/vid.mp4 folder appears on disk.
I've not seen the particular error message you posted, so I'm afraid I can't help with that.
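That said, one generic thing worth trying (purely an assumption about the source file, nothing Kaltura-specific) is re-muxing the mp4 with ffmpeg and pointing the module at the new file, to rule out an oddly structured mp4:
ffmpeg -i vid.mp4 -c copy -movflags +faststart vid_remuxed.mp4
If the re-muxed copy streams fine, the original file's atom layout was the problem rather than your nginx config.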

HAProxy 504 Timeout on Varnish Backends

I'm serving two websites through HAProxy and Varnish: a wiki site and a WordPress site. The wiki site works continuously and without problems, but the WordPress site shows a 504 error each time you reload the page.
If I spoof the WordPress site in my hosts file by using the IP of the Varnish server instead of HAProxy, the site comes back and works fine. It's only when WordPress is behind HAProxy that the site 504s.
I'd like to know how to turn on debug logging for HAProxy, and also maybe get some help solving this problem.
This is all that I see in the logs for haproxy:
Apr 3 20:29:18 lb1.example.com haproxy[18501]: 52.21.231.226:52845 [03/Apr/2016:20:29:15.318] varnish-cluster varnish-cluster/varnish1 0/0/0/2786/2786 200 626 - - --NR 2/2/1/1/0 0/0 "HEAD / HTTP/1.1"
Apr 3 20:29:28 lb1.example.com haproxy[18501]: 61.174.10.22:18645 [03/Apr/2016:20:29:09.522] varnish-cluster varnish-cluster/varnish1 0/0/0/18206/19039 404 101736 - - --VN 0/0/0/0/0 0/0 "GET /groups/ HTTP/1.0"
Apr 3 20:29:34 lb1.example.com haproxy[18501]: 61.174.10.22:26372 [03/Apr/2016:20:29:31.045] varnish-cluster varnish-cluster/varnish1 0/0/0/3048/3048 301 549 - - --VN 0/0/0/0/0 0/0 "GET /members/pzwkathi09454/activity HTTP/1.0"
Apr 3 20:29:54 lb1.example.com haproxy[18501]: 61.174.10.22:27761 [03/Apr/2016:20:29:34.879] varnish-cluster varnish-cluster/varnish1 0/0/0/-1/20003 504 194 - - sHVN 0/0/0/0/0 0/0 "GET /activity/ HTTP/1.0"
And this is my config:
global
    log 127.0.0.1 local2 debug
    user root
    group root

defaults
    log global
    retries 2
    timeout connect 12000
    timeout server 20000
    timeout client 20000

listen varnish-cluster 0.0.0.0:80
    mode http
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth admin:secret
    balance roundrobin
    option http-server-close
    timeout http-keep-alive 3000
    option forwardfor
    option httplog
    cookie PHPSESSID prefix
    server varnish1 xx.xx.xx.xx:80 cookie s1 check

listen mysql-master-cluster
    bind 0.0.0.0:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server mysql-master-1 xx.xx.xx.xx:3306 check
    server mysql-master-2 xx.xx.xx.xx:3306 check
I'd appreciate any advice you'd have in solving the 504 error with HAProxy!
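For what it's worth, the last log line already points somewhere: the termination flags sHVN and the timings 0/0/0/-1/20003 mean HAProxy hit its server-side timeout while still waiting for response headers, which lines up with timeout server 20000 in your defaults section. A sketch of raising that limit (whether 60 s is the right value is an assumption; the real question is why WordPress/Varnish takes more than 20 s to answer):
defaults
    log global
    retries 2
    timeout connect 12000
    timeout client  20000
    timeout server  60000   # was 20000; HAProxy was returning 504 after exactly 20 s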

CUDA_CLIP HTTP Header?

While watching some log files the other day, I saw a client suddenly start generating these error messages, and for the life of me I can't figure out what's causing them.
2012/03/06 13:16:56 [info] 14212#0: *2018230 client sent invalid header line:
"CUDA_CLIIP: 10.3.68.20" while reading client request headers, client: 72.162.16.3,
server: <oursever.com>, request: "GET /images/101431.jpg HTTP/1.0", host: "<ourhost>",
referrer: "http://<ourserver.com>/<valid_url>"
This went on for more than 30 minutes before I dropped that IP in iptables. FYI, this is an nginx install.
The only information I could find online was this page:
http://nemesis.te-home.net/Projects/AdvOR-Help/
You probably just need to turn underscores_in_headers on;
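For reference, that directive is valid in the http or server context; a minimal sketch (the server name and root are placeholders):
server {
    listen 80;
    server_name example.com;
    underscores_in_headers on;   # accept header names containing underscores, e.g. CUDA_CLIIP
    location / {
        root /usr/share/nginx/html;
    }
}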

How can I measure my (SAMP) server's bandwidth usage?

I'm running a Solaris server to serve PHP through Apache. What tools can I use to measure the bandwidth my server is currently using? I use Google analytics to measure traffic, but as far as I know, it ignores file size. I have a rough idea of the average size of the pages I serve, and can do a back-of-the-envelope calculation of my bandwidth usage by multiplying page views (from Google) by average page size, but I'm looking for a solution that is more rigorous and exact.
Also, I'm not trying to throttle anything, or implement usage caps or anything like that. I'd just like to measure the bandwidth usage, so I know what it is.
An example of what I'm after is the usage meter that Slicehost provides in their admin website for their users. They tell me (for another site I run) how much bandwidth I've used each month and also divide the usage for uploading and downloading. So, it seems like this data can be measured, and I'd like to be able to do it myself.
To put it simply, what is the conventional method for measuring the bandwidth usage of my server?
This depends on your setup. If you have a (near-)dedicated physical interface for your web server you could gather stats straight from the interface.
Methods to do this could include SNMP (try net-snmp) or "ifconfig", combined with RRDTool or simple logging to flat files.
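For example, with net-snmp running on the host, the per-interface byte counters can be read like this (a sketch; the community string and interface index are placeholders for your setup):
snmpget -v2c -c public localhost IF-MIB::ifInOctets.2 IF-MIB::ifOutOctets.2
Polling those counters on a schedule and graphing the deltas with RRDTool gives you usage over time.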
An alternative is using the Apache log, which could look like this:
192.168.101.155 - - [17/Apr/2005:20:39:19 -0700] "GET / HTTP/1.1" 200 1456
192.168.101.155 - - [17/Apr/2005:20:39:19 -0700] "GET /apache_pb.gif HTTP/1.1" 200 2326
192.168.101.155 - - [17/Apr/2005:20:39:19 -0700] "GET /favicon.ico HTTP/1.1" 404 303
192.168.101.155 - - [17/Apr/2005:20:39:42 -0700] "GET /index.html.ca HTTP/1.1" 200 1663
192.168.101.155 - - [17/Apr/2005:20:39:42 -0700] "GET /apache_pb.gif HTTP/1.1" 304 -
192.168.101.155 - - [17/Apr/2005:20:39:43 -0700] "GET /favicon.ico HTTP/1.1" 404 303
192.168.101.155 - - [17/Apr/2005:20:40:01 -0700] "GET /apache_pb.gif HTTP/1.1" 304 -
192.168.101.155 - - [17/Apr/2005:20:40:09 -0700] "GET /apache_pb.gift HTTP/1.1" 404 306
192.168.101.155 - - [17/Apr/2005:20:40:09 -0700] "GET /favicon.ico HTTP/1.1" 404 303
The last number is the number of bytes transferred, excluding headers(!). See the Apache log documentation.
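So one rough way to total it up straight from the logs is to sum that field (a sketch, assuming the common log format shown above, where the byte count is the last field and "-" means nothing was sent; the log path is a placeholder):
awk '$NF != "-" { sum += $NF } END { printf "%.1f MiB transferred\n", sum / 1048576 }' /var/log/apache/access_log
Run it over a month's worth of logs and you get the outbound HTTP payload for that month, excluding headers.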
I am just guessing, but I think the usual approach is to use the same tools and services that are used to deliver QoS (Quality of Service) features. Somewhere on the server itself, or on the network routers around it, there will be services enabled that measure the size of the packets flowing out of your server. These same services can be used to limit bandwidth for customers that need such limits enforced.
I have not heard of an application you can run on your server that measures bandwidth. I think it should be possible to create such an app, but that's not the usual way such measurements are collected. I suspect this answer will end up not being Solaris-specific.
