I was surprised that I couldn't find any information on logging the request protocol in an nginx access log. I usually share a server block between HTTP (80) and HTTPS (443) traffic and use a combined access log for both. I'd like each line in the access log to indicate whether the request was over HTTP or HTTPS.
Is this possible, or do I need to use a separate server block for HTTPS and specify a separate access log for SSL?
It's a bit hidden in the docs, but you can use any of the common variables, including $scheme.
You can combine server blocks like:
server {
    listen 80;
    listen 443 default_server ssl;
    # other directives
}
> nginx http/https config docs
To customize the log file output, you can use the log_format directive to define your own access log format.
> nginx access_log docs
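For example, a minimal sketch that appends $scheme to the standard combined format (the format name combined_ssl and the log path are my own illustrative choices):

```nginx
http {
    # "combined_ssl" is a hypothetical name; $scheme logs "http" or "https"
    log_format combined_ssl '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $body_bytes_sent '
                            '"$http_referer" "$http_user_agent" $scheme';

    server {
        listen 80;
        listen 443 ssl;
        access_log /var/log/nginx/access.log combined_ssl;
        # other directives
    }
}
```

Each log line then ends in either http or https, depending on how the request arrived.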
I am trying to host a website that only allows users to enter via http://website.com:1234 using HTTP/2 directly, and does not allow any user to enter via HTTP/0.9, HTTP/1.0, or HTTP/1.1, or to upgrade from any of them. Additionally, I do not want my website to serve HTTPS.
I have tried to configure using:
server {
    listen 1234 ssl http2;
    listen [::]:1234 ssl http2;
    etc
    etc
}
as well as
server {
    listen 1234 http2;
    listen [::]:1234 http2;
    etc
    etc
}
It does not work the way I want it to. Could anyone help me?
Sorry, but this is not technically possible.
No browser currently supports HTTP/2 without encryption, and according to the spec, unencrypted HTTP/2 connections require an upgrade from HTTP/1.1.
See here and here for more info.
Edit:
So there are a lot of problems with Google OAuth, not just the ones in the original question. But I'll still leave it at the bottom as an example of one of them.
New question:
The Google OAuth API keeps showing the error "Not a valid origin for the client: <some_url>" even when I have added the site to Authorized JavaScript origins.
This mostly concerns localhost and public IPs that don't have a domain name yet.
Original question:
Hi, I am getting an error when trying to sign in using the Google OAuth 2.0 API in the browser.
The error says:
{
    error: "idpiframe_initialization_failed",
    details: "Not a valid origin for the client: https://localhost has not been whitelisted for client ID <CLIENT_ID>.apps.googleusercontent.com. Please go to https://console.developers.google.com/ and whitelist this origin for your project's client ID."
}
Where <CLIENT_ID> is the actual client ID provided by the Google API.
I have all these origins enabled:
I have all these ports open on my nginx server:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 8000 default_server;
    listen [::]:8000 default_server;
    listen 5000 default_server;
    listen [::]:5000 default_server;

    # SSL configuration
    #
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    listen 4343 ssl default_server;
    listen [::]:4343 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    include snippets/snakeoil.conf;
}
All of them point to the same website and all of them work except:
http://localhost
http://localhost:80
https://localhost
https://localhost:443
With the other ports I manage to sign in, but not with those essential ports.
I have seen many similar questions answered with "delete the cache", but the same behavior can be seen on Vivaldi (Chromium-based) and Firefox (used for the first time just before asking this question).
I think I found the problem.
Every time I forgot to put the location into the whitelisted origins, my browser cache remembered that this origin was not in the whitelist, and even after adding the location to the whitelist the error kept showing.
I had to clear the browsing cache (I also deleted the application cache just in case) to get the correct behavior back.
I'll be testing my site in a different browser from now on so I can delete browsing data freely.
Also, for some reason you cannot have IPs in the whitelist, so you have to use some shenanigans, for example xip.io, a wildcard DNS service: when you access <some_ip>.xip.io:<some_port>, it resolves to <some_ip>:<some_port> (you can omit :<some_port> so that it resolves to just <some_ip>).
And localhost was just a big mess.
Also, I didn't manage to make ports 80 and 443 work on localhost, but I think it worked on the public server with xip.io.
So I recommend:
ideally, use a domain name
otherwise, avoid raw public IPs with something like xip.io
on localhost, use 127.0.0.1 instead just in case, and also use xip.io
use a burner browser (or private mode if it works for you, but I wanted to test in a default environment) for frequent cache clearing if you mess up (and you probably will)
just use alternative ports for localhost
My issue is that I have a web server running on port 80. I want to use an nginx proxy (not the ingress) to redirect the connection. I want to use the link www.example.com. How should I tell nginx to proxy the connection to www.example.com (which is a different app)? I tried using a Service with a load balancer, but it changes the hostname (to some AWS link); I need it to be exactly www.example.com.
If I understood your request correctly, you may just use the return directive in your nginx config:
server {
    listen 80;
    server_name www.some-service.com;
    return 301 $scheme://www.example.com$request_uri;
}
If you need something more complex, check this doc or this one.
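If the goal is to proxy rather than redirect, a minimal sketch along these lines may be closer to what the question asks; the upstream address 127.0.0.1:8080 is an illustrative placeholder for wherever the other app listens:

```nginx
server {
    listen 80;
    server_name www.example.com;

    location / {
        # Forward requests to the backend app, preserving the client's Host header
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Unlike the return 301 approach, the hostname shown in the browser stays www.example.com because nginx itself fetches the backend response.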
I'm using the config below in nginx to proxy an RDP connection:
server {
    listen 80;
    server_name domain.com;
    location / {
        proxy_pass http://192.168.0.100:3389;
    }
}
but the connection doesn't go through. My guess is that the problem is the http in proxy_pass. Googling "Nginx RDP" didn't yield much.
Does anyone know if this is possible, and if yes, how?
Well, actually, you are right that the http is the problem, but not exactly the one in your code block. Let's explain it a bit:
In your nginx.conf file you have something similar to this:
http {
    ...
    ...
    ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
So everything you write in your conf files is inside this http block/scope. But RDP is not HTTP; it is a different protocol.
The only workaround I know of for nginx to handle this is to work at the TCP level.
So inside your nginx.conf, outside the http block, you have to declare a stream block like this:
stream {
    # ...
    server {
        listen 80;
        proxy_pass 192.168.0.100:3389;
    }
}
The above configuration just proxies to your backend at the TCP layer, with a cost, of course. As you may notice, the server_name directive is missing; you can't use it in the stream scope, and you also lose all the logging functionality that comes with the http level. Note too that nginx must be built with the stream module for this to work.
For more info on this topic, check the docs.
For anyone who is looking to load-balance RDP connections using nginx, here is what I did:
Configure nginx as you normally would, to reroute HTTP(S) traffic to your desired server.
On that server, install Myrtille (it needs IIS and .NET 4.5) and you'll be able to RDP into your server from a browser!
I want to use 2 server blocks.
The first is:
server {
    listen 443 ssl http2 fastopen=3 reuseport;
    server_name a.example.xyz;
    include server_safe.conf;
    root /home/www/blog/;
}
The second is:
server {
    listen 443 ssl http2;
    server_name b.example.xyz;
    include server_safe.conf;
}
What I want:
I want the server_name check to be enforced; that is, if someone uses c.example.xyz to visit my website (a.example.xyz, b.example.xyz, and c.example.xyz all resolve to the same IP), the server should block the c.example.xyz request because it is not in any server_name.
However, if I enter https://c.example.xyz, nginx will still accept the request and respond as a.example.xyz.
I know HTTP/2 has no Host header; it has an :authority pseudo-header instead.
My question is: how can I reject any other request? I only want to accept requests whose host (:authority) is a.example.xyz or b.example.xyz.
The problem is that the first server block is used by default if no other server_name matches.
Therefore, to achieve what you want, you need to create a default block before the other two and have it block the request, redirect, or show an error page.
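A minimal sketch of such a catch-all block, assuming nginx still has some certificate to present (the certificate paths are illustrative; the TLS handshake completes before nginx can reject anything):

```nginx
# Catch-all: matches any Host/:authority not claimed by another server block
server {
    listen 443 ssl http2 default_server;
    server_name _;

    # Some certificate is still required to complete the TLS handshake
    ssl_certificate     /etc/ssl/certs/fallback.pem;
    ssl_certificate_key /etc/ssl/private/fallback.key;

    # 444 is nginx-specific: close the connection without sending a response
    return 444;
}
```

Requests for c.example.xyz (and clients without SNI) land here and have their connection dropped, while a.example.xyz and b.example.xyz are still served by their own blocks.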
The downsides of this are:
Unless you have HTTPS certificates covering all the domain names (or a wildcard cert that does), visitors will get a certificate error when going to the https version of your site and hitting this default config. Though this would happen under your current setup anyway; there is no way, AFAIK, to send a block message before the HTTPS negotiation happens.
Older clients that don't support SNI (primarily Windows XP) will go to the default config, whereas previously they would have gotten through to server A, as it was the default (though not to server B).
The alternative is to write a redirect rule based on the hostname provided. I'm not 100% sure how to do this in nginx, to be honest, but if it's not possible by default, it is possible with ModSecurity. Again, it will only take effect after the HTTPS negotiation has happened, so it still leaves you with a potential incorrect-cert problem.