Cert/client authentication in Nginx with multiple clients using different certs

I'm working on a small Flask-based service behind Nginx that serves some content over HTTPS. It will do so using two-way certificate authentication - which I understand how to do with Nginx - but users must log in and upload their own certificate that will be used for the auth piece.
So the scenario is:
User has a server that generates a cert that is used for client authentication.
They log into the service to upload that cert for their server.
Systems that pull the cert from the user's server can now reach an endpoint on my service that serves the content and authenticates using the cert.
I can't find anything in the Nginx docs that says I can have a single keystore or directory that Nginx looks at to match the cert for an incoming request. I know I can configure this 'per-server' in Nginx.
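(For reference, the per-server setup I mean looks roughly like this - a minimal sketch, with placeholder paths and port:)

server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/ssl/server.crt;
    ssl_certificate_key    /etc/nginx/ssl/server.key;
    # trust exactly the cert (or CA) uploaded for this client
    ssl_client_certificate /etc/nginx/certs/client-a.crt;
    ssl_verify_client on;
    location / {
        proxy_pass http://127.0.0.1:5000;  # the Flask app
    }
}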
The idea I currently have is that I allow the web app to trigger a script that reads the Nginx conf file, inserts a new server entry on a specified port with the path to the uploaded cert, and then sends the HUP signal to reload Nginx.
I'm wondering if anyone in the community has done something similar to this before with Nginx or if they have a better solution for the odd scenario I'm presenting.

After a lot more research and reading some of the documentation on nginx.com, I found that I was way overcomplicating this.
Instead of modifying my configuration in sites-available I should be adding and removing config files from /etc/nginx/conf.d/ and then telling Nginx to reload by calling sudo nginx -s reload.
I'll have the app call a script to run the needed commands and add the script into the sudoers file for the www-data user.
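Sketched out, with a hypothetical script name, port, and paths (the sudoers line goes in via visudo):

#!/bin/sh
# /usr/local/bin/add_client_cert.sh CLIENT PORT - writes a per-client
# server block into conf.d, validates the config, then reloads
cat > "/etc/nginx/conf.d/$1.conf" <<EOF
server {
    listen $2 ssl;
    ssl_certificate        /etc/nginx/ssl/server.crt;
    ssl_certificate_key    /etc/nginx/ssl/server.key;
    ssl_client_certificate /etc/nginx/certs/$1.crt;
    ssl_verify_client on;
    location / { proxy_pass http://127.0.0.1:5000; }
}
EOF
nginx -t && nginx -s reload

# sudoers entry so www-data can run only this script as root:
# www-data ALL=(root) NOPASSWD: /usr/local/bin/add_client_cert.sh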

Related

NGINX certificate file update

I have NGINX running as a service on a Windows machine. I need to update the certificate file(s) periodically, but I'm not sure how NGINX handles certificates.
Does it read the file for each connection, in which case I can just overwrite the file, or does it cache the certificate at startup, in which case I need to gracefully stop NGINX and restart it?
I.e., what is the safe way to replace the certificate file and tell NGINX to start using it?
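(For what it's worth: as far as I know, nginx reads the certificate into memory when the configuration is loaded, not per connection, so overwriting the file alone has no effect. The graceful way is to overwrite the file and then reload, which picks up the new certificate without dropping connections:

nginx -s reload

On Windows, run nginx.exe -s reload from the directory nginx was installed in.)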

Add nginx config on the fly

I am building a multi-tenant application where requests to multiple domains have to be serviced by the same nginx server.
In order to achieve this, a script creates nginx configs for each domain after a registration process and adds them into a folder. The base nginx configuration has been setup to read configs from this folder.
If I manually restart nginx using sudo service nginx restart the application works fine. However, I am looking for this to happen without a manual intervention. i.e. I want my script to refresh nginx config and I want to do it without entering a sudo password again.
Can someone help me achieve this?
I would strongly discourage using service nginx restart to reload configs, especially in a multi-tenant environment. You risk interrupting in-flight requests, sessions, etc. That may be acceptable, but each tenant would have to make that determination and do so at an appropriate time. Nginx supports the command service nginx reload to address this concern. Reload allows configs to be reloaded without any downtime.
You could trigger the command in at least three ways:
1. Periodic cron job (easiest to set up, least efficient)
2. Manually triggering the command
3. Triggering through filesystem monitoring
Option 2 would be good if, for example, you had some web interface that allows a tenant to modify a config, so you know when to trigger the command (or to send a message to some other service that triggers it). You can avoid needing sudo's password by granting the web application the ability to run that single command as root: run visudo and add the line www-data ALL=(ALL) NOPASSWD: /usr/sbin/service nginx reload, where www-data is whatever user your application runs under. Then you can just execute the shell command through whatever API is appropriate for the language you are using.
Option 3 would be the most robust. There are several options for monitoring the filesystem, but I would recommend incron. Here's a guide to install and configure incron. You could monitor changes to whichever directory you store configs in and use service nginx reload in place of the example command in the tutorial.
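The incrontab entry would look something like this (a sketch; adjust the watched directory to wherever your configs live):

/etc/nginx/conf.d IN_CREATE,IN_DELETE,IN_CLOSE_WRITE /usr/sbin/service nginx reload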

Accessing Nexus from Eclipse/M2E through httpd with LDAP requirement

Recently, I configured a Nexus repo on a corporate server at https://mycorporateserver.corporation.com/nexus/.
The way "its always been done" is to put our "apps" on the server and use apache httpd to serve the pages and manage access using ldap.
Nexus is configured for anonymous access, HTTPS, localhost only (all works fine). Then, we used Apache httpd to serve that Nexus page/URI to others using ProxyPass and ProxyPassReverse (per the instructions in Sonatype's documentation).
The catch is that the httpd configuration requires LDAP. So, if I hit the given Nexus URI from a web browser, the browser asks for my corporate login. I log in with my user name and password and can view the repository as an anonymous user just fine.
I did not configure Nexus for LDAP; Nexus provides me read-only anonymous access combined with the ability to log in as an admin from the login menu.
Great. The problem (not surprisingly) is that when Eclipse/M2E tries to contact the Nexus repository, I get:
"could not transfer artifact 'myartifact' from/to nexus (https://mycorporateserver.corporation.com/nexus/): handshake alert."
In my settings.xml, I included
<servers>
  <server>
    <id>tried many different versions of the server name including full URI</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
but that doesn't seem to work - which I think makes sense, since I'm not trying to log in to Nexus but rather to supply my credentials to LDAP(?).
In M2E/Eclipse, is there a way to provide the needed LDAP information?
Is it better to not let httpd manage access but configure Nexus to handle everything LDAP? Is there a better/different way to configure Nexus/httpd/LDAP/Eclipse to solve the problem?
Thanks for all pointers and guidance!
"could not transfer artifact 'myartifact' from/to nexus
(https://mycorporateserver.corporation.com/nexus/): handshake alert."
That's an SSL handshake problem: the Java running Eclipse does not consider the certificate installed on Nexus to be valid. This is almost certainly because either:
The certificate is self-signed, or
The certificate has been signed by a private certificate signing authority which is not in the truststore of the Java running Eclipse.
Either way, the workaround is to import the certificate installed on Nexus into the truststore of the Java running Eclipse.
See here for more information:
https://support.sonatype.com/hc/en-us/articles/213464948-How-to-trust-the-SSL-certificate-issued-by-the-HTTP-proxy-server-in-Nexus
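For reference, the import looks roughly like this (a sketch; nexus.crt is a placeholder for the exported server certificate, the cacerts path varies by JDK layout, and changeit is the default truststore password):

keytool -importcert -trustcacerts -alias nexus \
    -file nexus.crt \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit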
Ultimately, as I understand it, it was a mismatch between how the VirtualHost and ServerName were defined in the apache httpd configuration.
https://mycorporateserver.corporation.com/nexus/ was the ServerName but the VirtualHost was defined with the ip and port https://mycorporateserver.corporation.com:port.
Original
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com/nexus/
...ldap and proxy pass configs
</VirtualHost>
Since we have more than one virtual host with this IP and port combination, the server looks further into the configuration and picks the proper virtual host by matching the ServerName. Since no ServerName matched what the clients sent, the handshake error occurred.
https://httpd.apache.org/docs/current/vhosts/name-based.html
Changing ServerName in the httpd conf to include the port solved the handshake error.
Final
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com:port/nexus/
...ldap and proxy pass configs
</VirtualHost>
(I'm by no means an Apache httpd expert; I still want to find out if there is a way to do all this without showing the port in the URL.)
Then, when sending a request from Eclipse/M2E to the server, the response was "Unauthorized"
Adding the nexus server plus username and password to settings.xml solved the authorization problem and all worked great!
<servers>
  <server>
    <id>nexus</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
To ensure passwords were not stored in plain text, instructions at this Maven site were used to create encrypted passwords: https://maven.apache.org/guides/mini/guide-encryption.html
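For reference, the short version of that guide (recent Maven versions prompt for the password if you don't pass it on the command line):

mvn --encrypt-master-password
(store the output in ~/.m2/settings-security.xml inside <settingsSecurity><master>...</master></settingsSecurity>)
mvn --encrypt-password
(use the output as the <password> value in settings.xml)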
In hindsight, the question probably could have been asked better/differently but I didn't yet know what I learned today.

IMAP Proxy that can connect to multiple IMAP servers

What I am trying to achieve is a central webmail client that I can use in an ISP environment, but one that has the capability to connect to multiple mail servers.
I have now been looking at Perdition, NGINX and Dovecot.
But most of the articles have not been updated for a very long time.
The one that I am really looking at is the NGINX IMAP proxy, as it can almost do everything I require.
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly, the issue I have is that you can no longer compile NGINX from source with those flags.
And secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl, does not give detailed information about the updated project.
So all I am trying to achieve is to have one webmail server that can connect to any one of my mail servers, where each user's location resides in a database. But the location is a hostname, not an IP.
You can tell Nginx to do auth_http with any http URL you set up.
You don't need an embedded perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header based protocol Nginx uses.
You can implement the protocol described there in any language - a CGI script under Apache, if you like.
You do the auth and database query and return the appropriate backend servers in this script.
(Personally, I use a python + WSGI server setup.)
Say you set up your script on apache at http://localhost:9000/cgi-bin/nginx_auth.py
In your Nginx config, you use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py;
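A bare-bones sketch of such a script as a WSGI app (Python here, since that's what I use; lookup_backend and its query are hypothetical stand-ins for your database lookup). One detail that matters for your hostname-in-a-database setup: Auth-Server in the response must be an IP address, so the script has to resolve the hostname itself:

import socket

def lookup_backend(user):
    # hypothetical helper: fetch the backend IMAP hostname for this
    # user from your database, e.g. SELECT mailhost FROM users WHERE ...
    return "imap1.example.net"

def application(environ, start_response):
    # nginx passes the credentials as Auth-User / Auth-Pass request headers
    user = environ.get("HTTP_AUTH_USER", "")
    # ... verify environ.get("HTTP_AUTH_PASS") against your database here ...
    # Auth-Server must be an IP, so resolve the stored hostname first
    ip = socket.gethostbyname(lookup_backend(user))
    start_response("200 OK", [
        ("Auth-Status", "OK"),  # anything else is treated as an error message
        ("Auth-Server", ip),
        ("Auth-Port", "143"),   # the backend's IMAP port
    ])
    return [b""]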

can I configure nginx any way other than through the normal nginx.conf file

Is there any way I can configure nginx other than through the normal nginx.conf file?
Like an XML configuration, or memcache, or any other way?
My objective is to add/remove upstreams to the configuration dynamically. Nginx doesn't seem to have a direct solution for this, so I was planning to play with the configuration file, but I am finding it very difficult and error-prone to modify the file through scripts/programs.
Any suggestions ?
No, you can't. The only way to "dynamically" reconfigure nginx is to process the config files in external software and then reload the server. Nor can you "program" the config as you can in Apache. The nginx config is mostly a static thing, which is praised for its performance.
Source: I needed it too and did some research.
Edit: I have a "supervising" tool installed on my hosts that monitors load, clusters, and such. I've ended up implementing the upstream scaling through it. Whenever a new upstream is ready, it notifies my "supervisor" on all web servers. The "supervisors" then query the new upstream for the "virtual hosts" it serves and add all of them to their context on the nginx host, then just nginx -t && nginx -s reload everything. This is for nginx talking FastCGI to php-fpm backends.
Edit 2: I have many server blocks for different server_names (sites), each with an upstream associated with it on another host (or hosts). For each site I have an include /path/to/where/my/upstream/configs/are/us-<unique_site_id>.conf line. The us-<unique_site_id>.conf is generated when the server block is created, populated with the existing upstream(s) information. When there are changes in the upstream pool or the site configuration, the file is rewritten to reflect them.
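Sketched out, the layout looks something like this (site id 123 and the backend addresses are made-up examples; I've put the include at the http level in the sketch, since upstream blocks are only valid there):

# in nginx.conf, http context: pick up all generated upstream files
include /path/to/where/my/upstream/configs/are/us-*.conf;

# us-123.conf, rewritten whenever the pool changes
upstream us_123 {
    server 10.0.0.11:9000;
    server 10.0.0.12:9000;
}

# server block for the site
server {
    server_name site123.example.com;
    location ~ \.php$ {
        fastcgi_pass us_123;
        include fastcgi_params;
    }
}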
