I have NGINX running as a service on a Windows machine. I need to update the certificate file(s) periodically, but I'm not sure how NGINX handles certificates.
Does it read the file for each connection, in which case I can just overwrite the file, or does it cache the certificate when started, in which case I need to gracefully stop NGINX and restart it?
I.e., what is the safe way to replace the certificate file and tell NGINX to start using it?
I'm working on a small Flask-based service behind Nginx that serves some content over HTTPS. It will do so using two-way certificate authentication - which I understand how to do with Nginx - but users must log in and upload their own certificate that will be used for the auth piece.
So the scenario is:
User has a server that generates a cert that is used for client authentication.
They log into the service to upload that cert for their server.
Systems that pull the cert from the user's server can now reach an endpoint on my service that serves the content and authenticates using the cert.
I can't find anything in the Nginx docs that says I can have a single keystore or directory that Nginx looks at to match the cert for an incoming request. I know I can configure this 'per-server' in Nginx.
The idea I currently have is that I allow the web app to trigger a script that reads the Nginx conf file, inserts a new server entry on a specified port with the path to the uploaded cert, and then sends the HUP signal to reload Nginx.
I'm wondering if anyone in the community has done something similar to this before with Nginx or if they have a better solution for the odd scenario I'm presenting.
After a lot more research and reading some of the documentation on nginx.com, I found that I was way overcomplicating this.
Instead of modifying my configuration in sites-available, I should be adding and removing config files in /etc/nginx/conf.d/ and then telling Nginx to reload by calling sudo nginx -s reload.
I'll have the app call a script to run the needed commands and add the script into the sudoers file for the www-data user.
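For illustration, a generated per-user file in conf.d could look something like this; the port, paths, and the user id in the file name are placeholders I'm making up here, not details of the real setup:

# /etc/nginx/conf.d/client-12345.conf -- written by the app when a cert is uploaded
server {
    listen 8443 ssl;

    # the service's own certificate and key
    ssl_certificate     /etc/nginx/certs/service.crt;
    ssl_certificate_key /etc/nginx/certs/service.key;

    # the certificate the user uploaded, used to verify their systems
    ssl_client_certificate /etc/nginx/client-certs/client-12345.crt;
    ssl_verify_client on;

    # hand verified requests to the Flask app
    location / {
        proxy_pass http://127.0.0.1:5000;
    }
}

The script the app calls then just needs to validate and reload: sudo nginx -t && sudo nginx -s reload.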
Recently, I configured a Nexus repo on a corporate server at https://mycorporateserver.corporation.com/nexus/.
The way "its always been done" is to put our "apps" on the server and use apache httpd to serve the pages and manage access using ldap.
Nexus is configured for anonymous access, https, localhost only (all works fine). Then, we used Apache httpd to serve that Nexus page/URI to others using proxypass and reverseproxypass (per instructions in sonatype's documentation).
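Roughly, the proxy part of the httpd config follows the usual pattern, something like this (8081 is the Nexus default port; treat the exact paths as illustrative):

ProxyPass /nexus/ http://localhost:8081/nexus/
ProxyPassReverse /nexus/ http://localhost:8081/nexus/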
The catch is that the httpd configuration requires LDAP. So, if I hit the given Nexus URI from a web browser, the browser asks for my corporate login. I log in with my username and password and can view the repository as an anonymous user just fine.
I did not configure Nexus for LDAP; Nexus gives me read-only anonymous access combined with the ability to log in as an admin from the login menu.
Great. The problem (not surprisingly) is that when Eclipse/M2E tries to contact the Nexus repository I get:
"could not transfer artifact 'myartifact' from/to nexus (https://mycorporateserver.corporation.com/nexus/): handshake alert."
In my settings.xml, I included
<servers>
  <server>
    <id>tried many different versions of the server name including full URI</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
but that doesn't seem to work - which I think makes sense, since I'm not trying to log in to Nexus but rather to supply my credentials to LDAP(?).
In M2E/Eclipse, is there a way to provide the needed LDAP information?
Is it better to not let httpd manage access, and instead configure Nexus to handle all of the LDAP itself? Is there a better/different way to configure Nexus/httpd/LDAP/Eclipse to solve the problem?
Thanks for all pointers and guidance!
"could not transfer artifact 'myartifact' from/to nexus
(https://mycorporateserver.corporation.com/nexus/): handshake alert."
That's an SSL handshake problem: the Java running Eclipse does not consider the certificate installed on Nexus to be valid. This is almost certainly because either:
The certificate is self-signed.
The certificate has been signed by a private certificate signing authority which is not in the truststore of the Java running Eclipse.
Either way, the workaround is to install the certificate used by Nexus into the truststore of the Java running Eclipse.
See here for more information:
https://support.sonatype.com/hc/en-us/articles/213464948-How-to-trust-the-SSL-certificate-issued-by-the-HTTP-proxy-server-in-Nexus
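If you go that route, the import is typically done with keytool; a minimal sketch, where the alias and the exported certificate file name are made-up placeholders, and the cacerts path varies by Java version:

keytool -importcert -alias corporate-nexus \
    -file nexus-cert.cer \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit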
Ultimately, as I understand it, it was a mismatch between how the VirtualHost and ServerName were defined in the Apache httpd configuration.
https://mycorporateserver.corporation.com/nexus/ was the ServerName, but the VirtualHost was defined with the IP and port, https://mycorporateserver.corporation.com:port.
Original
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com/nexus/
...ldap and proxy pass configs
</VirtualHost>
Since we have more than one virtual host with this IP and port combination, the server looked further into the configuration to find the proper site by reading the ServerName. Since no ServerName matched what the clients sent, the handshake error occurred.
https://httpd.apache.org/docs/current/vhosts/name-based.html
Changing ServerName in the httpd conf to include the port solved the handshake error.
Final
<VirtualHost ip:port>
ServerName mycorporateserver.corporation.com:port/nexus/
...ldap and proxy pass configs
</VirtualHost>
(I'm by no means an Apache httpd expert; I still want to find out if there is a way to do all this without showing the port in the URL.)
Then, when sending a request from Eclipse/M2E to the server, the response was "Unauthorized"
Adding the nexus server plus username and password to settings.xml solved the authorization problem and all worked great!
<servers>
  <server>
    <id>nexus</id>
    <username>username</username>
    <password>password</password>
  </server>
</servers>
To ensure passwords were not stored in plain text, instructions at this Maven site were used to create encrypted passwords: https://maven.apache.org/guides/mini/guide-encryption.html
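For reference, that guide boils down to two commands; the bracketed values are placeholders:

mvn --encrypt-master-password <master-password>
(put the output in ~/.m2/settings-security.xml inside <settingsSecurity><master>...</master></settingsSecurity>)
mvn --encrypt-password <server-password>
(use the output as the <password> value for the server entry in settings.xml)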
In hindsight, the question probably could have been asked better/differently but I didn't yet know what I learned today.
I have nginx receiving traffic for multiple domains on port 80, each with an upstream to a different application server on an application-specific port,
e.g.
abc.com:80 --> :3345
xyz.com:80 --> :3346
Is it possible to
1. add/delete domains (abc/xyz) without downtime
2. change the application-level port mapping (3345, 3346) without downtime
If nginx can't do it, is there another service that can, without restarting and incurring downtime?
Thanks in advance
In short: Yes.
Typically, you'd overwrite the existing config file(s) in place while nginx is running, test the result with nginx -t, and once everything is fine, reload nginx with nginx -s reload. This causes nginx to spawn new worker processes that use your new config while the old worker processes are shut down gracefully. Graceful means they close their listen sockets but keep serving currently active connections; every new request/connection will use the new config.
Note that if nginx is not able to parse the new config file(s), the old config will stay in place.
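As a concrete sketch, swapping the config for one of the domains above could look like this (the file name and path are assumptions):

# write the updated server block for abc.com, e.g. pointing it at a new upstream port
cp abc.com.conf /etc/nginx/conf.d/abc.com.conf

# validate first; the reload only happens if the test passes
nginx -t && nginx -s reload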
Is there any way I can configure nginx other than through the normal nginx.conf file?
Like an XML configuration, or memcache, or any other way?
My objective is to add/remove upstreams in the configuration dynamically. Nginx doesn't seem to have a direct solution for this, so I was planning to play with the configuration file, but I am finding it very difficult and error-prone to modify the file through scripts/programs.
Any suggestions ?
No, you can't. The only way to "dynamically" reconfigure nginx is to process the config files with external software and then reload the server. Nor can you "program" the config as you can in Apache. The nginx config is mostly a static thing, which is part of why nginx is praised for its performance.
Source: I needed it too, done some research.
Edit: I have a "supervising" tool installed on my hosts that monitors load, clusters, and such. I ended up implementing the upstream scaling through it. Whenever a new upstream is ready, it notifies my "supervisor" on all web servers. The "supervisors" then query the new upstream for the "virtual hosts" it serves and add all of them to their context on the nginx host, then just nginx -t && nginx -s reload everything. This is for nginx talking FastCGI to php-fpm backends.
Edit 2: I have many server blocks for different server_names (sites), each with an upstream associated with it on one or more other hosts. In the server block I have an include /path/to/where/my/upstream/configs/are/us-<unique_site_id>.conf line. The us-<unique_site_id>.conf file is generated when the server block is created and populated with the existing upstream(s) information. When there are changes in the upstream pool or the site configuration, the file is rewritten to reflect them.
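A rough sketch of that layout (the site id, paths, and backend addresses are invented for illustration; in the sketch the generated file is pulled in at the http level, since upstream blocks are only valid there):

# /etc/nginx/upstreams/us-12345.conf -- rewritten whenever the pool changes
upstream site_12345 {
    server 10.0.0.5:9000;
    server 10.0.0.6:9000;
}

# in nginx.conf, at the http level:
#   include /etc/nginx/upstreams/us-*.conf;

# the per-site server block refers to the named upstream
server {
    listen 80;
    server_name site-12345.example.com;
    location / {
        include fastcgi_params;
        fastcgi_pass site_12345;
    }
}

After each rewrite it's the same nginx -t && nginx -s reload as above.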
I have an ASP.NET site on my local machine.
IIS configuration:
binding: https binding with self-signed certificate,
ssl settings: Require SSL and Require client certificates
I have installed the following certificates on my machine:
CA certificate (call it 'CA Center') in Trusted Root Certification Authorities store.
Client certificate issued by 'CA Center' in Personal store
I go to the site and accept the server certificate, but then I get this error:
HTTP Error 403.7 - Forbidden. The page you are attempting to access requires your browser to have a Secure Sockets Layer (SSL) client certificate that the Web server recognizes.
That means the browser (IE) doesn't send an applicable client certificate to the server.
What's wrong? Should I configure something else?
I had exactly this problem, and it took me an age to figure out the cause. It turned out to be because my computer was part of a domain, and some group policy for that domain was restricting the trusted root certificates that IIS would be willing to accept. I don't know exactly what the setting was or how to alter it, but I found I could work around it by installing my certificate into the enterprise physical store using the certutil command:
certutil -addstore -v -enterprise root CertificateAuthority.cer
It sounds like the browser never prompted you to select a client certificate to send, which means something is wrong in the SSL handshake. Try testing this with OpenSSL.
Additionally, a very common problem is having too many certificates in the Trusted Root CA folder. When the server sends its list of acceptable CAs, there is a limit to how large the list can be; if it exceeds the limit, the remaining CA certificates are truncated. Make sure the Trusted Root CA folder doesn't have too many certificates. One way to check this is to temporarily modify SCHANNEL in the registry so the CA list is not sent, and then re-try:
Start > Run > 'regedit' > HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL >
right-click > New > DWORD > 'SendTrustedIssuerList' > Value:0
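The same change can be scripted from an elevated command prompt; this just creates the DWORD described above:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" /v SendTrustedIssuerList /t REG_DWORD /d 0 /f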
Re-install the certificates and check their effective dates. From Microsoft Support:
Download the root server certificate in a browser on the server computer. Run the Iisca.exe command-line utility that is located in the Inetsrv directory.
Check the effective date on the client certificate and make sure that the date and time have arrived.
Check the expiration date and make sure that the certificate has not expired. Contact your certificate authority to see if your certificate has expired.