NGINX Remote Editing of Configurations

I'm currently running a number of servers, each running NGINX as a reverse proxy to other websites. However, if I need to change a backend IP address or other variables within NGINX, I have to manually SSH into the server and change the configuration, OR log onto NGINX Proxy Manager.
What I'm looking to do is create a central website that will let me edit NGINX variables such as 'proxy_pass' and send the updated value to the selected remote server, updating the NGINX config and reloading the service.
Is there any existing way to do this, and how could I implement it? What comes to mind is some kind of cURL request to the remote server, but then I'm not sure how I'd automatically rewrite the correct portion of the NGINX config, etc.
Any help would be appreciated!

If you have root access on those servers, all you need is a service or a script that fills in the new values. The simplest way I can see is to do it with a bash script and a template for the config file.
Template config file: /home/user/nginx_config/nginx.config.sample:
-- your generic config settings
location /your/location {
    proxy_pass {{proxy_pass}};
}
-- rest of standard file
The bash script for filling the template: /home/user/nginx_config/generator.sh
#!/usr/bin/env bash
new_ip=$1
template_path="/home/user/nginx_config/nginx.config.sample"
config_path="/etc/nginx/nginx.conf"
if [[ -z $new_ip ]]; then
    echo "Missing IP param"; exit 1
fi
# back up the current config, then render the template with the new address
cp "$config_path" "${config_path}.bak"
sed "s/{{proxy_pass}}/$new_ip/g" "$template_path" > "$config_path"
echo "Done! Updated $config_path file to $new_ip:"
cat "$config_path"
Then, all you need to do is make a local script that connects over SSH and runs the generator script (with 1.2.3.4 as your new IP address):
sshpass -p password ssh -oStrictHostKeyChecking=no -oCheckHostIP=no user@your_server "bash /home/user/nginx_config/generator.sh 1.2.3.4"

Simple command-line http server for Single Page App

There are various one-liner HTTP server commands, e.g. the best-known is probably python -m http.server. I'm looking for a similar command which would run a server that ignores the file path and sends all paths to a specific file, e.g. if you visit /foo or /bar, it will serve both from index.html.
And ideally with as little installation hassle as possible for a typical Linux/macOS machine (e.g. python and http.server come out of the box for many users).
It's the same functionality offered by the htaccess rule RewriteRule (.*) /index.html, but without needing to set up Apache. I'm not sure whether any of those one-liner servers support something similar, like a command-line argument that declares the default file for all paths.
Using PHP, there is a built-in development server available from the command line, which is super useful.
First example, in the current folder, serving only the file index.html at 127.0.0.1, port 8080:
php -S 127.0.0.1:8080 index.html
Output
PHP 7.2.24-0ubuntu0.18.04.1 Development Server started at Mon Dec 23 15:37:03 2019
Listening on http://127.0.0.1:8080
Document root is /home/nvrm
Press Ctrl-C to quit.
In this case, only the file index.html will respond at http://127.0.0.1:8080
Any HTTP calls on this port will be redirected to index.html.
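A quick check of that behaviour (assuming the server above is still running and index.html exists in the current folder):
curl -s http://127.0.0.1:8080/foo   # prints the contents of index.html
curl -s http://127.0.0.1:8080/bar   # same response, per the redirect described above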
Second example binding the whole current folder to localhost, port 5555:
php -S localhost:5555
Output:
PHP 7.2.24-0ubuntu0.18.04.1 Development Server started at Mon Dec 23 09:59:44 2019
Listening on http://localhost:5555
Document root is /home/nvrm
Press Ctrl-C to quit.
This will serve index.html at the address http://localhost:5555
If a file index.php exists, it will be served first (interpreted as PHP).
All other files in the (sub)folder(s) are served as well; for example http://localhost:5555/css/style.css will respond, provided that folder and file exist (otherwise a 404 error is returned).
Third example: to serve a folder from anywhere, pass its path as an additional argument. Using a local IP is also possible; by doing so, the files are available to the whole local network.
Example local IP: 192.168.1.23.
To retrieve our local IP, we can use ifconfig.
php -S 192.168.1.23:8080 ~/www
This will serve the www folder in the home directory on port 8080, at http://192.168.1.23:8080, to everyone on the network.
Obviously, we can run many servers on different ports in parallel.
Very useful for development, but also for quickly sharing files between virtual machines, devices, phones, etc.
Alternatively, listen on all interfaces by using 0.0.0.0 as the IP address. In some cases, this is the only form that serves reliably to every device on the local network.
php -S 0.0.0.0:5555
And then use the local IP as the URL: http://192.168.1.23:5555
To be able to close the terminal but keep the server running, we can use nohup:
nohup php -S localhost:8080 &
Then to kill it, quickly:
fuser -k 8080/tcp
Last example, using a hostname.
To retrieve the machine hostname from the console, the unix command is hostname.
php -S $(hostname):9999
This will bind to something like http://<session_name>-<machine_name>:9999
It is possible to install only the CLI version of PHP to run this (~4 MB); the server is included in the core.
sudo apt install php-cli
For more advanced server usage that is still simple to configure, I warmly recommend the Caddy server.
https://github.com/svenstaro/miniserve
To serve only index.html you just run miniserve index.html. It's written in Rust, so you don't need any additional dependencies.
If Node.js is available, a small Express script can also do it:
#!/usr/bin/env node
const express = require('express');
const server = express();
server.all('/*', (_, res) => {
// You would probably not want to hard-code this,
// but make it a command line argument.
res.sendFile(__dirname + '/index.html');
});
const port = 8000;
server.listen(port, () => {
console.log('Server listening on port', port);
});
Make the file executable (chmod +x) and save it somewhere within your PATH.
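A possible way to run it (assuming Node.js and npm are installed; the filename spa-server.js is just an example):
npm install express    # the script's only dependency
./spa-server.js        # then /foo and /bar both return index.html on port 8000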

module.run not executing in state.highstate, but works with state.sls

I'm attempting to re-run a state from another state. I'm not using watch or watch_in etc. because I want it to run each time. I configure all my nginx virtual hosts, and then at the end another state called nginx-certs runs; the relevant portion is here:
nginx-frontend:
  module.run:
    - name: state.sls
    - mods:
      - nginx-frontend
During the highstate I see the state_id is executed, but it has no comments, nor does it show that it reruns that state; it just completes as Result: True. I can then jump to the salt master and run
sudo salt webserver state.sls nginx-certs
and when it hits nginx-frontend, it does reload all of the virtual hosts, putting the new cert in the config.
I'm curious why this does not run in the highstate.
I have attempted all sorts of different variations of the simple block outlined above. This one works, but not in the highstate, which is what I'm trying to fix.
If you wonder why I do it this way: all certificates for production and staging terminate at HAProxy, and nginx only serves up 80/http1 and 81/h2, but when building out dev servers I want to assign the cert directly to the server, as it will be public facing. I need to build out the virtual hosts first to get port 80 open, which is used for Let's Encrypt. Then, when the cert is available, I update the nginx vhosts' listen directive and cert paths.
From what I understand, you have one server which you want temporarily configured with Nginx on port 80, then to generate its certificate with Let's Encrypt, and then to change the Nginx configuration to listen on port 443.
What you can do is:
have one state which installs and configures Nginx to listen on port 80
have another state which installs/configures/runs letsencrypt
a third state which configures Nginx as you want it to be at the end [1]
then you just include them in Salt so they run in that specific order, like:
# custom_nginx.sls
include:
  - temp_nginx_on_port_80
  - letsencrypt_cert
  - nginx
[1] For this I think it's better to use a formula like the one from the community, https://github.com/saltstack-formulas/nginx-formula/, and configure it with pillar data. Obviously, if you use it for step 3, you won't be able to use it for step 1 (or at least I don't see right now how).
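Assuming the three SLS files above exist on the master, the combined state can then be applied to the dev minion with something like this (the target dev-web* is only an example):
sudo salt 'dev-web*' state.sls custom_nginx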

putting subversion online without http domain name

I have a local repository that resides on my computer_1. I have set up my svn server using the following command:
svnserve -d -r Path_to_Repository
computer_1 and computer_2 are connected to each other through a router and can communicate using the ssh username@IP command. Considering that computer_1 does not have a registered domain name (e.g. My_Domain.com), can I create a new working copy on my computer_2? I would like to use the following command on computer_2:
svn checkout http://computer_1_IP_address A_folder_on_computer_2 -m A_log_message
However, using protocols other than http is OK, as long as I only need computer_1_IP_address.
You are using svnserve, and in this case the URL should use the svn:// protocol, not http://.
You should read the documentation before beginning to configure the server!
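For example, assuming Path_to_Repository in the svnserve command above points at the repository itself (otherwise append the repository's directory name to the URL), the checkout from computer_2 could look like:
svn checkout svn://computer_1_IP_address/ A_folder_on_computer_2
Note that checkout does not take a -m log message; log messages are only needed when committing.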

Docker restart not showing the desired effect

I have a small nginx-based test application that I want to run inside a Docker container, so I followed the example given here: docker installation
So I have a folder named restartTest and it contains an index.html file with a single line that says Docker Test 1. I mount this as my volume at runtime for the Docker container. The command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application is running as desired. Now I change the content of the index.html file (from my localhost) to Docker Test 2 and then restart the container. I execute the following command to verify that the content has indeed changed inside the Docker container:
docker exec engine2 cat /usr/share/nginx/html/index.html
And as expected, the file reads Docker Test 2. However, when I use curl to check whether the webpage also reflects the change, I still get Docker Test 1 as the response. The index.html reflects the change, yet whether I run the curl command or access the app from the browser, I still get the old result. I have tried the following, but to no avail:
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.
So I found that this is a known bug with the VirtualBox VM that is used for running Docker on Mac.
The bug only shows up when content is shared between the host machine and the VirtualBox VM. Web servers like nginx and apache (and apparently vertx) use an optimisation: whenever a static file is requested, they use sendfile to deliver it. The bug is that, in the VirtualBox scenario described above, we always get the first version of the file no matter what we try. The workaround for nginx and apache is to turn sendfile off (see the snippet after the list below). For vertx, however, there is a hack that we use:
rename the file say login.html to login.html.moved (anything)
curl :/….../login.html (we won’t get anything)
rename the file back to its original name login.html.moved to login.html
Hard refresh the page (Command + Shift + R).
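For the nginx case, the workaround mentioned above is a one-line change in the config. A minimal sketch (the server block is only an illustration; the directive can also sit at the http level):
server {
    listen 80;
    root /usr/share/nginx/html;
    # work around the VirtualBox shared-folder bug: serve files without sendfile
    sendfile off;
}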
For further reading about this bug consult the following
Link1
Link2
Link3
Link4
I assume it is a caching problem. Did you try to set expires -1 in your index.html location configuration to disable server side caching for static files?
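In nginx configuration terms, that suggestion would look roughly like this (the location block is only an illustration of where the directive goes):
location = /index.html {
    # mark the response as already expired so clients don't cache it
    expires -1;
}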

How do I add new site/server_name in nginx?

I'm just starting to explore nginx on my Ubuntu 10.04. I installed nginx and I'm able to get the "Welcome to nginx" page on localhost. However, I'm not able to add a new server_name.
I made the changes in sites-available/default and also tried reloading/restarting nginx, but nothing works.
To build on mark's answer: on Debian/Ubuntu distros, the default configuration file has an include /etc/nginx/sites-enabled/*; directive, with site configuration files stored in /etc/nginx/sites-available/; a default site is usually included in that directory.
For examples beyond the default config, follow the nginx beginner's guide or see wiki.nginx.org for more details.
After creating a new configuration in sites-available, create a symbolic link with this command, assuming that your conf file is named "myapp" and nginx is at /etc/nginx (could also be at /usr/local/etc/nginx):
ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
By the way, you could always create your conf file directly in sites-enabled but the recommended way above allows you to "enable and disable" sites on the server very quickly without actually moving/deleting your conf files.
P.S: Don't trust the tutorials: check your configuration!
P.P.S: You can use the command nginx -t to test your site configuration and nginx -s reload to reload the conf.
The usual way to add another site in Nginx in Ubuntu is to copy the sites-available/default file to sites-available/new-site-name, then create a symbolic link in sites-enabled to sites-available/new-site-name.
In the new configuration file, you need to edit the listen and server_name directives: use listen to specify the IP address and port, and server_name to specify the hostnames. For more details, see HttpCoreModule.
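A minimal sketch of what the new file could contain (example.com and the paths are placeholders for your own values):
server {
    listen 80;                                 # IP address and/or port to listen on
    server_name example.com www.example.com;   # hostnames this server block answers to
    root /var/www/example.com;                 # where this site's files live

    location / {
        index index.html;
    }
}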
