With some great help from another user on here, I've managed to create a script which writes the necessary network configuration to /etc/network/interfaces to allow public access to a DomU server.
I’ve placed this script in the /etc/rc.local file, and executed chmod u+x /etc/rc.local to enable it.
The server is a DomU Ubuntu server on a host (Dom0), and rc.local doesn't seem to be executing before the network is brought up at boot/creation time.
So the configuration changes are being made to the /etc/network/interfaces file, but are not active once the boot process completes. I have to reboot once more before the changes take effect.
I've tried adding /etc/init.d/networking restart to the end of the rc.local script (before exit 0), but with no joy.
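Roughly, the end of my /etc/rc.local currently looks like this (the actual lines that write the configuration are omitted here):

# tail of /etc/rc.local
echo "..." >> /etc/network/interfaces   # placeholder for the lines that build the configuration
/etc/init.d/networking restart
exit 0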
I also tried adding the script to the S35networking file, but again without success.
Any advice or suggestions on getting this script to execute before the network device is brought up would be greatly appreciated.
I successfully implemented this, which blocks all internet connections on my Linux machine UNLESS it connects via a specific VPN:
https://www.comparitech.com/blog/vpn-privacy/how-to-make-a-vpn-kill-switch-in-linux-with-ufw/
If I manually execute openvpn3 session-start --config ~/Desktop/config.ovpn, it successfully connects via the VPN.
I used to have this command in a script (with #!/bin/bash as its header) which ran at device bootup without any issues, UNTIL I configured ufw for the kill switch above (ufw now runs at device bootup).
I use openvpn3, so the instructions in the above tutorial, which are written for the openvpn command, didn't work at all.
I even tried using a sleep in my bash script to make it wait a while after bootup. That doesn't work. But if I issue the connection command manually at the command prompt, it works.
Please help! I need it to connect automatically. Much appreciated!
After spending a whole day on this, I figured out a solution. I found an article that guided me: https://www.howtogeek.com/687970/how-to-run-a-linux-program-at-startup-with-systemd/
I set up a service unit using systemd (systemctl) just for that connect command. Here is what my entry looks like:
#/etc/systemd/system/connectvpn.service
[Unit]
Description=Connect VPN
After=ufw.service network.target
Requires=ufw.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/connect
#/usr/local/bin/connect
#!/bin/bash
openvpn3 session-start --config /home/xyz/Desktop/config.ovpn
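To have this start at boot, the unit also needs an [Install] section (WantedBy=multi-user.target is the usual choice), and the service has to be enabled once; assuming the paths above:

[Install]
WantedBy=multi-user.target

and then:

sudo chmod +x /usr/local/bin/connect
sudo systemctl daemon-reload
sudo systemctl enable connectvpn.service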
Working nicely now, connects to the VPN on bootup.
I need to run some services at bootup, which I have successfully accomplished using systemd services (lots of answers are already available).
Now, one of my services requires access to /dev/video0 during bootup, when a certain user is logged in (I am using auto-login, which is working fine).
So how do I check whether /dev/video0 is available before starting my systemd service during bootup?
I came across something called udev for doing this, and I followed this link, but I am not getting the desired output: after editing the /lib/udev/rules.d/99-systemd.rules file as mentioned in the link and starting my service manually, it does not start. Any help is appreciated.
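For reference, the general pattern those instructions seem to describe is a rule along these lines (the service name below is only a placeholder):

# e.g. /etc/udev/rules.d/99-camera.rules
SUBSYSTEM=="video4linux", KERNEL=="video0", TAG+="systemd", ENV{SYSTEMD_WANTS}="my-camera.service"

With the device tagged for systemd, the service can also declare Wants=dev-video0.device and After=dev-video0.device so it only starts once /dev/video0 exists.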
Finally, after struggling for a day, I found the answer:
I made a unit file in /etc/systemd/system which contains
[Unit]
Description='some description of my file write according to you'
[Service]
Type=forking
ExecStart='path to script'
[Install]
WantedBy=multi-user.target
and it executes a script which contains
#!/bin/bash
modprobe uvcvideo
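For it to run at boot, the unit also has to be enabled once; assuming it was saved as, say, /etc/systemd/system/load-uvcvideo.service (the file name here is only an example):

sudo systemctl daemon-reload
sudo systemctl enable load-uvcvideo.service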
Now, after rebooting, all the services are running properly.
The modprobe uvcvideo command loads the video driver at boot time so that /dev/video0 is available for my systemd process.
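To double-check after a reboot, the usual commands are:

lsmod | grep uvcvideo   # confirms the driver module is loaded
ls -l /dev/video0       # confirms the device node exists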
Thanks
How can I stop the http-server that I installed with the 'npm install http-server' command in the terminal (console) and then launched?
Simply Ctrl+C. If you read the output after you launch it, you should see:
Starting up http-server, serving xxx
Available on:
http://<some ip>:<some port>
Hit CTRL-C to stop the server
It's built on Node, so if it gets stuck, kill the node process to stop it. You can list all the node process IDs, see which one your server has, and kill that one.
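For example (the exact PID will differ on your machine):

pgrep -af node        # list running node processes with their command lines
kill <pid>            # kill the one running http-server

or, in one step, something like pkill -f http-server.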
I'm doing an internship focused on Docker and I have to load-balance an application which has a client, a server and a database. I use Nginx as a load balancer, and my goal is to dynamically scale the number of server containers according to their CPU usage. For instance, if the CPU usage is over 60% I want to add a new container on the fly, without restarting Nginx, to spread the CPU load.
I have to modify the nginx.conf file to add a new container, but then I have to restart the Nginx container to apply the changes, which is very slow.
So my question is: is there a (free) way to do it dynamically?
Tell me if you need further information, and forgive my poor English.
Thanks.
EDIT: I did as @Konstantin Azizov told me:
docker cp ./new.conf $(docker ps -f "name=dockerizedrubis_nginx" -q):/etc/nginx/nginx.conf
docker exec $(docker ps -f "name=dockerizedrubis_nginx" -q) bash -c 'kill -HUP $(cat /run/nginx.pid)'
docker exec $(docker ps -f "name=dockerizedrubis_nginx" -q) bash -c '/etc/init.d/nginx reload'
The configuration file is copied into the container running Nginx, I send the HUP signal to reconfigure the Nginx process, and then I reload to apply my changes. There are no errors and the on-the-fly reload works fine, but my new nodes are not taken into account by Nginx; the requests are still directed only to the first node created...
EDIT 2: I found the origin of the problem. It seems that in order to update the /etc/hosts of a container after a 'docker-compose scale', the container needs to be stopped, removed and restarted. In my case, I really don't want to stop the container running Nginx.
Question: does anyone have an idea of how to update the /etc/hosts of a container after a re-scale without having to restart the container (besides a dirty script)?
Thanks.
I used the nginx-proxy image from Jason Wilder for a while to load-balance between containers, and it works for more than one scalable service. It monitors the Docker daemon and, when an event happens, it rebuilds the nginx config file, adding the new container instances when you are scaling out or removing them when you are scaling in.
Since Docker 1.10 (not sure if this is the correct version) there is an internal DNS server embedded in the Docker daemon, so since then I have been using its round-robin feature. Now I use the official nginx image to proxy the requests to a domain that I define as an alias in the network options. I do not know if I was clear, due to my poor English, but I believe my GitHub example may help.
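As a rough sketch of that setup (the alias "app" and port 8000 below are placeholders, not taken from my real config), nginx can resolve the alias through Docker's embedded DNS on every request:

# fragment of nginx.conf, inside the http { } block (illustrative)
server {
    listen 80;
    resolver 127.0.0.11 valid=10s;          # Docker's embedded DNS server
    location / {
        set $backend "http://app:8000";     # "app" is the network alias of the scaled service
        proxy_pass $backend;                # using a variable makes nginx re-resolve the name per request
    }
}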
Unfortunately there is no easy (free) way to change the configuration without restarting. The only way to achieve zero-downtime scaling is a graceful restart: when you restart Nginx gracefully, it spawns a new instance with the new configuration, waits until it boots up, and then kills the old instance with the previous configuration.
See the official guide.
I am currently trying to set up a Docker container using ubuntu:14.04 as my base image, with nginx and gunicorn/django/celery running inside. I am using supervisor to start all of the processes, and have tested to make sure gunicorn is relaunched when it goes down. However, I can't figure out how to do the same for nginx.
My supervisord.conf for nginx is as follows:
[program:nginx]
command=nginx
autorestart=false
I have autorestart set to false because, from what I can tell, the nginx command simply starts the master and worker processes and then exits with status code 0. If I have autorestart set to true, it just keeps trying to restart that nginx command, which fails on subsequent retries because the master/worker processes are already running and bound to the port.
On the surface, this seems okay, because if I try to kill a worker process, the master will start another worker to take its place. But how do I ensure the master process stays running as well?
You need to append daemon off; to your nginx.conf, instructing nginx to run in the foreground.
Then modify your supervisor stanza to be:
[program:nginx]
command=nginx
autorestart=true
It will still spawn master/worker processes/subprocesses, and it can be used this way in production setups just fine. In this case it's supervisor that runs the process in the background, controls it, and supervises it.
See this FAQ entry
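A common variant of the same idea (not required, just another way to do it) is to pass the directive on the command line instead of editing nginx.conf:

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true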