stunnel - encrypting traffic between two Ubuntu machines

I have a problem getting stunnel to work on Ubuntu 18.04. There are tons of websites that explain how to configure it, but none of them have worked for me, so I guess I am doing something wrong.
Here are the steps I took:
OS: Ubuntu 18.04 (virtual machine, clean install)
sudo apt update
sudo apt upgrade
sudo apt-get install stunnel4
Then enable auto startup by:
sudo nano /etc/default/stunnel4
Switch ENABLED=0 to ENABLED=1
The next step is to create a certificate file:
sudo openssl req -new -out config.pem -keyout config.pem -nodes -x509 -days 365
The certificate file is located in /etc/stunnel/.
Then I created a configuration file; here is a copy of the one I made:
All set, restarting the service is the last step.
sudo /etc/init.d/stunnel4 restart
and here I got the following error:
[....] Restarting stunnel4 (via systemctl): stunnel4.serviceJob for stunnel4.service failed because the control process exited with error code.
See "systemctl status stunnel4.service" and "journalctl -xe" for details.
failed!
(I am looking to encrypt the traffic between two Ubuntu machines)
Thank you in advance.

1. Install stunnel on both machines, i.e. the server and the client:
sudo apt-get install stunnel4
Once apt-get has finished, we will need to enable stunnel by editing the /etc/default/stunnel4 configuration file on both the client and the server.
Find:
# Change to one to enable stunnel automatic startup
ENABLED=0
Replace with:
# Change to one to enable stunnel automatic startup
ENABLED=1
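The edit above can also be scripted with sed. A sketch run against a scratch copy of the defaults file, so it is safe to try anywhere (on the real machine you would target /etc/default/stunnel4 with sudo instead):

```shell
# Flip ENABLED=0 to ENABLED=1, demonstrated on a scratch copy of the file.
# Real system equivalent: sudo sed -i 's/^ENABLED=0/ENABLED=1/' /etc/default/stunnel4
DEMO=$(mktemp)
printf '# Change to one to enable stunnel automatic startup\nENABLED=0\n' > "$DEMO"
sed -i 's/^ENABLED=0/ENABLED=1/' "$DEMO"
grep '^ENABLED=' "$DEMO"    # prints: ENABLED=1
rm -f "$DEMO"
```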
2. Install tinyproxy on the server. (This is just an example proxy server; in my case I used a custom one.)
sudo apt-get install tinyproxy
Configuring tinyproxy
By default TinyProxy starts up listening on all interfaces for a connection to port 8888. Since we don’t want to leave our proxy open to the world, let’s change this by configuring TinyProxy to listen to the localhost interface only. We can do this by modifying the Listen parameter within the /etc/tinyproxy.conf file.
Find:
#Listen 192.168.0.1
Replace With:
Listen 127.0.0.1
Once complete, we will need to restart the TinyProxy service in order for our change to take effect. We can do this using the systemctl command.
server: $ sudo systemctl restart tinyproxy
After systemctl completes, we can validate that our change is in place by checking whether port 8888 is bound correctly using the netstat command.
server: $ netstat -na
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:8888          0.0.0.0:*               LISTEN
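On a busy server the full netstat listing is long, so it helps to filter for the port. A sketch against a captured sample line (on the live server you would pipe `sudo netstat -na` or `ss -lnt` into grep instead):

```shell
# Filter a captured netstat line for the tinyproxy port, as "netstat -na | grep 8888"
# would on the live server; LISTEN on 127.0.0.1 confirms the localhost-only binding.
SAMPLE='tcp        0      0 127.0.0.1:8888          0.0.0.0:*               LISTEN'
echo "$SAMPLE" | grep '127\.0\.0\.1:8888.*LISTEN' && echo "tinyproxy bound to localhost only"
```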
Create a certificate using openssl on the server
The quick way:
(a). openssl genrsa -out key.pem 2048
(b). openssl req -new -x509 -key key.pem -out cert.pem -days 1095
(c). cat key.pem cert.pem | sudo tee /etc/stunnel/stunnel.pem
(Step (c) pipes through tee because a plain ">>" redirect does not run under sudo; you can also do it manually.)
Also remember to transfer the certificate to the client machine, so that both the client and the server have /etc/stunnel/stunnel.pem.
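Steps (a)-(c) can be exercised end to end on scratch files first. A sketch: the /CN value is a placeholder, -subj just suppresses the interactive prompts, and on the real server the combined file would be written to /etc/stunnel/stunnel.pem with sudo:

```shell
# Generate a key and a self-signed cert, combine them, then sanity-check the result.
openssl genrsa -out key.pem 2048 2>/dev/null
openssl req -new -x509 -key key.pem -out cert.pem -days 1095 -subj "/CN=stunnel-demo"
cat key.pem cert.pem > stunnel.pem    # on the server: | sudo tee /etc/stunnel/stunnel.pem
openssl x509 -in stunnel.pem -noout -subject -enddate   # shows the CN and the expiry date
rm -f key.pem cert.pem stunnel.pem
```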
Stunnel server settings
cert = stunnel.pem
[tinyproxy]
accept = 0.0.0.0:3112
connect = 127.0.0.1:8888
Stunnel Client settings
cert = stunnel.pem
client = yes
[tinyproxy]
accept = 127.0.0.1:3112
connect = 10.0.2.15:3112
Assuming you're using VirtualBox with your Ubuntu server installed in it, you have to do the following settings:
In Settings >> Network, change the adapter to NAT.
Then in Settings >> Network >> Advanced >> Port Forwarding, add a port-forwarding rule:
Name      Protocol   Host IP    Host Port   Guest IP   Guest Port
stunnel   TCP        0.0.0.0    3112                   3112
Once you're done, restart the services.
In client
sudo systemctl restart stunnel4.service
In server
sudo systemctl restart stunnel4.service
sudo systemctl restart tinyproxy
To test if it worked
In terminal:
export http_proxy="http://localhost:3112"
export https_proxy="https://localhost:3112"
then:
curl --proxy-insecure -v https://www.google.com
Credit:
https://bencane.com/2017/04/15/using-stunnel-and-tinyproxy-to-hide-http-traffic/
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-ssl-tunnel-using-stunnel-on-ubuntu

Related

Netplan Error: Cannot bind to port 2983, Address is already in use

I want to configure static IP address in Ubuntu.
Here you can see my configuration file for static IP addressing:
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    wlp1s0:
      dhcp4: no
      addresses: [192.168.0.103/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [127.0.0.53]
While testing the configuration through $ sudo netplan try, I get the following response:
bind: Address already in use
netplan: fatal error: cannot bind to port 2983, is another daemon running?, exiting.
Netstat shows the port in use by Netplan.
netstat -pnlt | grep ':2983'
tcp 0 0 0.0.0.0:2983 0.0.0.0:* LISTEN 1524/netplan
So can someone please give me a way to solve this issue??
I got the same problem today on one of my servers. The reason is that two Ubuntu packages exist with the same binary name: netplan and netplan.io. The first is a "Calendar Service"; the second is the networking tool. My server had the netplan package installed. I simply removed it, and now the networking netplan works fine. Maybe it will help you too.
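Before purging anything, you can confirm which of the two packages is actually installed; a sketch for dpkg-based systems (the output naturally depends on your machine):

```shell
# List installed packages whose name starts with "netplan"; this is where the
# "netplan" (calendar service) vs "netplan.io" (networking) clash shows up.
dpkg -l 'netplan*' 2>/dev/null | awk '/^ii/ {print $2}'
```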
Same here:
I installed a Debian server and ran
apt-get update
apt-get upgrade
The reboot broke networking, so I had to go on site and fix it by hand. The fix:
apt-get remove --purge 'netplan*' -y
apt-get install netplan.io -y

Configure Docker to use a proxy server

I have installed Docker on Windows. When I try to run hello-world to test Docker, I get the following error:
Unable to find image
My computer uses a proxy server for communication, and I need to configure that proxy in Docker. I know the proxy server's address and port; where do I put this setting? I tried https://docs.docker.com/network/proxy/#set-the-environment-variables-manually, but it is not working.
Try setting the proxy. Right-click the Docker icon in the system tray, go to Settings > Proxies, and add the setting below:
"HTTPS_PROXY=http://<username>:<password>@<host>:<port>"
If you are looking to set a proxy on Linux, see here
The answer of Alexandre Mélard at question Cannot download Docker images behind a proxy works, here is the simplified version:
Find the systemd unit or init.d script path of the docker service by running service docker status or systemctl status docker; for example, on Ubuntu 16.04 it is /lib/systemd/system/docker.service.
Edit the script for example sudo vim /lib/systemd/system/docker.service by adding the following in the [Service] section:
Environment="HTTP_PROXY=http://<proxy_host>:<port>"
Environment="HTTPS_PROXY=http://<proxy_host>:<port>"
Environment="NO_PROXY=<no_proxy_host_or_ip>,<e.g.:172.10.10.10>"
Reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker or sudo service docker restart
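One caveat worth noting: edits to /lib/systemd/system/docker.service can be overwritten by package upgrades, so the same variables can instead go in a systemd drop-in file. A sketch with placeholder proxy values (create the directory if it does not exist, then run the same daemon-reload and restart):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```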
Verify: docker info | grep -i proxy should show something like:
HTTP Proxy: http://10.10.10.10:3128
HTTPS Proxy: http://10.10.10.10:3128
This adds the proxy for docker pull, which is what the question asks about. If a proxy is needed for docker run or docker build, either configure ~/.docker/config.json as the official docs explain, or change the Dockerfile so the proxy is set inside the container.
I had the same problem on a Windows server and solved it by setting the HTTP_PROXY environment variable in PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
And then restarting docker:
Restart-Service docker
More information at Microsoft official proxy-configuration guide.
Note: The error returned when pulling image, with version 19.03.5, was connection refused.

Site is down after VM restart Google Cloud Engine

After a restart of my VM instances my site is down. I checked the IP address, but it didn't change. Do you have any ideas about what is wrong and how to fix it?
I run WordPress (Bitnami) on a Debian-based OS. I use Cloudflare CDN. I understand that on stopping a VM it doesn't keep the settings. Can I restore them?
About your last error message:
Syntax OK
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
/opt/bitnami/apache2/scripts/ctl.sh : httpd could not be started
Monitored apache
I have seen similar errors caused by Nginx; this can happen if you have it installed and it conflicts with Apache over port 80. Make sure you remove it as follows:
sudo apt-get remove nginx nginx-common
sudo apt-get autoremove #to remove unneeded dependencies
Your environment doesn't lose its configuration "settings"; rather, the servers that use those configurations terminate their processes when the VM shuts down, and they need to be restarted.
The problem is likely that you need to restart both your Apache web server (which starts the PHP runtime and proxies HTTP requests) and your MySQL server (which is your database).
Restart Apache:
sudo service apache2 restart
Restart MySQL:
sudo service mysql restart
OR
sudo /etc/init.d/mysql restart
EDIT: It appears your Bitnami image has a different configuration...
Start All services: sudo /opt/bitnami/ctlscript.sh start
Restart Apache: sudo /opt/bitnami/ctlscript.sh restart apache
Restart MySQL: sudo /opt/bitnami/ctlscript.sh restart mysql

Login Failed PhpPgAdmin in nginx centos [duplicate]

This question already has answers here:
Can't login in to phpPgAdmin
(2 answers)
Closed 4 years ago.
I am facing a strange problem with phpPgAdmin login on CentOS. I did everything that is required
in /usr/share/phpPgAdmin/conf/config.inc.php:
$conf['extra_login_security'] = false;
I tried with two combination of configurations
in /var/lib/pgsql/9.3/data/pg_hba.conf:
# "local" is for Unix domain socket connections only
local   all   all                    md5
# IPv4 local connections:
host    all   all   127.0.0.1/32     trust
# IPv6 local connections:
host    all   all   ::1/128          trust
host    all   all   myserver_ip/32   trust
========================= 2nd =========================
# "local" is for Unix domain socket connections only
local   all   all                    md5
# IPv4 local connections:
host    all   all   127.0.0.1/32     md5
# IPv6 local connections:
host    all   all   ::1/128          md5
host    all   all   myserver_ip/32   md5
but I still get "Login failed".
1. Configure the firewall
sudo firewall-cmd --zone=public --permanent --add-service=http
sudo firewall-cmd --zone=public --permanent --add-port=5432/tcp
sudo firewall-cmd --reload
2. Configure SELinux as below:
sudo setsebool -P httpd_can_network_connect on
sudo setsebool -P httpd_can_network_connect_db on
3. Configure PostgreSQL:
sudo vi /var/lib/pgsql/data/pg_hba.conf
# IPv4 local connections:
host    all   all   127.0.0.1/32   md5
# IPv6 local connections:
host    all   all   ::1/128        md5
4. Setup PostgreSQL listening addresses:
sudo vi /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'
port = 5432
5. Configure phpPgAdmin:
sudo vi /etc/httpd/conf.d/phpPgAdmin.conf
Require all granted
Allow from all
Either of those two pg_hba.conf setups should work ok, although the first is more likely to work (whilst being completely unsecured). The one thing you seem to have left out from the problem description is if you modify the server config lines in /usr/share/phpPgAdmin/conf/config.inc.php. Those must be updated to point to your database server (and will be different depending on if it is a local or remote database).
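For reference, the server entries mentioned above look roughly like this (a sketch with illustrative values; desc, host, and port are the phpPgAdmin server-entry keys, but check your own config.inc.php for the exact block):

```php
// /usr/share/phpPgAdmin/conf/config.inc.php -- point phpPgAdmin at your database.
// An empty 'host' means a local Unix-socket connection; use an address for TCP.
$conf['servers'][0]['desc'] = 'PostgreSQL';  // display name on the login screen
$conf['servers'][0]['host'] = '127.0.0.1';   // illustrative; must match a pg_hba.conf host line
$conf['servers'][0]['port'] = 5432;
```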
If that doesn't do it, I'd suggest going to either the PostgreSQL slack team, or #postgresql on Freenode, for live troubleshooting help.

RStudio Server not starting

I have been working on a remote RStudio server for the last few months without any issues. Today, I restarted the RStudio server using this command:
sudo rstudio-server restart
After this, I am not able to access the server via browser. It keeps on waiting.
I checked the status using this:
sudo rstudio-server status
This resulted in:
rstudio-server stop/waiting
My server is configured to run on port 80
I have found the solution myself.
The RStudio server was configured to run on port 80.
Kill all the processes using port 80 and then start the RStudio server:
sudo fuser -k 80/tcp
sudo rstudio-server start
This solved the problem!
