How can I use iptables on CentOS 7?

I installed CentOS 7 with a minimal configuration (OS + dev tools). I am trying to open port 80 for the httpd service, but something is wrong with my iptables service. What am I doing wrong?
# ifconfig/sbin/service iptables save
bash: ifconfig/sbin/service: No such file or directory
# /sbin/service iptables save
The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
# sudo service iptables status
Redirecting to /bin/systemctl status iptables.service
iptables.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
# /sbin/service iptables save
The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
# sudo service iptables start
Redirecting to /bin/systemctl start iptables.service
Failed to issue method call: Unit iptables.service failed to load: No such file or directory.

With RHEL 7 / CentOS 7, firewalld was introduced to manage iptables. IMHO, firewalld is better suited to workstations than to server environments.
It is possible to go back to a more classic iptables setup. First, stop and mask the firewalld service:
systemctl stop firewalld
systemctl mask firewalld
Then, install the iptables-services package:
yum install iptables-services
Enable the service at boot-time:
systemctl enable iptables
Managing the service:
systemctl [stop|start|restart] iptables
Saving your firewall rules can be done as follows:
service iptables save
or
/usr/libexec/iptables/iptables.init save
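Tying this back to the original question (opening port 80 for httpd), a minimal sketch, assuming the default filter table and that iptables-services is installed as described above:

```shell
# Insert a rule at the top of the INPUT chain accepting TCP traffic to port 80 (run as root)
iptables -I INPUT -p tcp --dport 80 -j ACCEPT

# Persist the running rule set to /etc/sysconfig/iptables
service iptables save
```

The saved rules are then loaded again at boot by the iptables service.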

RHEL and CentOS 7 use firewall-cmd instead of the iptables service. You should use that kind of command:
# add ssh port as permanent opened port
firewall-cmd --zone=public --add-port=22/tcp --permanent
Then, reload the rules to be sure that everything is OK:
firewall-cmd --reload
This is better than using iptables-save, especially if you plan to use LXC or Docker containers. Starting Docker services adds rules that iptables-save will include in its output. If you save that result, you will persist a lot of rules that should NOT be saved, because Docker containers can get different IP addresses at the next reboot.
firewall-cmd with the --permanent option is better for that.
Check "man firewall-cmd" or the official firewalld docs to see the options. There are a lot of options for inspecting zones and configuration and for understanding how it works; the man page is really complete.
I strongly recommend not using iptables-services on CentOS 7 and later.
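For the original question (opening port 80 for httpd), the firewalld equivalent is short; a sketch, assuming your interface is in the default public zone (run as root):

```shell
# Permanently allow the predefined http service (port 80/tcp) in the public zone
firewall-cmd --zone=public --add-service=http --permanent

# Load the permanent configuration into the running firewall
firewall-cmd --reload

# Verify that http is now listed
firewall-cmd --zone=public --list-services
```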

I had the problem that iptables wouldn't start after a reboot.
This fixed it:
yum install iptables-services
systemctl mask firewalld
systemctl enable iptables
systemctl enable ip6tables
systemctl stop firewalld
systemctl start iptables
systemctl start ip6tables

Try the iptables-save command; it prints the current rules to standard output.

I modified the /etc/sysconfig/ip6tables-config file changing:
IP6TABLES_SAVE_ON_STOP="no"
To:
IP6TABLES_SAVE_ON_STOP="yes"
And this:
IP6TABLES_SAVE_ON_RESTART="no"
To:
IP6TABLES_SAVE_ON_RESTART="yes"
This seemed to preserve the changes I made with the iptables commands across a reboot.

Put the iptables configuration in the traditional file and it will be loaded after boot:
/etc/sysconfig/iptables
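For illustration, a minimal /etc/sysconfig/iptables in iptables-save format that allows SSH and HTTP; the rule set is an assumption based on the original question, adapt it to your needs:

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -j DROP
COMMIT
```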

Last month I tried to configure iptables in an LXC VM container, but the iptables configuration was not loaded automatically after each reboot.
The only way I got it working was by running the following command:
yum -y install iptables-services; systemctl disable firewalld; systemctl mask firewalld; service iptables restart; service iptables save

And to add, you should also be able to do the same for ip6tables after running the systemctl mask firewalld command:
systemctl start ip6tables.service
systemctl enable ip6tables.service

If you do so, and you're using fail2ban, you will need to enable the proper filters/actions:
Put the following lines in /etc/fail2ban/jail.d/sshd.local
[ssh-iptables]
enabled = true
filter = sshd
action = iptables[name=SSH, port=ssh, protocol=tcp]
logpath = /var/log/secure
maxretry = 5
bantime = 86400
Enable and start fail2ban:
systemctl enable fail2ban
systemctl start fail2ban
Reference: http://blog.iopsl.com/fail2ban-on-centos-7-to-protect-ssh-part-ii/

Related

uwsgi nginx selinux on centos 7 audit2allow done, enforcing selinux still blocking access to the socket

I'm trying to run a Flask app under uWSGI behind nginx with SELinux enabled, but no luck so far.
I've followed all the suggestions to pipe denied contexts from audit.log to audit2allow, generated the module, and then ran semodule -i nginx.pp as answered in https://stackoverflow.com/a/26336047/2172543, but still, with setenforce 1, nginx is blocked from writing to the socket.
I've also changed the permissions on all folders in /path/to/socket.sock, changed the umask of the socket to 666, and tried everything that was suggested as a solution to my problem, but I'm still getting a 502 with setenforce 1.
Switching to setenforce 0 "solves" the problem, but I want to leave SELinux enabled, and I have no more clues about how to investigate the issue further.
Any thoughts?
Format the audit log into a readable report:
yum install setroubleshoot -y
sealert -a /var/log/audit/audit.log > /var/log/audit/audit.format.log
Then allow this access for now by executing:
ausearch -c 'nginx' --raw | audit2allow -M my-nginx
semodule -i my-nginx.pp
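When the denial is nginx connecting to an upstream socket from the httpd_t domain, there is also a stock SELinux boolean that often covers it without a custom module; a sketch, assuming the audit log shows httpd_t denials (run as root):

```shell
# Allow httpd_t processes (nginx) to make network/socket connections; -P persists across reboots
setsebool -P httpd_can_network_connect 1

# Confirm the boolean is now on
getsebool httpd_can_network_connect
```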

How to setup Nginx as a load balancer using the StrongLoop Nginx Controller

I'm attempting to setup Nginx as a load balancer using the StrongLoop Nginx Controller. Nginx will be acting as a load balancer for a StrongLoop LoopBack application hosted by the standalone StrongLoop Process Manager. However, I've been unsuccessful at making the Nginx deployment following the official directions from StrongLoop. Here are the steps I've taken:
Step #1 -- My first step was to install Nginx and the StrongLoop Nginx Controller on an AWS EC2 instance. I launched an EC2 server (Ubuntu 14.04) to host the load balancer, and attached an Elastic IP to the server. Then I executed the following commands:
$ ssh -i ~/mykey.pem ubuntu@[nginx-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-nginx-controller
$ sudo sl-nginx-ctl-install -c 444
Then I opened up port 444 in the security group of the EC2 instance using a Custom TCP Rule.
Step #2 -- My second step was to setup two Loopback application servers. To accomplish this I launched two more EC2 servers (both Ubuntu 14.04) for the application servers, and attached an Elastic IP to each server. Then I ran the following series of commands, once on each application server:
$ ssh -i ~/mykey.pem ubuntu@[application-server-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-pm
$ sudo sl-pm-install
$ sudo /sbin/initctl start strong-pm
Step #3 -- My third step was to deploy the application to each of the application servers. For this I used StrongLoop Arc:
$ cd /path/to/loopback-getting-started-intermediate # my application
$ slc arc
Once in the StrongLoop Arc web console, I built a tar for the application, and deployed it to both application servers. Then in the Arc Process Manager, I connected to both application servers. Once connected, I clicked "load balancer," and entered the Nginx host and port into the form and pressed save. This caused a message to pop up saying "load balancer config saved."
Something strange happened at this point: The fields in StrongLoop Arc where I just typed the settings for the load balancer (host and port) reverted back to the original values the fields had before I started typing. (The original port value was 555 and the original value in the host field was the address of my second application server.)
Don't know what to do next -- This is where I really don't know what to do next. (I tried opening my web browser and navigating to the IP address of the Nginx load balancer, using several different port values: 80, 8080, and 3001, having opened up each in the security group, in an attempt to find the place I need to navigate to in order to see "load balancing" in action. However, I saw nothing at any of them, except for port 80, which served up the "Welcome to nginx" page, not what I'm looking for.)
How do I set up Nginx as a load balancer using the StrongLoop Nginx Controller? What's the next step in the process, assuming all of the steps listed are correct?
What I usually do is this:
sudo sl-nginx-ctl-install -c http://0.0.0.0:444
Maybe this can solve your problem.

gitlab docker ssh issue

I looked at the various posts concerning GitLab, Docker, and SSH issues, without any help. So I'm asking my question here.
I have the following setting:
A Linux box running Ubuntu Server 14.04 with IP 192.168.1.104
DNS: git.mydomain.com = 192.168.1.104
A GitLab Docker container that I start, according to the official docs, this way:
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 --volumes-from gitlab_data gitlab_image
or
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 -e "GITLAB_SHELL_SSH_PORT=2222" --volumes-from gitlab_data gitlab_image
The Linux box runs an nginx instance which redirects (proxy_pass) git.mydomain.com to 192.168.1.104:8080.
I can access git.mydomain.com without any issue; everything works.
I generated an SSH key that I added to my profile on GitLab, and added the following lines to my ~/.ssh/config:
Host git.mydomain.com
User git
Port 2222
IdentityFile /home/user/.ssh/id_rsa
If I try
ssh -p 2222 git@git.mydomain.com
the connection is closed. I assume that's because only a git-shell is permitted.
But, if I try
mkdir test
cd test
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin git@git.domain.com:user/test.git
git push -u origin master
it gets stuck with
Connection closed by 192.168.1.104
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I also tried with
git remote add origin git@git.domain.com:2222/user/
and the result was the same.
Note that the logs of gitlab docker include
[2015-03-06T11:04:43+00:00] INFO: group[git] created
[2015-03-06T11:04:43+00:00] INFO: user[git] created
[2015-03-06T11:04:44+00:00] INFO: group[gitlab-www] created
[2015-03-06T11:04:44+00:00] INFO: user[gitlab-www] created
Any idea how I can fix this issue?
Thanks in advance for your help.
I would guess that you have an authentication problem.
Here are a few things you can try:
Make sure you added your public key in GitLab.
Check the permissions of your id_rsa file.
Try temporarily disabling host verification with
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
I have the same setup as you (docker container in VM, DNS points to VM). I also configured .ssh/config like you.
But when I log-in with ssh I get:
ssh -p 2222 git@gitlab
PTY allocation request failed on channel 0
Welcome to GitLab, tomzo!
Connection to gitlab closed.
Git remotes do not need port 2222 configured. This is OK (works for me):
$ git remote -v
origin git#gitlab:lab/workflow.git (fetch)
origin git#gitlab:lab/workflow.git (push)
And I can push and pull with git.
$ git push
Everything up-to-date
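If you prefer not to rely on ~/.ssh/config, the port can also be embedded in the remote URL using the ssh:// form; a sketch, where the host, port, and repository path are placeholders taken from the question:

```shell
# Demonstrate in a scratch repository (any machine with git installed)
cd "$(mktemp -d)"
git init -q

# ssh:// syntax allows an explicit port; the scp-style "host:path" syntax does not
git remote add origin ssh://git@git.mydomain.com:2222/user/test.git
git remote -v
```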

(ubuntu) nginx: [emerg] bind() to 0.0.0.0:80 failed (13: permission denied)

I need help figuring out the root cause of this permission-denied error. What permissions does nginx need? Why is it so complicated?
The socket API bind() needs root access to bind to a port below 1024, such as port 80 mentioned in your title.
See "Bind to ports less than 1024 without root access".
Another, easier way is to run nginx as root.
If you use a port above 1024 with root privileges and still get this problem, it may be caused by SELinux:
Check whether the port, say 8024, is in the semanage port list:
sudo semanage port -l | grep http_port_t
If 8024 isn't in the port list, add it:
sudo semanage port -a -t http_port_t -p tcp 8024
Update 2017-12-22:
Sometimes SELinux is disabled, and you need to set it to enforcing mode first. Check the status of SELinux with:
$ sestatus
For more steps, read this excellent article: https://www.digitalocean.com/community/tutorials/an-introduction-to-selinux-on-centos-7-part-1-basic-concepts
If you see this message after running "nginx -t", you don't have permission; run it as root: "sudo nginx -t".
nginx needs root access. Just use
sudo nginx
and enter your password when prompted.
The best solution would be:
1) Add the user to sudoers (my user is prod):
usermod -aG sudo prod
2) Inside circus (a process manager), prepend sudo to the nginx executable; mine looks like this:
[watcher:nginx]
cmd = sudo /usr/sbin/nginx
args = -c /home/t/Projects/x_b_11/etc/nginx.conf -p /home/t/Projects/x_b_11
3) Finally, add the line below to /etc/sudoers (my user is prod). This line avoids the error "sudo: no tty present and no askpass program specified". You probably need to restart the session (reboot). Enjoy.
prod ALL = NOPASSWD: /usr/sbin/nginx
Ubuntu uses AppArmor and not SELinux. The responses pointing to SELinux may not be that relevant to the OP.
For others who Googled this: I also encountered this issue on an SELinux-enabled CentOS 7 machine. nginx would not bind port 80 and gave me 13: permission denied, despite my having already run
setcap 'CAP_NET_BIND_SERVICE=+ep' /usr/sbin/nginx
to allow the service to bind the port as a non-root user.
Temporarily setting SELinux to Permissive (sudo setenforce Permissive) allowed nginx to start. I then ran audit2allow -a which gave me
#============= httpd_t ==============
#!!!! This avc can be allowed using the boolean 'httpd_can_network_connect'
allow httpd_t ntop_port_t:tcp_socket name_connect;
Which meant the solution was to also run:
sudo setsebool -P httpd_can_network_connect on
After which you can set SELinux back to Enforcing (sudo setenforce Enforcing) and restart everything to verify.

How to gracefully reload a spawn-fcgi script for nginx

My stack is nginx running Python web.py FastCGI scripts via spawn-fcgi. I am using runit to keep the process alive as a daemon, and Unix sockets for the spawned fcgi processes.
Below is my runit script, called myserver, in /etc/sv/myserver, with the run file in /etc/sv/myserver/run:
exec spawn-fcgi -n -d /home/ubuntu/Servers/rtbTest/ -s /tmp/nginx9002.socket -u www-data -f /home/ubuntu/Servers/rtbTest/index.py >> /var/log/mylog.sys.log 2>&1
I need to push changes to the scripts to the production servers. I use paramiko to SSH into the box and update the index.py script.
My question is this: how do I gracefully reload index.py, using best practice, to pick up the new code?
Do I use:
sudo /etc/init.d/nginx reload
Do I restart the the runit script:
sudo sv start myserver
Or do I use both:
sudo /etc/init.d/nginx reload
sudo sv start myserver
Or none of the above?
Basically you have to restart the process that loaded your Python script. That is spawn-fcgi, not nginx itself. nginx only communicates with spawn-fcgi via the Unix socket and will happily reconnect if the connection is lost due to a restart of the spawn-fcgi process.
Therefore I'd suggest a simple sudo sv restart myserver. No need to restart or reload nginx itself.
