Any way to force ufw to show the status both verbose and numbered? - ufw

I can get a non-verbose ufw output with numbers (convenient to edit / delete rules based on their number):
pi@raspberrypi:~ $ sudo ufw status numbered verbose
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere
[ 2] 22/tcp (v6)                ALLOW IN    Anywhere (v6)
but I cannot get a verbose (to also display the defaults) output numbered:
but I cannot get a verbose output (which also displays the defaults) with numbers:
pi@raspberrypi:~ $ sudo ufw status verbose numbered
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
Any idea how to get the ufw verbose output with numbers on the rules, so that I can edit / delete them etc. while being aware of the defaults at the same time?

AFAIK there is no built-in way to display both. But you can pipe the verbose output through awk to number the rule lines yourself, like:
me:~$ sudo ufw status verbose | awk '{ if ($1 !~ /^[0-9]+/) { print $0 } else { print ++i "\t" $0 } }'
Result:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
1 80,443/tcp (Nginx Full) ALLOW IN Anywhere
2 ****/tcp ALLOW IN Anywhere
3 80,443/tcp (Nginx Full (v6)) ALLOW IN Anywhere (v6)
4 ****/tcp (v6) ALLOW IN Anywhere (v6)
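The same filter can be wrapped in a small reusable function (the function name is my own). It numbers any line whose first field starts with a digit and passes the header lines through untouched:

```shell
# number_rules: prefix rule lines (first field starts with a digit) with a
# counter; header lines (Status:, Default:, column headers) pass through as-is.
number_rules() {
    awk '$1 ~ /^[0-9]/ { printf "[%2d] %s\n", ++n, $0; next } { print }'
}

# Usage: sudo ufw status verbose | number_rules
```

Note that rule lines which do not begin with a port number (e.g. a plain `Anywhere DENY x.x.x.x` rule) would need an extra pattern.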

Related

Serving rtmp on port 1935

I've been trying to get ffmpeg to stream over RTMP, but the connection to port 1935 is always refused. I really don't know what else I can do to allow this connection.
Here are the specs I'm running:
Ubuntu 18.04 (also tried 19.04, same issue), which is why I think I've made a mistake somewhere
No Nginx installation at the moment
FFMPEG "ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)"
This is the script I run:
ffmpeg -i "test.mp4" -c:v copy -c:a copy -f flv "rtmp://127.0.0.1/stream/test"
Error I get is:
[tcp @ 0x55ff05ab8ce0] Connection to tcp://127.0.0.1:1935 failed: Connection refused
I've done some research and been through many posts about ffserver.conf, and I have made those changes, but still no luck. Here is my config file. I have also run ffserver once using this config.
HTTPPort 8090
HTTPBindAddress 127.0.0.1
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 200K
# Only allow connections from localhost to the feed.
ACL allow 127.0.0.1
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Feed>
<Stream test1.mpg>
# coming from live feed 'feed1'
Feed feed1.ffm
Format mpeg
AudioBitRate 32
# Number of audio channels: 1 = mono, 2 = stereo
AudioChannels 2
AudioSampleRate 44100
# Bitrate for the video stream
VideoBitRate 64
# Ratecontrol buffer size
VideoBufferSize 40
# Number of frames per second
VideoFrameRate 3
</Stream>
<Stream test.asf>
Feed feed1.ffm
Format asf
VideoFrameRate 15
VideoSize 352x240
VideoBitRate 256
VideoBufferSize 40
VideoGopSize 30
AudioBitRate 64
StartSendOnKey
</Stream>
# Special streams
# Server status
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 127.0.0.1
ACL allow 192.168.0.0 192.168.255.255
#FaviconURL http://pond1.gladstonefamily.net:8080/favicon.ico
</Stream>
<Redirect index.html>
URL http://www.ffmpeg.org/
</Redirect>
Here is my ufw status:
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22                         ALLOW       Anywhere
1935/tcp                   ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)
1935/tcp (v6)              ALLOW       Anywhere (v6)
but still nothing, I've also opened ports in iptables but no luck. Here is how this is done:
iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 1935 -j ACCEPT
and
iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 1935 -j ACCEPT
and still nothing, every time I run ffmpeg I get connection refused. I have previously installed nginx just to test but no luck.
What am I doing wrong here? Isn't this port supposed to be open now?
Thanks
@JJ-the-Second, I have been using the nginx RTMP module on Ubuntu natively and it is working completely fine. But instead of sending the stream to 127.0.0.1, I send it to either localhost or 0.0.0.0
I figured it out. I was using the Nginx RTMP module - Nginx RTMP for some reason doesn't work well on Ubuntu, but it is fine with Alpine 3.8. As soon as I started an nginx RTMP Docker container and exposed 1935 and 80, everything started working fine. Lesson learnt: never install the nginx RTMP module on Ubuntu natively.
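A general diagnostic worth adding for the original error: "Connection refused" normally means nothing is listening on the port at all, while a firewall drop typically shows up as a timeout, so opening ports in ufw/iptables cannot fix it. A small sketch (the function name is my own) that scans `ss -tln` style output for a listener:

```shell
# check_listener PORT: reads `ss -tln` style output on stdin and reports
# whether any TCP socket is listening on PORT.
check_listener() {
    if grep -q "[:.]$1 "; then
        echo "listening on $1"
    else
        echo "nothing listening on $1"   # connection refused is then expected
    fi
}

# Usage: ss -tln | check_listener 1935
```

In this question's case that check would have shown no listener on 1935 until an RTMP server (nginx-rtmp) was actually running.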

ufw deny rule seems to be ignored

I have UFW set up as my firewall and a script that reads my log file to detect spammers (it's a mail server). The script inserts rules like these at the first position:
Anywhere                   DENY        x.x.x.x
The script is running fine and the rules are added, so you would say everything is working, but there are still log entries coming from IPs that should be blocked.
I have tried reloading UFW, but this does not solve this issue. These are basically my rules:
Anywhere DENY x.x.x.x
25/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
I assume the firewall stops processing once a rule matches?
Yes, once a rule is matched the others will not be evaluated.
You can check the order of your rules and that ufw is active with:
sudo ufw status numbered
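Since evaluation stops at the first match, a deny rule appended after the allow rules never fires for traffic those allows already match. A hedged sketch of the usual fix (203.0.113.45 is a documentation placeholder address, not from the question):

```shell
# Insert the deny at position 1 so it is evaluated before the allow rules:
sudo ufw insert 1 deny from 203.0.113.45
# Verify the resulting order:
sudo ufw status numbered
# If a rule ended up in the wrong place, delete it by its listed number, e.g.:
sudo ufw delete 3
```

These commands need root and an active ufw, so treat them as a command fragment rather than a runnable script.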

Vagrant ssh to Docker container

I run Drupal as a Docker container in the Vagrant box boot2docker (on Windows 8.1):
Vagrantfile (my Docker container)
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "host/Vagrantfile"
    docker.image = "drupal"
    docker.ports = ['80:80']
    docker.name = 'drupal-container'
  end
  config.vm.synced_folder ".", "/vagrant", type: "smb", disabled: true
end
host/Vagrantfile (host)
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.hostname = "docker-host"
  config.vm.box = "hashicorp/boot2docker"
  config.vm.network "forwarded_port", guest: 80, host: 8080
end
I simply call vagrant up in the directory of my Docker container to run it (and the host):
$ vagrant up
Bringing machine 'default' up with 'docker' provider...
==> default: Docker host is required. One will be created if necessary...
default: Vagrant will now create or start a local VM to act as the Docker
default: host. You'll see the output of the `vagrant up` for this VM below.
default:
default: Importing base box 'hashicorp/boot2docker'...
default: Matching MAC address for NAT networking...
default: Checking if box 'hashicorp/boot2docker' is up to date...
default: Setting the name of the VM: boot2docker_default_1463064065066_29287
default: Clearing any previously set network interfaces...
default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Forwarding ports...
default: 2375 (guest) => 2375 (host) (adapter 1)
default: 80 (guest) => 8080 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
default: Running 'pre-boot' VM customizations...
default: Booting VM...
default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: docker
default: SSH auth method: password
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Machine booted and ready!
GuestAdditions versions on your host (5.0.20) and guest (4.3.28 r100309) do not match.
The guest's platform ("tinycore") is currently not supported, will try generic Linux method...
Copy iso file C:\Program Files/Oracle/VirtualBox/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.0.20 - guest version is 4.3.28 r100309
mkdir: can't create directory '/tmp/selfgz98932600': No such file or directory
Cannot create target directory /tmp/selfgz98932600
You should try option --target OtherDirectory
An error occurred during installation of VirtualBox Guest Additions 5.0.20. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
default: Setting hostname...
==> default: Warning: When using a remote Docker host, forwarded ports will NOT be
==> default: immediately available on your machine. They will still be forwarded on
==> default: the remote machine, however, so if you have a way to access the remote
==> default: machine, then you should be able to access those ports there. This is
==> default: not an error, it is only an informational message.
==> default: Creating the container...
default: Name: drupal-container
default: Image: drupal
default: Port: 80:80
default:
default: Container created: d12499a52f3d3f27
==> default: Starting container...
==> default: Provisioners will not be run since container doesn't support SSH.
Now I would like to connect to the container from the same directory:
$ vagrant ssh
==> default: SSH will be proxied through the Docker virtual machine since we're
==> default: not running Docker natively. This is just a notice, and not an error.
==> default: The machine you're attempting to SSH into is configured to use
==> default: password-based authentication. Vagrant can't script entering the
==> default: password for you. If you're prompted for a password, please enter
==> default: the same password you have configured in the Vagrantfile.
docker@127.0.0.1's password: tcuser
ssh: connect to host 172.17.0.1 port 22: Connection refused
Connection to 127.0.0.1 closed.
Why is the connection refused? Wrong password?
I also tried it using the ID of the environment. I determined the ID with
$ vagrant global-status
id name provider state directory
----------------------------------------------------------------------------------------------
e4da5ae default virtualbox running c:/my-project-path/host/boot2docker
98ef037 default docker running c:/my-project-path
The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date. To interact with any of the machines, you can go to
that directory and run Vagrant, or you can use the ID directly
with Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"
and tried to connect (same error):
$ vagrant ssh 98ef037
==> default: SSH will be proxied through the Docker virtual machine since we're
==> default: not running Docker natively. This is just a notice, and not an error.
==> default: The machine you're attempting to SSH into is configured to use
==> default: password-based authentication. Vagrant can't script entering the
==> default: password for you. If you're prompted for a password, please enter
==> default: the same password you have configured in the Vagrantfile.
docker@127.0.0.1's password: tcuser
ssh: connect to host 172.17.0.3 port 22: Connection refused
Connection to 127.0.0.1 closed.
If I add docker.has_ssh = true to the Vagrantfile (I'm confused about whether I need it, since I can call vagrant ssh without it):
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.provider "docker" do |docker|
    docker.vagrant_vagrantfile = "host/Vagrantfile"
    docker.image = "drupal"
    docker.ports = ['80:80']
    docker.name = 'drupal-container'
    docker.has_ssh = true
  end
  config.vm.synced_folder ".", "/vagrant", type: "smb", disabled: true
end
then I can't run/reload my container, because it waits indefinitely for the machine to boot:
$ vagrant reload
==> default: Stopping container...
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.
If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.
If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
So how can I connect to my Docker container using vagrant ssh?
Note: I can connect to the host first and then call docker on it
$ cd host
$ vagrant ssh
==> default: The machine you're attempting to SSH into is configured to use
==> default: password-based authentication. Vagrant can't script entering the
==> default: password for you. If you're prompted for a password, please enter
==> default: the same password you have configured in the Vagrantfile.
docker@127.0.0.1's password: tcuser
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.7.0, build master : 7960f90 - Thu Jun 18 18:31:45 UTC 2015
Docker version 1.7.0, build 0baf609
docker@docker-host:~$ sudo docker exec -t -i drupal-container /bin/bash
root@d12499a52f3d:/var/www/html#
but this is an impractical workaround. I simply want to call vagrant ssh in the directory of the container to connect to it (the common Vagrant workflow).
You are running a Docker container, not a VM, as your provider, so the usual Vagrant workflow breaks here: you cannot SSH into a Docker container unless it runs an SSH server.
When you're using the docker provider, you can run your command using vagrant docker-run; see the doc.
Even if you were running Docker directly, you would not be able to SSH in; there are hacks to work around this, but Vagrant, as an abstraction over the provider, cannot offer it as a simple ssh command, and it is not officially supported.
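Hedged sketches of both routes, using the container name and path from the question (note that vagrant docker-run starts a new container from the image rather than attaching to the running one):

```shell
# Run a one-off command via the docker provider:
vagrant docker-run -- ls /var/www/html

# Or script the two-step workaround from the question non-interactively,
# going through the boot2docker host VM:
(cd host && vagrant ssh -c 'sudo docker exec -t -i drupal-container /bin/bash')
```

Both commands assume they are run from the directory layout shown in the question and need a running Vagrant environment, so treat them as command fragments.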

How to change default http port in openstack dashboard?

I am new to OpenStack and I need to change the default HTTP port for the dashboard (Horizon), which is currently set to 80. I've installed/deployed OpenStack using the devstack script.
Which configuration files do I need to touch and change?
Obviously, changing only /etc/apache2/sites-available/horizon.conf won't do the trick...
Well, just poor me... it was only a matter of Apache VirtualHost configuration. I added another Listen directive to the ports.conf file.
Sorry for posting this stupid question.
If you only edit horizon.conf, the change won't survive an unstack && cleanup && stack cycle.
To make it persistent, edit /your/devstack/location/files/apache-horizon.template adding the appropriate Listen directive.
However, you still need to change the Apache listen port, as it listens on 80 anyway.
CentOS 7.4, OpenStack Pike & Queens instructions
Change the Puppet module's ports file config /etc/httpd/conf/ports.conf:
change line Listen 80 to Listen 8888
Change default host port /etc/httpd/conf.d/15-default.conf:
change line <VirtualHost *:80> to <VirtualHost *:8888>
Change Horizon host port /etc/httpd/conf.d/15-horizon_vhost.conf:
change line <VirtualHost *:80> to <VirtualHost *:8888>
Restart http server:
$ systemctl restart httpd.service
Modify iptables:
List the iptables rules with line numbers and note the one with Horizon (11 in my case):
$ iptables -L -n --line-numbers
[...]
11 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 80 /* 001 horizon 80 incoming */
[...]
Insert the new rule at 11
$ iptables -I INPUT 11 -p tcp -m multiport --dports 8888 -j ACCEPT -m comment --comment "001 horizon 8888 incoming"
$ service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
Remove the old rule (11+1=12, check it: $ iptables -L -n --line-numbers)
$ iptables -D INPUT 12
$ service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
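Assuming the same rule number as above, the insert-then-delete dance can also be done in one step with iptables -R, which replaces a rule in place:

```shell
# Replace rule 11 of the INPUT chain directly (verify the number first with
# `iptables -L -n --line-numbers`):
iptables -R INPUT 11 -p tcp -m multiport --dports 8888 -j ACCEPT \
    -m comment --comment "001 horizon 8888 incoming"
service iptables save
```

This needs root and an iptables-based setup, so treat it as a command fragment; the rule number 11 is just the value from this answer's example.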

Do I need to add a forwarded port for livereload when using a vagrant box?

I'm trying to use the livereload browser extension with a Vagrant box provisioned using PuPHPet.
I think port 35729 is blocked, as I can't telnet to that port from the host OS (OS X). The guest OS is Ubuntu 14.04.
Would adding an iptables rule suffice, or do I need to add a new forwarded port and re-provision the box?
iptables -L
target prot opt source destination
ACCEPT icmp -- anywhere anywhere /* 000 accept all icmp */
ACCEPT all -- anywhere anywhere /* 001 accept all to lo interface */
ACCEPT all -- anywhere anywhere /* 002 accept related established rules */ state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere multiport ports 1025,socks /* 100 tcp/1025, 1080 */
ACCEPT tcp -- anywhere anywhere multiport ports ssh /* 100 tcp/22 */
ACCEPT tcp -- anywhere anywhere multiport ports https /* 100 tcp/443 */
ACCEPT tcp -- anywhere anywhere multiport ports http /* 100 tcp/80 */
DROP all -- anywhere anywhere /* 999 drop all */
I tried adding the following:
sudo iptables -A OUTPUT -p tcp -m tcp --dport 35729 -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --sport 35729 -j ACCEPT
But this didn't resolve the problem. I also tried adding this to config.yml and running vagrant provision:
forwarded_port:
    l1J8zgpT2xBX:
        host: '35729'
        guest: '35729'
Adding the ports under the forwarded_port should be enough, as I've written code into PuPHPet to add those to the OS firewall:
if has_key($vm_values, 'vm')
    and has_key($vm_values['vm'], 'network')
    and has_key($vm_values['vm']['network'], 'forwarded_port')
{
    create_resources( iptables_port, $vm_values['vm']['network']['forwarded_port'] )
}

define iptables_port (
    $host,
    $guest,
) {
    if ! defined(Firewall["100 tcp/${guest}"]) {
        firewall { "100 tcp/${guest}":
            port   => $guest,
            proto  => tcp,
            action => 'accept',
        }
    }
}
However, you must run $ vagrant reload, not $ vagrant provision. Reload affects the box itself (memory, CPUs, shared ports, etc.); provision affects whatever provisioning script you've set up (in this case Puppet).
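So, with the forwarded_port entry from the question in place, the sequence would be roughly as follows (the nc check is just one quick way to confirm from the OS X host that the port now answers):

```shell
vagrant reload            # re-creates the VM so the new forwarded port takes effect
nc -z -w 2 localhost 35729 && echo "35729 open" || echo "35729 closed"
```

This needs the running Vagrant environment from the question, so treat it as a command fragment.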
