FreeRADIUS extra open port - networking

I have a server with many available subnets, and I would like my FreeRADIUS to listen only on specific IP addresses. I use the FreeRADIUS configuration from the Arch package freeradius-3.0.19-3. The only changes are:
removed IPv6 listen sections
in the IPv4 listen section I set the listening address to ipaddr = "192.168.1.1"
My configuration also listens on 127.0.0.1:18120, but when I check the open ports I get:
ss -nlp|grep radiusd
udp UNCONN 0 0 0.0.0.0:40012 0.0.0.0:* users:(("radiusd",pid=22199,fd=9))
udp UNCONN 0 0 127.0.0.1:18120 0.0.0.0:* users:(("radiusd",pid=22199,fd=7))
udp UNCONN 0 0 192.168.1.1:1812 0.0.0.0:* users:(("radiusd",pid=22199,fd=8))
Port 40012 is dynamically allocated; after a restart of the FreeRADIUS service the number is different:
ss -nlp|grep radiusd
udp UNCONN 0 0 0.0.0.0:42447 0.0.0.0:* users:(("radiusd",pid=26490,fd=9))
udp UNCONN 0 0 127.0.0.1:18120 0.0.0.0:* users:(("radiusd",pid=26490,fd=7))
udp UNCONN 0 0 192.168.1.1:1812 0.0.0.0:* users:(("radiusd",pid=26490,fd=8))
How do I get rid of this port? What is its function?

This extra port is used for sending and receiving proxy packets. If you are not using proxying, you can disable it in radiusd.conf. Look for:
proxy_requests = yes
$INCLUDE proxy.conf
Change proxy_requests to "no" and comment out the $INCLUDE line.
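After that change, the relevant fragment of radiusd.conf would look something like this (a minimal sketch, assuming the stock configuration shipped with the Arch package):
proxy_requests = no
#$INCLUDE proxy.conf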
If you want to change the address and/or port that is used, look at the listen sections in e.g. raddb/sites-enabled/default. You can add a new section with type = proxy to specifically set the address and port that is used.
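A sketch of such a proxy listen section, assuming the same 192.168.1.1 address used above and an arbitrarily chosen fixed port (1814 is only an example):
listen {
        type = proxy
        ipaddr = 192.168.1.1
        port = 1814
}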

Related

drive_auth() function not creating gargle-oauth token on password submission

R v3.6.2
RStudio Desktop v1.2.5033
R package 'googledrive' v1.0.0
I have written an R script that uploads CSV files to a Google Sheets account. So that I do not have to authorise this manually every time, I have used the drive_auth() function to refresh the OAuth token. The code is simply:
drive_auth(
  email = "email@gmail.com",
  path = NULL,
  scopes = "https://www.googleapis.com/auth/drive",
  cache = gargle::gargle_oauth_cache(),
  use_oob = gargle::gargle_oob_default(),
  token = NULL
)
drive_upload(file, overwrite = TRUE, type = "spreadsheet")
On both a Mac and a Windows machine, this then opens the default browser, which asks for login details. When these are entered correctly, the script has permission to upload / edit files and the googledrive functions subsequently work. It creates an OAuth token at the file path:
Home/Users/.R/gargle/gargle-oauth
However, when attempting to do this on a new laptop that will be used as a server, I am met with the following error messages:
Error: can't get Google credentials.
Are you running googlesheets in a non-interactive session? Consider:
* sheets_deauth() to prevent the attempt to get credentials.
* call 'sheets_auth()' directly with all necessary specifics.
On inspection of the gargle-oauth folder, no OAuth token has been created, as it was automatically on the other machines when the Google login details were entered.
I re-ran the programme on the other Windows machine after deleting the OAuth token and it worked fine, creating the token again from scratch. I cannot pinpoint the reason why the token is not being created in this instance.
I've since solved this and I'm going to post an answer in case anyone has a similar problem and comes across this post during a Google search.
When initialising a connection with googledrive, the package uses the default port 1410. It was unable to establish a connection with Google because a zombie process was using this port.
To kill this process, open the Windows Command Prompt (or a terminal on a Mac) as admin and run the netstat command:
C:\Users>netstat -ano|findstr "PID :1410"
This will (if anything is running on this port) return:
Proto  Local Address   Foreign Address   State       PID
TCP    0.0.0.0:1410    0.0.0.0:0         LISTENING   18264
That number at the bottom right is the process PID; enter it into the following command to kill the process:
taskkill /pid 18264 /f
When running any R googledrive functions, you should now be able to authorise your code to interact with your Google account, and it will create an OAuth token to save you having to go through this again.
I can confirm that this problem also hit me on Ubuntu. I resolved it by finding and killing the process on port 1410 (which was also listening on 40197):
me@me:/internal$ netstat -tulpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 894/node
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1410 0.0.0.0:* LISTEN 21011/R
tcp 0 0 127.0.0.1:40197 0.0.0.0:* LISTEN 21011/R
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::25 :::* LISTEN -
tcp6 0 0 :::443 :::* LISTEN -
tcp6 0 0 :::1917 :::* LISTEN 1277/node /home/ult
tcp6 0 0 :::3838 :::* LISTEN -
tcp6 0 0 ::1:6379 :::* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
udp 0 0 127.0.0.53:53 0.0.0.0:* -
me@me:/internal$ kill -HUP 21011
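On systems where netstat is not installed, ss gives the same information; a minimal equivalent, assuming the default port 1410 mentioned above:
sudo ss -ltnp | grep ':1410'
kill <PID reported by ss>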

Cannot connect to WordPress Docker container on Google Cloud Platform

OK, so I have read the other questions about connecting to Docker containers, and mine does not seem to fit any of them. So here it goes. I have installed Docker and Docker Compose. I built the WordPress site on my home machine and am now trying to migrate it to GCP. I got a micro instance and installed everything on there, and as far as I can tell everything is up and running as it should be. But when I go to log into the site from the web browser I get:
This site can’t be reached
xx.xxx.xx.xx refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
These are the ports opened in my .yml file:
- "8000:80"</b>
- "443"</b>
- "22"</b>
I have also tried 8080:80 and 80:80 to no avail.
When I check docker port, it shows:
80/tcp -> 0.0.0.0:32770
80/tcp -> 0.0.0.0:8000
22/tcp -> 0.0.0.0:32771
443/tcp -> 0.0.0.0:443
When I check netstat from localhost and from another machine, I get:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:17600 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:17603 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::17500 :::* LISTEN -
udp 0 0 0.0.0.0:49953 0.0.0.0:* -
udp 22720 0 0.0.0.0:56225 0.0.0.0:* -
udp 52224 0 127.0.1.1:53 0.0.0.0:* -
udp 19584 0 0.0.0.0:68 0.0.0.0:* -
udp 46080 0 0.0.0.0:17500 0.0.0.0:* -
udp 214144 0 0.0.0.0:17500 0.0.0.0:* -
udp 35072 0 0.0.0.0:5353 0.0.0.0:* -
udp 9216 0 0.0.0.0:5353 0.0.0.0:* -
udp 0 0 0.0.0.0:631 0.0.0.0:* -
udp6 0 0 :::44824 :::* -
udp6 16896 0 :::5353 :::* -
udp6 3840 0 :::5353 :::* -
When I run docker ps I get:
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                                                                      NAMES
1c25a8707960   wordpress:latest   "docker-entrypoint.s…"   37 minutes ago   Up 37 minutes   0.0.0.0:443->443/tcp, 0.0.0.0:32771->22/tcp, 0.0.0.0:8000->80/tcp, 0.0.0.0:32770->80/tcp   wp-site_wordpress_1
96f3c136c746   mysql:5.7          "docker-entrypoint.s…"   37 minutes ago   Up 37 minutes   3306/tcp                                                                                   wp-site_wp-db_1
Also, I have both HTTP and HTTPS open on my Google Cloud firewall.
So if I am listening on port 80 and have it mapped to 8000 (the port I was connecting to the container on on my dev machine), I do not understand why I cannot get to the WP site in the browser. Any help would be greatly appreciated. I think I have included everything needed for this question; if there is anything else I will be more than happy to post it.
OK, so after a lot of tries I finally figured it out. In the .yml file I needed to take out the bare - "80" port entry, change - "8000:80" to - "80:80", and then remove the old containers and rebuild them.
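For reference, a minimal sketch of what the corrected ports mapping in the compose file could look like (the service name, compose file version, and the 443 mapping are assumptions, not taken from the question):
version: "3"
services:
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"
      - "443:443"
After editing, recreate the containers, e.g. with docker-compose down followed by docker-compose up -d.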

Apache Zeppelin only listening on tcp6

Because I have just started with Zeppelin, I am a bit lost.
I installed via this page: http://zeppelin.apache.org/docs/0.7.3/install/install.html
After installation, Zeppelin appears to listen only on a tcp6 socket on port 8080:
ubuntu@ip-10-0-1-164:~$ sudo netstat -lnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN   1176/sshd
tcp        0      0 0.0.0.0:3306       0.0.0.0:*          LISTEN   1203/mysqld
tcp6       0      0 :::8080            :::*               LISTEN   13719/java
tcp6       0      0 :::22              :::*               LISTEN   1176/sshd
udp        0      0 0.0.0.0:68         0.0.0.0:*                   1028/dhclient
I grepped all the installation files and didn't see where it gets its direction for the IP and port (other than the template files in conf).
I was wondering if anyone had some more knowledge of Zeppelin.
It's very simple:
Remove ".template" from the file name "zeppelin-site.xml.template".
Change the port in "zeppelin.server.port" (see the sketch below).
Restart Zeppelin.
Go to localhost:<new_port> in the browser.
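A sketch of the relevant properties in conf/zeppelin-site.xml after the template has been renamed; the values shown are only examples, and zeppelin.server.addr is the companion property that controls which address Zeppelin binds to:
<property>
  <name>zeppelin.server.addr</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>zeppelin.server.port</name>
  <value>8080</value>
</property>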
Actually, it was fine. It is listening for IPv4 connections as well, even though that's not shown. The issue was with my SSH port forwarding.
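To verify that the tcp6 socket on :::8080 really does accept plain IPv4 connections (the usual dual-stack behaviour of the JVM), a quick check from the server itself, assuming the default port 8080:
curl -4 -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/
A 200 (or a redirect code) here means IPv4 clients can reach Zeppelin, and any remaining problem lies between client and server, such as SSH port forwarding.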

EC2 VPC Instance - Ports are filtered

I've configured my server with a default security group, which has the following Inbound rules:
| Type    | Protocol | Port Range | Source    |
| All TCP | TCP      | 0-65535    | 0.0.0.0/0 |
| All UDP | UDP      | 0-65535    | 0.0.0.0/0 |
With these rules, netstat shows the following output:
netstat -atn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:1113 0.0.0.0:* LISTEN
tcp 0 0 10.0.1.31:2113 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:2113 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11300 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::5432 :::* LISTEN
tcp6 0 0 :::3306 :::* LISTEN
tcp6 0 0 :::6379 :::* LISTEN
So, in theory, I should be able to connect to port 1113 over TCP from any IP address. But this is not working; the port shows as filtered, as you can see in the following output:
The only ports that seem to be OK (open and not filtered) are 22 & 80. Here's the output I get when testing them with nmap:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
1113/tcp filtered ltp-deepspace
2113/tcp filtered unknown
3306/tcp filtered mysql
6379/tcp filtered unknown
I even tried adding a custom inbound rule just for my IP and Port 1113, but the result is the same.
I suspect that some firewall is blocking traffic on those ports on my instance, but I'm not sure how to check that.
One thing to note is that this instance is in an Amazon VPC. However, the network ACL for this instance has the following inbound rule, which should allow incoming communication on all ports:
| Rule # | Type        | Protocol | Port Range | Source    | Allow / Deny |
| 100    | ALL Traffic | ALL      | ALL        | 0.0.0.0/0 | ALLOW        |
Any ideas on what could be the issue here?
Thanks a lot for your help!
[I know this is an old post, but I was bitten by the very same thing just today and came across this very question. Expanded to add steps for Windows AMI]
Summary
When you fire up a new EC2 instance from a new AMI, there seem to be conditions where the local firewall is set to filter everything except SSH.
Now that might be the default on the newer AMIs, or something at work such as fail2ban or the like. If you are using a Windows AMI, this could be the Windows firewall.
The symptoms are as you describe: you have a public-facing IP address (either directly attached or via an Elastic IP), you have permissive security groups, and all is otherwise well. An nmap from another working server (NB: be careful, AWS doesn't like people running nmap from EC2 instances, even against your own servers) will show port 22 open but everything else filtered.
Linux
TL;DR: the quick fix is probably just to flush the rules:
iptables -F
Ideally, run this first to list what the offending rule is:
iptables -L
But you should have a good look at why it was set up that way. It's possible something like firewalld is running, which is going to monkey with the rules, and you have the choice of configuring it or disabling it. These will tell you whether it's running:
firewall-cmd --state
firewall-cmd --get-services
There are other firewall services, of course.
Once you think you have it right, make sure you reboot the server to ensure everything comes up correctly rather than reverting to a catatonic state (services-wise).
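If firewalld turns out to own the filtering rules, a gentler fix than flushing everything is to open only the ports you need; a sketch using the ports from the question:
firewall-cmd --permanent --add-port=1113/tcp
firewall-cmd --permanent --add-port=2113/tcp
firewall-cmd --reload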
Windows
If you are using a Windows AMI, you will need to adjust the firewall.
Go to Control Panel > System and Security > Windows Defender Firewall.
From here, you could turn it off and rely solely on your AWS security groups (not recommended) or selectively enable certain apps / ports.
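Alternatively, a single port can be opened from an elevated command prompt instead of turning the firewall off; for example, for the 1113 port from the question (the rule name is arbitrary):
netsh advfirewall firewall add rule name="Allow TCP 1113" dir=in action=allow protocol=TCP localport=1113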
For those who are still seeking an answer: it is because there is an additional firewall in your Linux system. For example, if you are using Ubuntu you probably need to run: sudo ufw disable.
See this link for more information.
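If you would rather keep ufw enabled, allowing the specific port is the safer alternative (1113 being the port from the question):
sudo ufw allow 1113/tcp
sudo ufw status verbose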
I know this is an old post, but I think it might help someone else too. I was running RHEL 7.6 and got this issue. I had to re-enable the firewall and add the ports to the firewall rules. Then it worked like a charm.
For a Windows AMI, this could be due to the Windows firewall being enabled. See my edits to @Miles_Gillham's answer for details.

Cannot access OpenVAS following installation

I am sure that once I find the issue I am going to feel like a fool, but I have been pouring a lot of debugging effort into something where I know the answer must be right there.
I have the same issue on two different 'new' CentOS machines: I install OpenVAS, run openvas-check-setup --server a whole bunch of times, and follow the instructions until it is error free; the ports light up, but I cannot connect.
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9390 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9391 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9392 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9393 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:9329 0.0.0.0:* LISTEN
I see the packets hit the server just fine:
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
10:32:27.119370 IP 10.20.10.47.ds-user > 10.180.10.51.9392: Flags [S], seq 2713892558, win 65535, options [mss 1460,nop,nop,sackOK], length 0
10:32:27.381288 IP 10.20.10.47.ds-mail > 10.180.10.51.9392: Flags [S], seq 2903829103, win 65535, options [mss 1460,nop,nop,sackOK], length 0
But the server never replies.
It's not a firewall:
[root@offtbn ~]# iptables-save
[root@offtbn ~]#
The firewall is empty.
I have tried all of the OpenVAS ports using http:// and https:// in every browser and from multiple machines.
The first OpenVAS server 'did' work for a day, but something changed, which is why I built the second machine from scratch. Both have the exact same issue and the exact same symptoms.
/etc/rc.d/init.d/openvas-administrator restart
/etc/rc.d/init.d/openvas-manager restart
/etc/rc.d/init.d/openvas-scanner restart
All run cleanly.
I am really stumped on this one.
The site was having network issues.
From what I could tell, a proxy was breaking headers, and somehow this external failure was affecting OpenVAS's ability to do a basic login.
I did an install on a different network with the exact same distro and everything went flawlessly.
I'm not sure of the exact cause.
