I have measured the WLAN connection of my embedded system with iperf, and I get a speed difference between the two directions.
I do not know how to interpret this, or whether it is something that needs fixing.
Test setup:

Embedded platform with USB WLAN stick (192.168.1.3):
- connected to the access point via WLAN
- running iperf -s (server)

Linux PC (192.168.1.2):
- connected to the access point via Ethernet (cable)
- running iperf -c .... -d (client)

Access point:
- used only for this measurement; no other traffic
According to https://serverfault.com/questions/566737/iperf-csv-output-format, I interpret the result as follows:
[4] client-server 8.13 Mbits/sec
[5] server-client 39.8 Mbits/sec
Why do I get this five-fold speed difference?
Tue Jan 27 09:11:58 CET 2015
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.3, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.1.2 port 36557 connected with 192.168.1.3 port 5001
[ 5] local 192.168.1.2 port 5001 connected with 192.168.1.3 port 33851
[ 4] 0.0-10.1 sec 9.80 MBytes 8.13 Mbits/sec
[ 5] 0.0-10.3 sec 48.9 MBytes 39.8 Mbits/sec
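One thing worth trying (not part of the original post): -d runs both directions simultaneously, so the two TCP streams compete for airtime on the half-duplex WLAN link, which can skew the per-direction numbers. iperf2's tradeoff mode runs the directions one after the other instead. A minimal sketch from the Linux PC, assuming the setup above:

# Sequential bidirectional test: first PC -> embedded, then embedded -> PC
iperf -c 192.168.1.3 -r -t 10

# Or measure a single direction on its own:
iperf -c 192.168.1.3 -t 10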
Related
We have Ubuntu Server installed on our desktop machine. It is connected to the modem via an Ethernet port. We can access it with SSH from inside our network, but not from outside.
Here is what we've done so far:
- We have a static IP (my professor arranged this; I don't know exactly what it is)
- Our Ubuntu Server machine always gets 192.168.1.200
- We have port forwarding set up
When I run ssh maviarge@213.XXXXXXX from our LAN, which holds the Ubuntu Server machine:
maviarge@213.XXXXXXX's password:
Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.4.0-104-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Thu 10 Mar 2022 08:45:36 AM UTC
System load:  0.07               Processes:                166
Usage of /:   2.0% of 438.13GB   Users logged in:          1
Memory usage: 2%                 IPv4 address for docker0: 172.17.0.1
Swap usage:   0%                 IPv4 address for enp3s0:  192.168.1.200
Temperature:  50.0 C
* Super-optimized for small spaces - read how we shrank the memory
footprint of MicroK8s to make it the smallest full K8s around.
https://ubuntu.com/blog/microk8s-memory-optimisation
0 updates can be applied immediately.
But when I run ssh -v maviarge@213.XXXXXXX from outside of our Wi-Fi:
OpenSSH_for_Windows_8.1p1, LibreSSL 3.0.2
debug1: Reading configuration data C:\\Users\\MaviArge/.ssh/config
debug1: Connecting to 213.XXXXXXX [213.XXXXXXX] port 22.
debug1: connect to address 213.XXXXXXX port 22: Connection timed out
ssh: connect to host 213.XXXXXXX port 22: Connection timed out
When I run ping 213.XXXXXXX from outside
Pinging 213.XXXXXXX with 32 bytes of data:
Reply from 213.XXXXXXX: bytes=32 time=67ms TTL=46
Reply from 213.XXXXXXX: bytes=32 time=97ms TTL=46
Reply from 213.XXXXXXX: bytes=32 time=107ms TTL=46
Reply from 213.XXXXXXX: bytes=32 time=124ms TTL=46
Ping statistics for 213.XXXXXXX:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 67ms, Maximum = 124ms, Average = 98ms
I saw the command sudo lsof -i:22 on the internet; here is the output:
sudo lsof -i:22
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 3290 root 4u IPv4 38814 0t0 TCP maviarge:ssh->host-213.XXXXXXX.reverse.superonline.net:58124 (ESTABLISHED)
sshd 3375 maviarge 4u IPv4 38814 0t0 TCP maviarge:ssh->host-213.XXXXXXX.reverse.superonline.net:58124 (ESTABLISHED)
sshd 4057 root 3u IPv4 71589 0t0 TCP *:ssh (LISTEN)
sshd 4057 root 4u IPv6 71591 0t0 TCP *:ssh (LISTEN)
sshd 5662 root 4u IPv4 74261 0t0 TCP maviarge:ssh->host-213.XXXXXXX.reverse.superonline.net:60472 (ESTABLISHED)
sshd 5746 maviarge 4u IPv4 74261 0t0 TCP maviarge:ssh->host-213.XXXXXXX.reverse.superonline.net:60472 (ESTABLISHED)
Also nmap scan:
Starting Nmap 7.92 ( https://nmap.org ) at 2022-03-10 05:17 EST
Nmap scan report for host-213.XXXXXXX.reverse.superonline.net (213.XXXXXXX)
Host is up (0.14s latency).
Not shown: 96 closed tcp ports (reset)
PORT STATE SERVICE
22/tcp filtered ssh
25/tcp filtered smtp
5060/tcp filtered sip
5432/tcp open postgresql
Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
What's wrong?
Have you tried this:
sudo ufw allow from any to any port 22 proto tcp
or
sudo ufw allow ssh
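If ufw turns out to be active, it is worth checking its state before and after the change; a quick sketch (assuming ufw is installed):

sudo ufw status verbose   # is the firewall active, and is 22/tcp allowed?
sudo ufw allow 22/tcp     # long form of "sudo ufw allow ssh"
sudo ufw reload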
I have a Google Compute Engine instance running CentOS 7, and I wrote up a quick test to try to communicate with it over port 9000 (from my home PC), but I'm unexpectedly getting network errors.
This happens both with my test script (which attempts to send a payload) and with plink.exe (which I'm just using to check port availability).
>plink.exe -v -raw -P 9000 <external_IP>
Connecting to <external_IP> port 9000
Failed to connect to <external_IP>: Network error: Connection refused
Network error: Connection refused
FATAL ERROR: Network error: Connection refused
I've added my external IP to Google's firewall (https://console.cloud.google.com/networking/firewalls) and set it to allow ingress traffic over port 9000 (it's the lowest priority, at 1000).
I also updated firewalld in CentOS to allow TCP traffic over the port:
Redirecting to /bin/systemctl start firewalld.service
[foo@bar ~]$ sudo firewall-cmd --zone=public --add-port=9000/tcp --permanent
success
[foo@bar ~]$ sudo firewall-cmd --reload
success
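To confirm the rule actually survived the reload, firewalld can list the ports open in the zone (a quick check, not part of the original post):

sudo firewall-cmd --zone=public --list-ports   # should now include 9000/tcp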
I've confirmed my listener is running on port 9000:
[foo@bar ~]$ netstat -npae | grep 9000
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1000 18381 1201/python3
By default, CentOS 7 doesn't use iptables (just to be sure, I confirmed it wasn't running)
Am I missing something?
NOTE: Actual external IP replaced with <external_IP> placeholder
Update:
If I nmap my listener on port 9000 from the CentOS 7 compute instance over a local IP, like 127.0.0.1, I get some results. Interestingly, if I make the same nmap call against the server's external IP, I get nothing. So this has to be a firewall, right?
External call:
[foo@bar ~]$ nmap <external_IP> -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 00:33 UTC
Nmap scan report for <external_IP>.bc.googleusercontent.com (<external_IP>)
Host is up (0.00043s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp closed ms-wbt-server
Nmap done: 1 IP address (1 host up) scanned in 4.87 seconds
Internal call:
[foo@bar ~]$ nmap 127.0.0.1 -Pn
Starting Nmap 6.40 ( http://nmap.org ) at 2020-05-25 04:36 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.010s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
9000/tcp open cslistener
Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds
In this case, the software running on the backend VM must listen on all interfaces (0.0.0.0 or ::). Yours is listening on 127.0.0.1:9000, and it should be 0.0.0.0:9000.
The way to fix that is to change the service configuration to listen on 0.0.0.0 instead of 127.0.0.1.
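As a quick way to see the difference (a throwaway sketch, not the asker's actual service): start a listener explicitly bound to all interfaces and re-check the bind address with netstat:

# (stop the original service on 9000 first to free the port)
python3 -m http.server 9000 --bind 0.0.0.0 &   # throwaway listener on all interfaces (Python 3.4+)
netstat -npae | grep 9000                      # local address should now show 0.0.0.0:9000, not 127.0.0.1:9000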
Cheers.
I use iperf2, running in server mode on an STM32 board and in client mode on a Windows PC.
I'd like to interpret the transfer bandwidth statistics in terms of upload and download.
iperf -c 192.168.21.25 -d -t 5 -f m:
[220] local 192.168.21.1 port 60602 connected with 192.168.21.25 port 5001
[252] local 192.168.21.1 port 5001 connected with 192.168.21.25 port 49155
[ ID] Interval Transfer Bandwidth
[252] 0.0- 5.0 sec 48.5 MBytes 81.3 Mbits/sec
[220] 0.0- 5.0 sec 23.1 MBytes 38.7 Mbits/sec
=========
iperf -c 192.168.21.25 -r -t 5 -f m
[216] local 192.168.21.1 port 60531 connected with 192.168.21.25 port 5001
[ ID] Interval Transfer Bandwidth
[216] 0.0- 5.0 sec 33.9 MBytes 56.8 Mbits/sec
[212] local 192.168.21.1 port 5001 connected with 192.168.21.25 port 49154
[212] 0.0- 5.0 sec 54.9 MBytes 92.1 Mbits/sec
What are the rules for identifying the upload and download bandwidth in these responses?
In each line, "local ... port <port>" gives the port on the machine that prints the output, while "connected with ... port <port>" gives the remote port. The iperf server listens on port 5001, so the stream whose remote port is 5001 carries traffic from the client to the server, and the stream whose local port is 5001 carries traffic from the server back to the client.
So in the first example, [220] is the traffic from the client to the server, and [252] is from the server to the client.
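If you would rather parse the direction programmatically than eyeball the ports, iperf2 can emit CSV with -y C; per the serverfault link in the first question, each row carries the local address/port and remote address/port. A sketch with the same hosts:

iperf -c 192.168.21.25 -d -t 5 -y C
# In the CSV rows, the stream whose remote port is 5001 is client -> server;
# the stream whose local port is 5001 is server -> client.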
I've installed netperf 2.6 on two sites and am trying to run the netperf benchmark, but all I'm getting is zero throughput. Does anyone know how to use netperf properly? (I was following the official documentation.)
I run this on the server:
./netserver -p xxxxx
the output is:
Starting netserver with host 'IN(6)ADDR_ANY' port 'xxxxx' and family AF_UNSPEC
On the other side I run:
./netperf -s 5 -H a.b.c.d -p xxxxx
The output is:
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to a.b.c.d () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    10.00       0.00
Any ideas?
A netperf test has two "connections." The first is the "control connection" over which information about the test setup and result is exchanged. For the benchmarking itself a "data connection" is used. The control connection will use the control port you've specified with the global "-p" option. The data connection will by default use a port number chosen by the networking stack where the netserver runs.
Both have to be open through firewalls for a test to be successful.
If only the control port is open, you will see the test banners get displayed because the control connection is established. Since the data connection cannot be established, that will report zero.
You can specify an explicit port number for the data connection with a test-specific "-P" option. So, if you opened a second port number, 9992, you would start the netserver as before, and then your netperf command would become:
./netperf -s 5 -H a.b.c.d -p xxxxx -- -P ,9992
That comma is important. The test-specific -P option allows specifying both the local and remote port numbers for the data connection. The remote port number follows a comma.
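If a firewall on the netserver side is what blocks the data connection, both ports have to be opened there. A sketch assuming firewalld; xxxxx stands for the elided control port from the question:

sudo firewall-cmd --add-port=xxxxx/tcp --permanent   # control port (replace xxxxx)
sudo firewall-cmd --add-port=9992/tcp --permanent    # data port selected via -- -P ,9992
sudo firewall-cmd --reload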
terminal1:
$ sudo netserver -D -4 -L 0.0.0.0 -p 9991
Starting netserver with host '0.0.0.0' port '9991' and family AF_INET
terminal2:
$ sudo netperf -H 192.168.2.103 -l 60 -t TCP_STREAM
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.103 (192.168.2.103) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380 524288  524288   60.02      89.66
Context: I have set up a demo cloud on my laptop using VirtualBox and have two virtual machines: one runs the client and the other the server. I created a small instance on the server; the running instance is TinyLinux.
Problem: How can I send data to that instance and store it there?
Some pointers would be very helpful.
Well, with libvirt you have several options for how to do the networking. The default is to use NAT. In that case libvirt creates a bridge and a virtual NIC for each virtual NIC configured this way:
$ brctl show
bridge name  bridge id          STP enabled  interfaces
virbr0       8000.525400512fc8  yes          virbr0-nic
                                             vnet0
It then sets up iptables rules to NAT (masquerade) the packets on that bridge:
Chain POSTROUTING (policy ACCEPT 19309 packets, 1272K bytes)
 pkts bytes target      prot opt in   out  source            destination
    8   416 MASQUERADE  tcp  --  any  any  192.168.122.0/24  !192.168.122.0/24  masq ports: 1024-65535
  216 22030 MASQUERADE  udp  --  any  any  192.168.122.0/24  !192.168.122.0/24  masq ports: 1024-65535
   11   460 MASQUERADE  all  --  any  any  192.168.122.0/24  !192.168.122.0/24
It enables forwarding:
# cat /proc/sys/net/ipv4/ip_forward
1
And it spawns a DHCP server (dnsmasq is both DHCP and DNS in one):
ps aux | grep dnsmasq
nobody 1334 0.0 0.0 13144 568 ? S Feb06 0:00 \
/sbin/dnsmasq --strict-order --local=// --domain-needed \
--pid-file=... --conf-file= --except-interface lo --bind-dynamic \
--interface virbr0 --dhcp-range 192.168.122.2,192.168.122.254 \
--dhcp-leasefile=.../default.leases --dhcp-lease-max=253 --dhcp-no-override
If there were two virtual network interfaces (two machines with one NIC each on the same network), there would be two NICs in that bridge. The machines get their addresses from the range 192.168.122.2-254, handed out by the dnsmasq DHCP server. So if you know those addresses, you should be able to connect from one VM to the other, as both are on the same broadcast domain (connected by the bridge). To the outside of your computer, the machines all appear as one IP address.
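To find out which addresses the VMs actually received, newer libvirt versions can print the DHCP leases directly (older ones can read the default.leases file shown above):

$ virsh net-dhcp-leases default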
The more "advanced" option is to use bridged networking, which again puts the virtual interfaces into one bridge, but adds a physical device to it as well, so to the rest of the network the machines appear as if they were physically connected to a switch...
I usually bind a web server to the gateway interface the VMs use to NAT with the physical host.
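If you instead need to reach a NATed VM from outside the physical host, a DNAT rule on the host can forward a chosen port to the VM. A hypothetical sketch; the VM address 192.168.122.100 and host port 2222 are made up for illustration:

# Forward TCP port 2222 on the host to SSH on the VM at 192.168.122.100
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.100:22
# Make sure the forwarded traffic is accepted ahead of libvirt's reject rules
iptables -I FORWARD -d 192.168.122.100 -p tcp --dport 22 -j ACCEPT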