VPN connection results in extremely slow connection (OpenVPN)

I am currently in the US and connecting to a research institute in India using OpenVPN. The client config file uses TCP; I have tried UDP as well.
My issue is that my connection degrades to about 1 Mbit/s when I connect to the VPN (see the speedtest results below). Please suggest any ways to improve this. I have read that many people have had this problem and that there is no single fix. I tried suggestions from various posts, such as https://serverfault.com/questions/686286/very-low-tcp-openvpn-throughput-100mbit-port-low-cpu-utilization, to change the buffer sizes and txqueuelen. I also set
sndbuf 0
rcvbuf 0
together with txqueuelen = 4000, but there was no improvement in connection speed (I have also tried other combinations of these variables). The MTU is set to 1500.
The server runs CentOS 7 and I am using Ubuntu 18.04.6 LTS. The version of OpenVPN I am using: OpenVPN 2.5.7 x86_64-pc-linux-gnu.
(I am new to the technicalities of VPN even though I have used it before.)
Speedtest without VPN:
Testing from University of <hidden> ...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Bresco Broadband (Columbus, OH) [263.97 km]: 23.339 ms
Testing download speed..........................................
Download: 542.38 Mbit/s
Testing upload speed............................................
Upload: 611.33 Mbit/s
Speedtest with VPN:
Retrieving speedtest.net configuration...
Testing from <hidden> Communications (<hidden IP address>)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by BBNL (Bangalore) [2.23 km]: 649.717 ms
Testing download speed................................................................................
Download: 0.96 Mbit/s
Testing upload speed................................................................................................
Upload: 2.19 Mbit/s
OpenVPN client config file:
dev tun
proto TCP
persist-tun
persist-key
cipher AES-256-CBC
ncp-ciphers AES-256-GCM:AES-128-GCM
auth SHA1
tls-client
client
resolv-retry infinite
remote <hidden> 443 tcp
verify-x509-name "<hidden>-VPN" name
auth-user-pass
pkcs12 pfSense-TCP4-443-<username hidden>.p12
tls-auth pfSense-TCP4-443-<username hidden>-tls.key 1
remote-cert-tls server
sndbuf 512000
rcvbuf 512000
txqueuelen 1000

I was able to find this link, which explains why this may be the case and offers a possible solution. Here is a reddit thread that offers a few more solutions in the comments. However, while researching I found that this is a common issue with OpenVPN (many articles/threads discuss it slowing down or having speed issues).
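One widely cited cause is the TCP-over-TCP "meltdown" effect: the tunnel's own TCP retransmissions interact badly with the inner connections' retransmissions, which hurts especially on a ~650 ms path like this one. If the server also exposes a UDP listener, a sketch of client-side changes worth testing (the port 1194 and the specific values here are assumptions, not taken from the actual server config):

```
proto udp
remote <hidden> 1194 udp               # hypothetical UDP port; the server must offer one
mssfix 1400                            # clamp the MSS of TCP flows inside the tunnel
data-ciphers AES-256-GCM:AES-128-GCM   # prefer AES-GCM, faster where AES-NI is available
```

With OpenVPN 2.5, data-ciphers is the current name for what the config above calls ncp-ciphers; whether UDP is available at all depends on the pfSense server configuration.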

Related

Why can't I connect more than 8000 clients to MQTT brokers via HAProxy?

I am trying to establish 10k client connections (potentially 100k) to my 2 MQTT brokers using HAProxy as a load balancer.
I have a working simulator (using the Java Paho library) that can simulate 10k clients. On the same machine I run the 2 MQTT brokers in Docker. For the LB I am using another machine with a virtual image of Ubuntu 16.04.
When I connect directly to an MQTT broker, those connections are established without a problem, but when I go through HAProxy I only get around 8.8k connections, while the rest throw: Error at client{insert number here}: Connection lost (32109) - java.net.SocketException: Connection reset. When I connect the simulator directly to a broker (same machine), about 20k TCP connections open, but when I use the load balancer only 17k do. This leaves me thinking that the LB is causing the problem.
It is also worth noting that whenever I run the simulator I am unable to use the browser (cannot connect to the internet). I haven't tested whether this is browser-only, but could it mean that I am actually running out of ports or something similar, and the real issue here is not in the LB?
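The port-exhaustion suspicion is easy to sanity-check. Assuming the default Linux ephemeral port range of 32768-60999 (verify yours with sysctl net.ipv4.ip_local_port_range), a single source IP can hold roughly this many concurrent connections to one destination IP:port:

```shell
# each outbound connection to the same destination IP:port consumes one local
# port from the ephemeral range, so the ceiling per (source IP, dest IP:port) is:
echo $(( 60999 - 32768 + 1 ))
```

At roughly 28k ports per source/destination pair, 10k simulator connections to one HAProxy frontend fit, but pushing toward 100k from one machine would require multiple source IPs or a widened range.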
Here is my HAProxy configuration:
global
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 500000
    ulimit-n 500000
    maxpipes 500000

defaults
    log global
    mode http
    timeout connect 3h
    timeout client 3h
    timeout server 3h

listen mqtt
    bind *:8080
    mode tcp
    option tcplog
    option clitcpka
    balance leastconn
    server broker_1 address:1883 check
    server broker_2 address:1884 check

listen stats
    bind 0.0.0.0:1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
This is what MQTT broker shows for every successful/unsuccessful connection
...
//Successful connection
1613382861: New connection from xxx:32850 on port 1883.
1613382861: New client connected from xxx:60974 as 356 (p2, c1, k1200, u'admin').
...
//Unsuccessful connection
1613382699: New connection from xxx:42861 on port 1883.
1613382699: Client <unknown> closed its connection.
...
And this is what ulimit -a shows on LB machine.
core file size (blocks) (-c) 0
data seg size (kb) (-d) unlimited
scheduling priority (-e) 0
file size (blocks) (-f) unlimited
pending signals (-i) 102355
max locked memory (kb) (-l) 82000
max memory size (kb) (-m) unlimited
open files (-n) 500000
POSIX message queues (bytes) (-q) 819200
real-time priority (-r) 0
stack size (kb) (-s) 8192
cpu time (seconds) (-t) unlimited
max user processes (-u) 500000
virtual memory (kb) (-v) unlimited
file locks (-x) unlimited
Note: The LB process has the same limits.
I followed various tutorials and increased the open file limit as well as the port limit, TCP header size, etc. The number of connected users increased from 2.8k to about 8.5-9k (which is still way lower than the 300k the author of the tutorial had). The ss -s command shows about 17000 TCP and inet connections.
Any pointers would greatly help!
Thanks!
You can't do a normal LB of MQTT traffic, as you can't "pin" the connection based on the MQTT topic. If you send a SUBSCRIBE to Broker1 for topic "test/blatt/#", but the next client PUBLISHes "test/blatt/foo" to Broker2, then if the two brokers are not bridged, your first subscriber will never get that message.
If your clients terminate the TCP connection sometime after the CONNECT, or HAProxy round-robins the packets between the two brokers, you will get errors like this. You need to somehow persist the connections, and I don't know how you do that with HAProxy. Non-free LBs like A10 Thunder or F5 LTM can persist TCP connections... but you still need the MQTT brokers bridged for it all to work.
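For completeness, HAProxy can at least pin each client to one broker by hashing the source address, so a given client keeps hitting the same broker for the lifetime of its connections. This is a sketch of the change, not a tested MQTT setup, and it does nothing to bridge topics between the brokers:

```
listen mqtt
    bind *:8080
    mode tcp
    balance source      # hash the client IP: the same client always reaches the same broker
    server broker_1 address:1883 check
    server broker_2 address:1884 check
```

Note that a single simulator machine presents one source IP, so under balance source all of its connections would land on one broker; this helps with real, distributed clients.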
Turns out I was running out of resources on my computer.
I moved the simulator to another machine and managed to get 15k connections running. Due to resource limits I can't get more than that. The computer running the server side uses 20 of 32 GB of RAM, and the computer running the simulator used all 32 GB for approximately 15k devices. Now I see why running both on the same computer is not an option.

How to signal RPi-WebRTC-Streamer External IP address to the coTurn server?

At the moment, my RWS (RPi-WebRTC-Streamer) application works on my local network. I am now trying to connect it to my hosted coTURN server.
My main_rws_orig.js is pointing at my coTurn server:
var localTestingUrl = "ws://10.0.0.11:8889/rws/ws";
//var pcConfig = {"iceServers": [{"urls": "stun:stun.l.google.com:19302"}]};
var pcConfig = {"iceServers": [{"urls": "stun:172.104.xxx.xxx:3478"}]};
Using https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/
I get the following when testing TURN and STUN:
0.009 rtp host 3376904655 udp c7f50fee-cbd0-4332-ae51-a013c4d35c5e.local 41771 126 | 30 | 255
0.091 rtp srflx 842163049 udp 42.116.95.19 41771 100 | 30 | 255
0.158 rtp relay 3617893847 udp 172.104.xxx.xxx 17857 2 | 30 | 255
39.809 Done
39.811
My coTurn web configuration tool is working also.
I have read about a signalling server but have not found much documentation on it. I am just trying to figure out how to finish: how do I connect my RWS application to the outside world using coTURN?
Any tips or information will be greatly appreciated.
A signalling server is basically a service that relays the ICE candidates between the peers of your conversation. Usually it uses WebSockets for this communication. The ICE candidates may include the coTURN server credentials you provide to the WebRTC object in JavaScript, but you need to share all candidates between the participants, and for THIS you need the signalling server. You can use any language that supports full WebSocket communication, such as NodeJS or Java (not PHP!).
Take a look at this article, it describes this very well: https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/#how-can-i-build-a-signaling-service
Unless you explicitly want peer-to-peer WebRTC, I also recommend taking a look at the tutorials for Kurento Media Server to get a better understanding of this principle; there, the NodeJS/Java code does the signalling between Kurento and your browser. Please note that a media server always sits in the middle between the participants, which has advantages, like reducing each participant's network usage and recording the whole conversation on the media server, but also disadvantages, like no end-to-end encryption.
NodeJS example: https://doc-kurento.readthedocs.io/en/6.14.0/tutorials/node/tutorial-one2one.html
Java example: https://doc-kurento.readthedocs.io/en/6.14.0/tutorials/java/tutorial-one2one.html

Solaris tcp_time_wait_interval configuration

On my Solaris server I have an HTTP server which handles many incoming connections. My server logic closes the client connections itself, so many connections in TIME_WAIT status appear when I run netstat -an on the server.
So I changed tcp_time_wait_interval to 10 seconds with the command:
ndd -set /dev/tcp tcp_time_wait_interval 10000
But the user guide says: "Do not set the value lower than 60 seconds."
Does anyone know why Oracle recommends that?
The user guide URL is : http://docs.oracle.com/cd/E19455-01/806-6779/chapter4-51/index.html
I was told by an Oracle engineer that, with a very heavy transaction load (thousands/sec), it can be set as low as 10 ms on Solaris 11/11.1.
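The 60-second floor exists because TIME_WAIT is meant to outlive stray segments from the old connection (on the order of 2x the maximum segment lifetime); shorten it and a delayed packet can be accepted by a new connection reusing the same address/port 4-tuple. The trade-off is capacity: at steady state, the number of sockets parked in TIME_WAIT is roughly the close rate times the interval. A rough estimate, assuming a hypothetical 1000 closes/sec and the recommended 60 s interval:

```shell
# sockets sitting in TIME_WAIT at steady state ~= closes per second * wait interval
echo $(( 1000 * 60 ))
```

At a 10-second interval the same load leaves only ~10,000 sockets in TIME_WAIT, which is why lowering it is tempting on busy servers, but the safety margin against delayed segments shrinks proportionally.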

Warning in Jetty 9 Http requests - Under Utilization of N/W in local loopback

I am trying to run Jetty 9 on Ubuntu 12.10 (32-bit). The JVM I am using is JDK 1.7.0_40. I have set up a REST service on my server that uses RestLib. The REST service is a POST method that just receives the data, does no processing with it, and responds with a success.
I want to see the maximum load the Jetty 9 server will take with the given resources. I have an Intel i5 box with 8 GB of memory. I have set up JMeter to test this REST service against localhost. I know this is not advisable, but I would like to know this number (just out of curiosity).
When I run JMeter to test this POST method with 1 MB of payload data in the body, I get a throughput of around 20 (for 100 users).
To begin with, I measured the bandwidth using iperf:
iperf -c 127.0.0.1 -p 8080
------------------------------------------------------------
Client connecting to 127.0.0.1, TCP port 8080
TCP window size: 167 KByte (default)
[ 3] local 127.0.0.1 port 44130 connected with 127.0.0.1 port 8080
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 196 MBytes 165 Mbits/sec
The 165 Mbits/sec figure seems ridiculously small to me, but that's one observation.
I ran the server with the StatisticsHandler enabled and observed the request mean time, while also watching system resources with the nmon monitoring tool.
CPU usage was around 20% overall, there was 4 GB of free memory, and the server had around 200 threads (monitored using jconsole; I had specified a max thread count of 2000 in the start.ini file).
Jmeter was configured to bombard repeatedly.
I observed the network usage on the local loopback interface in nmon and it was around 30 MB/s. This was in line with the iperf data quoted earlier.
I tried the same experiment with WebLogic (using JDK 1.6) and it used nearly 250 MB/s on the lo interface. I had explicitly disabled TCP SYN cookies in the sysctl config to avoid the system throttling the test as a suspected DoS attack.
Please help me make sense of these numbers. Am I missing something in the config? The network seems to be the limiting factor here, but since it is a loopback interface there is no physical limitation, as the WebLogic case proves.
Please help me understand what I am doing wrong in the Jetty 9 case.
Also I am getting this warning in Jetty9 logs very frequently
WARN:oejh.HttpParser:qtp14540840-309: Parsing Exception: java.lang.IllegalStateException: too much data after closed for HttpChannelOverHttp#1dee6d3{r=1,a=IDLE,uri=-}
This question is effectively being answered on this mailing list thread:
http://dev.eclipse.org/mhonarc/lists/jetty-users/msg03906.html
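As a quick sanity check on the numbers in the question (assuming the JMeter "throughput of around 20" is requests per second): 20 POSTs/sec with a 1 MB body works out to roughly the same rate iperf measured over loopback, which suggests the load generator and the single-stream loopback path, not Jetty, may be the ceiling:

```shell
# ~20 requests/sec * 1 MB per request * 8 bits per byte ~= Mbit/s over loopback
echo $(( 20 * 1 * 8 ))
```

That is within a few percent of the 165 Mbit/s iperf reported, so the two measurements are consistent with each other rather than pointing at a Jetty misconfiguration.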

fsc.exe is very slow because it tries to access crl.microsoft.com

When I run the F# compiler - fsc.exe - on our build server, it takes ages (~20 sec) to run even when there are no input files. After some investigation I found out that this is because the application tries to access crl.microsoft.com (probably to check whether some certificates have been revoked). However, the account it runs under does not have access to the Internet, and because our routers/firewalls just drop the SYN packets, fsc.exe retries several times before giving up.
The only solution that comes to mind is to point crl.microsoft.com at 127.0.0.1 in the hosts file, but that is a pretty nasty solution. Moreover, I will need fsc.exe on our production box, where I can't do such things. Any other ideas?
Thanks
I've come across this myself - here are some links to better descriptions and some alternatives:
http://www.eggheadcafe.com/software/aspnet/29381925/code-signing-performance-problems-with-certificate-revocation-chec.aspx
I dug this up from an old MS KB for Exchange when we hit it... Just get the DNS server to reply as stated (might be the solution for your production box):
MS Support KB
"The CRL check is timing out because it never receives a response. If a router were to send a "no route to host" ICMP packet or similar error instead of just dropping the packets, the CRL check would fail right away, and the service would start. You can add an entry for crl.microsoft.com in the hosts file or on the DNS server and send the packets to a legitimate location on the network, such as 127.0.0.1, which will reject the connection..."
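A commonly documented alternative for .NET executables (which fsc.exe is) is to disable generation of publisher evidence in the application's config file; the Authenticode check that triggers the crl.microsoft.com lookup is skipped entirely. A sketch of an fsc.exe.config placed next to the compiler binary (worth verifying against your .NET version before relying on it):

```
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
```

Unlike the hosts-file trick, this travels with the tool, so it can be deployed to the production box without touching DNS or network configuration.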
