I'm trying to debug a new server I ordered at OVH.com, and they insist everything is working properly even though it times out when doing a curl request toward, for example, github.com (it times out in around 9 of 10 tries):
curl -L -v https://github.com
I get
* Rebuilt URL to: https://github.com/
* Trying 140.82.118.4...
* connect to 140.82.118.4 port 443 failed: Connection timed out
* Failed to connect to github.com port 443: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to github.com port 443: Connection timed out
Even when I set up an NGINX server, the site times out on almost every second request.
So I thought perhaps the DHCP server could be the issue, so I checked it and I see this (from /var/lib/dhcp/...):
lease {
  interface "ens4";
  fixed-address 10.0.X.XX;
  option subnet-mask 255.255.255.0;
  option routers 10.0.X.X;
  option dhcp-lease-time 86400;
  option dhcp-message-type 5;
  option dhcp-server-identifier 10.0.X.X;
  option domain-name-servers 10.0.X.X;
  renew 6 2020/03/28 02:16:19;
  rebind 6 2020/03/28 13:47:57;
  expire 6 2020/03/28 16:47:57;
}
lease {
  interface "ens4";
  fixed-address 10.0.X.XX;
  option subnet-mask 255.255.255.0;
  option routers 10.0.X.X;
  option dhcp-lease-time 86400;
  option dhcp-message-type 5;
  option dhcp-server-identifier 10.0.X.X;
  option domain-name-servers 10.0.X.X;
  renew 5 2020/03/27 16:51:54;
  rebind 5 2020/03/27 16:51:54;
  expire 5 2020/03/27 16:51:54;
}
I tried getting a new lease with the command below, but nothing changes; it's still the same as above:
sudo dhclient -r
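Note that `dhclient -r` only releases the current lease; it does not request a new one. To force a renewal, the client has to be run again against the interface (a sketch, assuming the interface is ens4 as in the lease file above):

```shell
sudo dhclient -r ens4   # release the current lease
sudo dhclient -v ens4   # request a fresh lease, with verbose output
```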
Am I reading the DHCP lease wrong, or does it look normal? For the record, the public IP on this dedicated server starts with 5, not 1, and it runs Ubuntu 16.04 LTS.
Which offer do you have at OVH? They usually don't give private IPs to dedicated servers or virtual private servers, so that's quite odd.
You may want to collect some traces to check what is going wrong, with tools like:
tcptraceroute, to check whether the path to a domain on port 80 or 443 looks strange
ping, to see if there is packet loss
tcpdump, to capture raw network packets while a timeout is occurring, to see what's going on
That's a good start, and it may also help you go back to OVH support and prove to them that there's something wrong.
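A minimal sketch of those checks (8.8.8.8 is just an example ping target, ens4 the interface from the lease file; tcptraceroute usually needs to be installed separately):

```shell
# does the path to github.com on port 443 look odd?
sudo tcptraceroute github.com 443

# is there packet loss? (100 probes, summary at the end)
ping -c 100 8.8.8.8

# capture raw packets on port 443 while reproducing the timeout;
# inspect the capture afterwards with tcpdump -r or wireshark
sudo tcpdump -i ens4 -w capture.pcap port 443
```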
I need to load balance requests based on the requested URI. E.g.:
requests to http://website.com/* should go to web-server1, 2 and 3.
requests to http://website.com/api should go to api-server1, 2 and 3.
Currently, no matter the path/URI, all requests go to web-server-1 to 3. This is how it is set up on all 3 of my haproxy hosts:
frontend fe
    default_backend web-servers

backend web-servers
    balance leastconn
    server web-server-1 1.1.1.1:80 check weight 1
    server web-server-2 1.1.1.2:80 check weight 1
    server web-server-3 1.1.1.3:80 check weight 1
Both the web and api services are running on the same hosts (i.e., web-server-1 to 3), with JBoss. Recently I decided to split the web and api services so I could load balance according to the URI, as I mentioned in the beginning.
So, now I have a total of 6 servers:
web-server-1 to 3 (1.1.1.1-3:80)
api-server-1 to 3 (1.1.1.4-6:8088)
To do this I came up with 2 different options:
1) add 3 nginx hosts. The haproxy configuration would look like this:
backend nginx-servers
    balance leastconn
    server nginx-1 1.1.1.7:80 check weight 1
    server nginx-2 1.1.1.8:80 check weight 1
    server nginx-3 1.1.1.9:80 check weight 1
And now each nginx host routes based on the URI, such as:
upstream web-servers {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
    server 1.1.1.3:80;
}

upstream api-servers {
    server 1.1.1.4:8088;
    server 1.1.1.5:8088;
    server 1.1.1.6:8088;
}

server {
    location ~ "/" {
        proxy_pass http://web-servers;
        proxy_set_header Host $host;
    }

    location ~ "/api" {
        proxy_pass http://api-servers;
    }
}
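One detail worth flagging in the config above (a general nginx fact, not something stated in the original post): regex locations are tried in their order of appearance and the first match wins, so with `location ~ "/"` declared first, a request for /api also matches it and is proxied to web-servers. Prefix locations sidestep the ordering question, since the longest matching prefix wins regardless of declaration order; a sketch:

```nginx
server {
    # longest matching prefix wins, independent of order
    location /api {
        proxy_pass http://api-servers;
    }

    location / {
        proxy_pass http://web-servers;
        proxy_set_header Host $host;
    }
}
```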
2) the alternative using only haproxy would be:
frontend fe
    acl website_domain req.hdr(host) -i website.com
    acl route_api path -i -m beg /api
    use_backend api-servers if route_api
    use_backend web-servers if website_domain !route_api

backend web-servers
    balance leastconn
    server web-server-1 1.1.1.1:80 check weight 1
    server web-server-2 1.1.1.2:80 check weight 1
    server web-server-3 1.1.1.3:80 check weight 1

backend api-servers
    balance leastconn
    server api-server-1 1.1.1.4:8088 check weight 1
    server api-server-2 1.1.1.5:8088 check weight 1
    server api-server-3 1.1.1.6:8088 check weight 1
However, with this second option, when I access http://website.com/ all my API requests return HTTP 404. How is this second approach different from the first one (which actually works)?
I use haproxy as a TCP balancer for my servers. There are a few hundred connections with a non-zero Send-Q between haproxy and clients, and there are many 'cD' flags in the haproxy log. Many server responses now reach clients very slowly (more than 10 seconds). Is this caused by clients not receiving data? Is the haproxy server not working properly, or has it reached a bandwidth limit? What can I do to find the reason?
# 455 non-zero Send-Q connections
ubuntu@ip-172-31-19-218:~$ netstat -atn | awk '{if($3>0) print $0}' | wc -l
455

# Top five Send-Q connections
ubuntu@ip-172-31-19-218:~$ netstat -atn | awk '{if($3>0) print $0}' | sort -k3nr | head -n 5
tcp 0 27292 172.31.19.218:12135 :47685 ESTABLISHED
tcp 0 22080 172.31.19.218:12135 :11817 ESTABLISHED
tcp 0 21886 172.31.19.218:12135 :12755 ESTABLISHED
tcp 0 21584 172.31.19.218:12135 :8753 ESTABLISHED
# Many 'cD' flags in the haproxy log
ubuntu@ip-172-31-19-218:/var/log$ awk '{print $12}' haproxy.log | sort | uniq -c
      3
   7525 --
   4687 cD
    526 CD
      1 /run/haproxy.pid
      3 SD

# Some 'cD' flag log lines
[27/Sep/2017:10:04:11.791] game nodes/s23 1/1/424425 34577 cD 4130/4130/4130/154/0 0/0
[27/Sep/2017:10:09:59.272] game nodes/s34 1/0/77777 3387 cD 4129/4129/4129/165/0 0/0
[27/Sep/2017:09:55:18.557] game nodes/s13 1/0/958654 84303 cD 4128/4128/4128/173/0 0/0
[27/Sep/2017:10:09:34.121] game nodes/s15 1/0/103309 3573 cD 4127/4127/4127/168/0 0/0
# haproxy config
ubuntu@ip-172-31-19-218:/var/log$ cat /etc/haproxy/haproxy.cfg
global
    daemon
    maxconn 200000
    log 127.0.0.1 local0

defaults
    maxconn 200000
    timeout connect 5000
    timeout client 60000
    timeout server 60000

listen game
    bind *:12135
    mode tcp
    option tcplog
    log global
    balance roundrobin
    default_backend nodes

backend nodes
    server s11 172.31.20.23:12137
    ....
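For reference, in the termination state 'cD' the first letter gives the cause (c: a client-side timeout expired) and the second the session state at that moment (D: the data transfer phase), which fits responses stalling on the way to clients. The tally above can be reproduced on raw log lines too; note the field position depends on the syslog prefix (field 12 in the logfile, field 6 in the bare excerpt). A sketch using the sample lines from the question:

```shell
# tally termination states from bare log lines (field 6 here,
# field 12 in the syslog-prefixed /var/log/haproxy.log)
cat <<'EOF' | awk '{print $6}' | sort | uniq -c | sort -rn
[27/Sep/2017:10:04:11.791] game nodes/s23 1/1/424425 34577 cD 4130/4130/4130/154/0 0/0
[27/Sep/2017:10:09:59.272] game nodes/s34 1/0/77777 3387 cD 4129/4129/4129/165/0 0/0
[27/Sep/2017:09:55:18.557] game nodes/s13 1/0/958654 84303 SD 4128/4128/4128/173/0 0/0
EOF
```

For the three sample lines this prints a count of 2 for cD and 1 for SD.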
I want to reject the connection if a user spams the server with requests. My current config looks like this:
frontend http_front
    bind *:80
    log global
    stick-table type ip size 1m expire 1m store gpc0,http_req_rate(10s)
    # Increase gpc0 if requests in the last 10s were greater than 10
    acl conn_rate_abuse src_http_req_rate gt 10
    acl mark_as_abuser src_inc_gpc0 gt 0
    tcp-request connection track-sc1 src
    # Reject if gpc0 is greater than 0
    tcp-request connection reject if conn_rate_abuse mark_as_abuser
    default_backend http_back
The socket output looks like this:
0x1e455c0: key=10.23.27.55 use=0 exp=51149 gpc0=0 http_req_rate(10000)=422
What am I doing wrong?
Edit:
With the code below it works, but shouldn't it work with only the code above?
backend http_back
    balance roundrobin
    acl abuse src_http_req_rate(http_front) ge 10
    tcp-request content reject if abuse
    server test1 ip1:80 check
    server test2 ip2:80 check
HA-Proxy version 1.6.4 2016/03/13
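A likely explanation (hedged; it matches the observed behaviour but hasn't been verified against this exact version): `tcp-request connection` rules run once per TCP connection, when it is accepted, so a client that keeps sending requests over a single keep-alive connection is never re-checked. Evaluating the limit per HTTP request avoids that; a minimal sketch, reusing the `http_front`/`http_back` names from the config above:

```
frontend http_front
    bind *:80
    stick-table type ip size 1m expire 1m store http_req_rate(10s)
    # track and check on every HTTP request, not once per connection
    http-request track-sc0 src
    http-request deny if { sc_http_req_rate(0) gt 10 }
    default_backend http_back
```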
I am setting up simple TCP connection routing using HAProxy ACLs. The idea is to route connections depending on request content, which comes in two flavors: read and write requests.
For testing I made a simple TCP client/server setup using Perl. Strangely enough, about 10-40% of the ACLs fail to match and the connections are sent to the default backend.
The ACLs should find the substring 'read' or 'write' and route accordingly, but this is not always the case.
Sending a read/write request using nc (netcat) has the same effect.
I tested this configuration with mode=http and everything works as expected.
I also tested with reg, sub and bin, to no avail.
The example server setup is as follows:
HAProxy instance, listens on port 8000
Client (creates tcp connection to proxy and sends user input (read/write string) to server through port 8000, after which it closes the connection)
Server1 (write server), listens on port 8001
Server2 (read server), listens on port 8002
Server3 (default server), listens on port 8003
My HAProxy configuration file looks like this:
global
    log /dev/log local0 debug
    #daemon
    maxconn 32

defaults
    log global
    balance roundrobin
    mode tcp
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend tcp-in
    bind *:8000
    tcp-request inspect-delay 3s
    acl read req.payload(0,4) -m sub read
    acl write req.payload(0,5) -m sub write
    use_backend read_servers if read
    use_backend write_server if write
    default_backend testFault

backend write_server
    server server1 127.0.0.1:8001 maxconn 32

backend read_servers
    server server2 127.0.0.1:8002 maxconn 32

backend testFault
    server server3 127.0.0.1:8003 maxconn 32
The client code (in perl):
use IO::Socket::INET;

# auto-flush on socket
#$| = 1;

print "connecting to the server\n";

while (<STDIN>) {
    # create a connecting socket
    my $socket = new IO::Socket::INET (
        PeerHost => 'localhost',
        PeerPort => '8000',
        Proto    => 'tcp',
    );
    die "cannot connect to the server $!\n" unless $socket;

    # data to send to the server
    $req = $_;
    chomp $req;
    $size = $socket->send($req);
    print "sent data of length $size\n";

    # notify the server that the request has been sent
    shutdown($socket, 1);

    # receive a response of up to 1024 characters from the server
    $response = "";
    $socket->recv($response, 1024);
    print "received response: $response\n";

    $socket->close();
}
The server (perl code):
use IO::Socket::INET;

if (!$ARGV[0]) {
    die("Usage: specify a port..");
}

# auto-flush on socket
$| = 1;

# create a listening socket
my $socket = new IO::Socket::INET (
    LocalHost => '0.0.0.0',
    LocalPort => $ARGV[0],
    Proto     => 'tcp',
    Listen    => 5,
    Reuse     => 0
);
die "cannot create socket $!\n" unless $socket;
print "server waiting for client connection on port $ARGV[0]\n";

while (1) {
    # wait for a new client connection
    my $client_socket = $socket->accept();

    # get information about the newly connected client
    my $client_address = $client_socket->peerhost();
    my $client_port = $client_socket->peerport();
    print "connection from $client_address:$client_port\n";

    # read up to 1024 characters from the connected client
    my $data = "";
    $client_socket->recv($data, 1024);
    print "received data: $data\n";

    # write response data to the connected client
    $data = "ok";
    $client_socket->send($data);

    # notify the client that the response has been sent
    shutdown($client_socket, 1);
    $client_socket->close();
    print "Connection closed..\n\n";
}

$socket->close();
Binary data in haproxy is tricky; this is possibly a bug, but the following worked for me on haproxy 1.7.9.
I am trying to build a thrift proxy server that can route to the appropriate backend based on the user_id in the payload.
frontend thriftrouter
    bind *:10090
    mode tcp
    option tcplog
    log global
    log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq captured_user:%[capture.req.hdr(0)] req.len:%[capture.req.hdr(1)]"

    tcp-request inspect-delay 100ms
    tcp-request content capture req.payload(52,10) len 10
    tcp-request content capture req.len len 10
    tcp-request content accept if WAIT_END

    acl acl_thrift_call req.payload(2,2) -m bin 0001      # Thrift CALL method
    acl acl_magic_field_id req.payload(30,2) -m bin 270f  # Magic field number 9999

    # Define an access control list for each user
    acl acl_user_u1 req.payload(52,10) -m sub |user1|
    acl acl_user_u2 req.payload(52,10) -m sub |user2|

    # Route based on the user. No default backend, so a backend always has to be set explicitly
    use_backend backend_1 if acl_user_u1 acl_magic_field_id acl_thrift_call
    use_backend backend_2 if acl_user_u2 acl_magic_field_id acl_thrift_call
When matching binary data in an ACL, make sure you're looking at the right number of bytes for the substring match to work properly, or use the hex conversion method and match on hex bytes.
Don't I feel silly. Re-reading the HAProxy documentation, I found the following directive (fetch method) that fixes the issue:
tcp-request content accept if WAIT_END
That solved the unexpected behaviour.
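In context, the fix slots into the frontend from the question like this (a sketch; WAIT_END only becomes true once the inspection delay has expired, so the payload ACLs are no longer evaluated against a partially received buffer):

```
frontend tcp-in
    bind *:8000
    tcp-request inspect-delay 3s
    # wait until the inspect-delay has elapsed, so the full
    # payload is in the buffer before the ACLs are evaluated
    tcp-request content accept if WAIT_END
    acl read req.payload(0,4) -m sub read
    acl write req.payload(0,5) -m sub write
    use_backend read_servers if read
    use_backend write_server if write
    default_backend testFault
```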