AB load testing on local ip or domain name? - nginx

I am using DigitalOcean as a VPS for my web server.
I added a second droplet with Ubuntu 18 that is part of the same private network (a DigitalOcean feature) as the web server.
I am using Cloudflare as my DNS provider and I am also using their SSL certificates.
What is the most accurate load test with ab (please note the http/https in the examples below):
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" https://www.example.com/
Requests per second: 12.66
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" http://www.example.com/
Requests per second: 60.90
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" https://private.network.local.ip/
Requests per second: 36.70
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" http://private.network.local.ip/
Requests per second: 1849
How should I use ab with http or https and with domain or local ip?

A well-behaved load test should represent real-life application usage as closely as possible; otherwise it doesn't make sense. So you should use the same settings that real users of your application will use, which means:
domain name instead of IP address
https protocol
Is there any reason to compare the response time of your application against the private IP? You should be comparing the DNS hostname of your application with the IP address of your application; in that case the results should be the same.
ab is not the best tool for simulating real user activity. It basically "hammers" a single URL, which doesn't represent real user behaviour. Real users:
establish an SSL session once; further communication is made over that channel
send HTTP headers which may trigger response compression, reducing response size
have an HTTP cache in their browsers, so embedded resources like images, scripts, styles, fonts, etc. are requested only once
have cookies which represent the user session
Given all of the above, I would recommend switching to a more advanced load testing tool which is capable of acting like a real browser.
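Putting the recommendations together (domain name, https, keep-alive, compression header), a more representative ab run might look like the sketch below; the request count and concurrency are illustrative values, not measured ones:

```shell
# Load test over HTTPS against the public hostname, reusing connections (-k)
# and requesting compressed responses, with 10 concurrent clients
ab -n 1000 -c 10 -k -H "Accept-Encoding: gzip, deflate" https://www.example.com/
```

Note that with Cloudflare in front, this measures Cloudflare's edge as well as your origin, which is exactly what your real users experience.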

Related

Kong LoadBalancer over Kubesphere

I installed Kong (Kong proxy + Kong ingress controller) on a Kubernetes/KubeSphere cluster with an Istio mesh inside, and I added the annotations and ingress types needed. I am able to access only the Kong proxy at the node's exposed IP and port, but I am unable to add rules, access the Admin GUI, or do any kind of configuration. For every request I make to my Kong endpoint, like
curl -i -X GET http://10.233.124.79:8000/rules
or any other kind of request to the proxy, I get the same response:
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/2.2.0
{"message":"no Route matched with those values"}
I am not able to invoke the Admin API; its container is listening only on 127.0.0.1. My environment variables for the kong-proxy pod:
KONG_PROXY_LISTEN=0.0.0.0:8000, 0.0.0.0:8443 ssl http2
KONG_PORT_MAPS=80:8000, 443:8443
KONG_ADMIN_LISTEN=127.0.0.1:8444 ssl
KONG_STATUS_LISTEN=0.0.0.0:8100
KONG_DATABASE=off
KONG_NGINX_WORKER_PROCESSES=2
KONG_ADMIN_ACCESS_LOG=/dev/stdout
KONG_ADMIN_ERROR_LOG=/dev/stderr
KONG_PROXY_ERROR_LOG=/dev/stderr
And the environment variables for the ingress controller:
CONTROLLER_KONG_ADMIN_URL=https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY=true
CONTROLLER_PUBLISH_SERVICE=kong/kong-proxy
So how can I expose the Admin GUI over the mesh on a node port, and how can I invoke the Admin API to add rules, etc.?
Yes, first you should add rules.
You can add routes directly in KubeSphere. See the documentation for more info.
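Since the Admin API is bound to 127.0.0.1 inside the pod, one way to reach it without changing KONG_ADMIN_LISTEN is to port-forward into the pod from your workstation. The namespace and deployment name below are assumptions (adjust them to your install); this is a sketch, not a verified recipe for this cluster:

```shell
# Forward local port 8444 to the Admin API port inside the Kong pod
# (kong namespace and deployment name are assumed here)
kubectl -n kong port-forward deployment/kong 8444:8444

# In another terminal: list configured routes via the Admin API.
# -k skips TLS verification, matching CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY=true.
# An empty list here would explain the "no Route matched" proxy response.
curl -k https://127.0.0.1:8444/routes
```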

curl CONNECT method without path (destination hostname and port instead)

I want to test a proxy server. In order to make an https request, the browser sends a CONNECT method beforehand (e.g. as Firefox does when a proxy is specified).
I cannot achieve/send the same result in curl:
The following has a leading slash, /www.example.com:443:
curl -X CONNECT http://proxy_host:proxy_port/www.example.com:443
The following will not work (without the slash):
curl -X CONNECT http://proxy_host:proxy_portwww.example.com:443
The following is not what I want:
curl -X CONNECT http://proxy_host:proxy_port/some_path
So the first line of the HTTP data should be CONNECT www.example.com:443 HTTP/1.1, not CONNECT /www.example.com:443 HTTP/1.1 as curl sends in this case.
Maybe this question is also related somehow, if I knew how not to send a path.
NOTE! I do not want to use curl -x http://proxy_host:proxy_port https://www.example.com, because the -x option/flag does not work with custom SSL certificates (--cacert ... --key ... --cert ...).
Any ideas how to send plain header data, or not specify a path, or specify the host and port as the path?
(-X simply replaces the method string in the request, so setting it to CONNECT will not issue a proper CONNECT request and will certainly not make curl handle it correctly.)
curl will do a CONNECT by itself when connecting to a TLS server through an HTTP proxy, and even though you claim -x breaks the certificate options, that is incorrect: --cacert and the other options work the same when the connection is made through an HTTP proxy.
You can also make curl do a CONNECT through an HTTP(S) proxy for other protocols by using -p, --proxytunnel, also in combination with -x.
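Putting that together, the command below combines -x with the certificate options; curl will issue the CONNECT to the proxy itself and then do the TLS handshake (using the given CA and client certificate) through the tunnel. The certificate file names are placeholders:

```shell
# curl sends "CONNECT www.example.com:443 HTTP/1.1" to the proxy automatically,
# then performs TLS (with the custom CA and client cert) inside the tunnel.
# -p/--proxytunnel forces tunnelling; for https:// URLs it is the default anyway.
curl -p -x http://proxy_host:proxy_port \
     --cacert ca.pem --cert client.pem --key client.key \
     https://www.example.com/
```

Adding -v will show the CONNECT request and the proxy's response, which is useful when the goal is testing the proxy itself.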

Why varnish returns content-length: 0?

I have the following setup:
url -> load balancer -> nginx[1-2] -> varnish[1-2] -> nginx[1-2] (+app)
where the first nginx uses the second nginx as a backup if varnish fails.
When I perform curl -I http... I get a content-length: 0 response. However, if I stop both Varnishes (6.0.2) I get a real number instead of 0. My VCL does not manipulate Content-Length, and I see no other setting that would suggest it.
Moreover, with Varnish on, if I perform multiple curls (10678 to be exact) I get 14 responses with a content-length different from 0.
The two questions are:
Is content-length: 0 expected from varnish?
Is it possible that varnish fails to set up a connection once in a while and traffic gets routed to nginx directly? No errors in the logs, though.
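One thing worth checking: curl -I sends a HEAD request, which Varnish handles differently from GET (no body is delivered, and with streaming the length may not be known up front). A quick way to separate "Varnish reports 0" from "the body really is empty" is to compare HEAD against a full GET, and to ask Varnish what it actually served. The URL is a placeholder; a diagnostic sketch:

```shell
# HEAD request (what curl -I sends) - inspect the reported length
curl -sI http://www.example.com/ | grep -i content-length

# Full GET - measure how many body bytes were actually downloaded
curl -s -o /dev/null -w '%{size_download}\n' http://www.example.com/

# On the Varnish host: show complete transactions for HEAD requests,
# including the response headers Varnish generated
varnishlog -g request -q 'ReqMethod eq "HEAD"'
```

If the GET downloads a real body while HEAD reports content-length: 0, the issue is in how the HEAD/streaming path is handled rather than in your VCL.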

Tensorflow Serving: Need to make HTTP request to TF Server which accepts only gRPC requests

My client is on a server from which I can only make HTTP requests. TensorFlow Serving is hosted on an AWS machine which accepts only gRPC requests. Looking for some leads on how to make this communication happen.
EDIT: 12th June 2018
TF officially released a REST API for serving:
https://www.tensorflow.org/serving/api_rest
They use this particular example: half_plus_three
Server:
$ tensorflow_model_server --rest_api_port=8501 \
--model_name=half_plus_three \
--model_base_path=$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_three/
Client:
$ curl -d '{"instances": [1.0,2.0,5.0]}' -X POST http://localhost:8501/v1/models/half_plus_three:predict
{
"predictions": [3.5, 4.0, 5.5]
}
Which language/platform is your server running?
On the other hand, TF is adding REST support now.
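Assuming the same half_plus_three server from the example above, the REST API also exposes a model status endpoint, which is a convenient way to confirm the server is reachable over plain HTTP before sending predict requests (same port and model name as in the example; a sketch):

```shell
# Query the status of the served model over plain HTTP.
# A healthy server returns the model version with state "AVAILABLE".
curl http://localhost:8501/v1/models/half_plus_three
```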

tcpdump not showing HTTP requests

I'm trying to use tcpdump to identify which IP address a particular person is coming from, but I'm not seeing the HTTP commands that various web sites show. I've used the following to set up tcpdump:
nohup tcpdump -i eth0 -P in -nn -n -tttt -w /home/tcpdump/port80.log -C 100 -W 50 "port 80" > /home/tcpdump/nohup.log 2>&1 &
And I'm periodically checking the file using
tcpdump -r port80.log00 -n -nn -A
I'm connecting to the following URL from a web browser:
http://10.10.0.50?test
And I was expecting to see a bunch of HTTP GET commands, but tcpdump doesn't seem to be showing me the incoming messages. Instead I just get something like
16:21:35.708250 IP 10.10.0.222.55924 > 10.10.0.50.80: Flags [S], seq 1869638484, win 8192, options [mss 1340,nop,nop,sackOK], length 0
E..0#H#....\
..
.2.t.PopkT....p. ........<....
Looking at other info on using tcpdump for HTTP logging, I should be seeing the GET command after the first few bytes of garbage. There's no actual web server running; I'm only interested in seeing the incoming request as a test, hence the ?test on the end to help me search the logs for the right thing. I don't see that that's the issue, though.
Any help very gratefully received.
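For reference, a common tcpdump filter prints only packets whose TCP payload begins with "GET " (0x47455420 in ASCII), using the TCP header length to locate the payload. Note it can only match once a connection is established and the client actually transmits the request; a SYN like the one captured above carries no HTTP data. A sketch:

```shell
# Capture full packets (-s 0), print payloads as ASCII (-A), and match only
# segments whose first four payload bytes are "GET " (0x47455420).
# ((tcp[12:1] & 0xf0) >> 2) computes the TCP header length in bytes.
tcpdump -i eth0 -s 0 -A 'tcp port 80 and tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420'
```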