nginx blocking request till current request finishes

Boiling my question down to the simplest possible form: I have a simple Flask web server with a GET handler like this:
from flask import Flask
import os
import time

app = Flask(__name__)

@app.route('/', methods=['GET'])
def get_handler():
    t = os.environ.get("SLOW_APP")
    app_type = "Fast"
    if t == "1":
        app_type = "Slow"
        time.sleep(20)
    return "Hello from Flask, app type = %s" % app_type
I am running this app on two different ports: one without the SLOW_APP environment variable set on port 8000 and the other with the SLOW_APP environment variable set on port 8001.
Next I have an nginx reverse proxy that has these two appserver instances in its upstream. I am running everything using docker so my nginx conf looks like this:
upstream myproject {
    server host.docker.internal:8000;
    server host.docker.internal:8001;
}

server {
    listen 8888;
    #server_name www.domain.com;

    location / {
        proxy_pass http://myproject;
    }
}
It works, except that if I open two browser windows and request localhost, the first request hits the slow server, where it takes 20 seconds, and during this time the second browser window appears to block, waiting for the first request to finish. Eventually I see that the first request was serviced by the "slow" server and the second by the "fast" one (no time.sleep()). Why does nginx appear to block the second request until the first one finishes?

No; if the first request goes to the slow server (where it takes 20 seconds) and I make another request from the browser during that delay, it goes to the second server, but only after the first one has finished.
I have worked with our Engineering Team on this and can share the following insights:
Our Lab environment
Lua
load_module modules/ngx_http_lua_module-debug.so;
...
upstream app {
    server 127.0.0.1:1234;
    server 127.0.0.1:2345;
}

server {
    listen 1234;
    location / {
        content_by_lua_block {
            ngx.log(ngx.WARN, "accepted by fast")
            ngx.say("accepted by fast")
        }
    }
}

server {
    listen 2345;
    location / {
        content_by_lua_block {
            ngx.log(ngx.WARN, "accepted by slow")
            ngx.say("accepted by slow")
            ngx.sleep(5);
        }
    }
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
This is the same setup we would use with any other 3rd-party application we proxy traffic to. But I have also tested the same behaviour with the NGINX configuration shared in your question and two NodeJS-based applications as upstreams.
NodeJS
Normal
const express = require('express');
const app = express();
const port = 3001;

app.get('/', (req, res) => {
    res.send('Hello World');
});

app.listen(port, () => {
    console.log(`Example app listening on ${port}`);
});
Slow
const express = require('express');
const app = express();
const port = 3002;

app.get('/', (req, res) => {
    setTimeout(() => {
        res.send('Hello World');
    }, 5000);
});

app.listen(port, () => {
    console.log(`Example app listening on ${port}`);
});
The Test
As we are using NGINX OSS, the load-balancing method is round-robin (RR). Our first test was run from another server using ab (ApacheBench).
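The exact invocation was not part of the original answer; one matching the concurrency and request counts below would be roughly (the proxy address is a placeholder for the lab host):
ab -n 100 -c 10 http://<nginx-host>/
The result: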
Concurrency Level: 10
Time taken for tests: 25.056 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 17400 bytes
HTML transferred: 1700 bytes
Requests per second: 3.99 [#/sec] (mean)
Time per request: 2505.585 [ms] (mean)
Time per request: 250.559 [ms] (mean, across all concurrent requests)
Transfer rate: 0.68 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.7 0 5
Processing: 0 2505 2514.3 5001 5012
Waiting: 0 2504 2514.3 5001 5012
Total: 1 2505 2514.3 5001 5012
Percentage of the requests served within a certain time (ms)
50% 5001
66% 5005
75% 5007
80% 5007
90% 5010
95% 5011
98% 5012
99% 5012
100% 5012 (longest request)
50% of all requests are slow. That's expected, because one of the two instances is "slow". The same test with curl gave the same result. Based on the debug log of the NGINX server, we saw that the requests were processed as they came in and were sent to either the slow or the fast backend (based on round-robin).
2021/04/08 15:26:18 [debug] 8995#8995: *1 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *1 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *4 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *4 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *5 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *5 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *7 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *7 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *10 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *10 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *13 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *13 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *16 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *16 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *19 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *19 get rr peer, current: 000055B815BD4540 0
So, given that, the behaviour of "nginx blocking a request till the current request finishes" is not reproducible on the nginx instance itself. But I was able to reproduce your issue in the Chrome browser: hitting the slow instance leaves the other browser window waiting until the first one gets its response. After some memory analysis and debugging on the client side, I came across the browser's connection pool.
https://www.chromium.org/developers/design-documents/network-stack
The browser reuses the same, already-established connection to the server. If that connection is occupied by the waiting request (same data, same cookies, ...), it will not open a new connection from the pool; it waits for the first request to finish. You can work around this by adding a cache-buster, a new header, or a new cookie to the request, something like:
http://10.172.1.120:8080/?ofdfu9aisdhffadf. Send this in a new browser window while you are still waiting for the response in the other one. You will get an immediate response (assuming no other request hit the backend in between: with round-robin, if the previous request went to the slow instance, the next one goes to the fast one).
The same applies if you send the requests from different clients; that works as well.
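If you want to confirm this from the client side without a browser, two parallel requests from a small script behave the way the ab test does: nginx hands one request to each upstream immediately, and only the one that landed on the slow backend waits out the sleep. A minimal sketch (not from the original answer; the URL assumes the nginx listener from the question):

# Fire two requests at the proxy on two separate connections and time them.
import threading
import time
import urllib.request

URL = "http://localhost:8888/"  # adjust to your own nginx listener

def fetch(label):
    start = time.time()
    with urllib.request.urlopen(URL) as resp:  # each call opens its own connection
        body = resp.read().decode()
    print(f"{label}: {time.time() - start:5.1f}s -> {body.strip()}")

threads = [threading.Thread(target=fetch, args=(f"request {i}",)) for i in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()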

Related

Nginx: log the actual forwarded proxy_pass request URI to upstream

I've got the following nginx conf:
http {
    log_format upstream_logging '[proxied request] '
                                '$server_name$request_uri -> $upstream_addr';
    access_log /dev/stdout upstream_logging;

    server {
        listen 80;
        server_name localhost;

        location ~ /test/(.*)/foo {
            proxy_pass http://127.0.0.1:3000/$1;
        }
    }
}
When I hit:
http://localhost/test/bar/foo
My actual output is:
[proxied request] localhost/test/bar/foo -> 127.0.0.1:3000
While my expected output is:
[proxied request] localhost/test/bar/foo -> 127.0.0.1:3000/bar
Is there a variable or a way to produce the actual proxied URI in the log?
If this is not production, you can see what nginx actually sends by launching the simplest possible listening server on the desired local address and port (instead of the real backend):
$ nc -l 127.0.0.1 3000
POST /some/uri HTTP/1.0
Host: 127.0.0.1
Connection: close
Content-Length: 14
some payload
A response can be simulated by manually typing HTTP/1.1 200 OK, followed by two new lines, while nc is running.
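If nc is not available, a tiny script that accepts one connection, prints whatever nginx sends, and answers with an empty 200 does the same job. This is a minimal sketch mirroring the nc example above (address and port taken from it), not part of the original answer:

# Stand-in for `nc -l 127.0.0.1 3000`: dump the proxied request, reply with an empty 200.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 3000))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        print(conn.recv(65536).decode(errors="replace"))  # request line, headers and body as sent by nginx
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")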

Firebase-Hosted Cloud Function retrying on any request that takes 60s, even when timeout is >60s

I have a vanilla cloud function that takes 60 seconds and then returns status 200 with a simple JSON object. The timeout for the function is set to 150s. When testing locally, and when running the function via its cloudfunctions.net address, the function completes at 60s and the 200 response and body are correctly delivered to the client. So far so good.
Here's the kicker: if I run the exact same function proxied through Firebase Hosting (set up via a "target" inside firebase.json), then according to the Stackdriver logs the function is instantly restarted anywhere from 1-3 times, and when those finish the function is sometimes restarted AGAIN, eventually returning a 503 timeout from Varnish.
This behavior is ONLY consistently replicable when the function is called on a domain that is proxied through firebase hosting. It seems to ONLY happen when the function takes ~60s or longer. It does not depend on the returned response code or response body.
You can see this behavior in a test function I have setup here: https://trellisconnect.com/testtimeout?sleepAmount=60&retCode=200
This behavior was originally identified in a function deployed via serverless. To rule out serverless, I created a test function that makes testing and verifying the behavior easy, deployed it with regular firebase functions, called it from its cloudfunctions.net domain, and verified that I always got a correct response at 60s. I then updated my firebase.json to add a new route that points to this function and was able to replicate the problem.
index.js
const functions = require('firebase-functions');

function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

exports.testtimeout = functions.https.onRequest((req, res) => {
    const { sleepAmount, retCode } = req.query;
    console.log(`starting test sleeping ${sleepAmount}...`);
    sleep(1000 * sleepAmount).then(result => {
        console.log(`Ending test func, returning ${retCode}`);
        return res.status(retCode).json({ message: 'Random Response' });
    });
});
firebase.json
{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      {
        "source": "/testtimeout",
        "function": "testtimeout"
      }
    ]
  },
  "functions": {}
}
A correct/expected response (sleepAmount=2 seconds)
zgoldberg@zgblade:~$ time curl "https://trellisconnect.com/testtimeout?sleepAmount=2&retCode=200"
{"message":"Random Response"}
real 0m2.269s
user 0m0.024s
sys 0m0.000s
And a sample of how things appear when sleepAmount is set to 60 seconds
zgoldberg@zgblade:~$ curl -v "https://trellisconnect.com/testtimeout?sleepAmount=60&retCode=200"
* Trying 151.101.65.195...
* TCP_NODELAY set
* Connected to trellisconnect.com (151.101.65.195) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=admin.cliquefood.com.br
* start date: Oct 16 20:44:55 2019 GMT
* expire date: Jan 14 20:44:55 2020 GMT
* subjectAltName: host "trellisconnect.com" matched cert's "trellisconnect.com"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x563f92bdc580)
> GET /testtimeout?sleepAmount=60&retCode=200 HTTP/2
> Host: trellisconnect.com
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503
< server: Varnish
< retry-after: 0
< content-type: text/html; charset=utf-8
< accept-ranges: bytes
< date: Fri, 08 Nov 2019 03:12:08 GMT
< x-served-by: cache-bur17523-BUR
< x-cache: MISS
< x-cache-hits: 0
< x-timer: S1573182544.115433,VS0,VE184552
< content-length: 449
<
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>503 first byte timeout</title>
</head>
<body>
<h1>Error 503 first byte timeout</h1>
<p>first byte timeout</p>
<h3>Guru Mediation:</h3>
<p>Details: cache-bur17523-BUR 1573182729 2301023220</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
* Connection #0 to host trellisconnect.com left intact
real 3m3.763s
user 0m0.024s
sys 0m0.031s
Here's the crazy part: check out the Stackdriver logs and notice how the function completes in 60s, and almost immediately afterwards 3 more executions are started...
Notice the original call comes in at 19:09:04.235 and ends at 19:10:04.428, almost exactly 60s later. Almost exactly 500ms later, at 19:10:05.925, the function is restarted. I promise you I did not hit my curl command again 0.5s after the initial response. None of the subsequent executions of the function here were generated by me; they all seem to be phantom retries?
https://i.imgur.com/WDY17pw.png
(edit: I don't have 10 reputation to post the actual image, so just a link above)
Any thoughts or help is much appreciated
From Firebase Hosting: Serving Dynamic Content with Cloud Functions for Firebase:
Note: Firebase Hosting is subject to a 60-second request timeout. Even if you configure your HTTP function with a longer request timeout, you'll still receive an HTTP status code 504 (request timeout) if your function requires more than 60 seconds to run. To support dynamic content that requires longer compute time, consider using an App Engine flexible environment.
In short, your use case is unfortunately not supported, as the CDN/Hosting layer simply assumes the connection was lost and tries again.

How to Remove Data from WC API Endpoint Request?

I'm shipping order data off to a 3rd-party piece of fulfillment software. It integrates by default with the WooCommerce REST API. However, some recent changes to my site and order data have added additional order meta, and now, when grabbing the same number of orders as it always has, the request times out with a 504. The response has become unreasonably large, so to fix this I've decided to optimize by removing irrelevant and unnecessary data from the response. I also have to be able to process 100 orders at a time; I cannot reduce the filter limit, since it is set automatically by the 3rd-party application.
Endpoint in Question
wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100
This endpoint grabs the first 100 orders with status processing and returns them as JSON.
Things to Remove
customer_user_agent
avatar_url
cogs_cost
cogs_total_cost
Example Response
{
"orders":[
{
"id":137314,
"order_number":"137314",
"created_at":"2019-09-18T18:37:06Z",
"updated_at":"2019-09-18T18:37:07Z",
"completed_at":"1970-01-01T00:00:00Z",
"status":"processing",
"currency":"USD",
"total":"49.50",
"subtotal":"55.00",
"total_line_items_quantity":1,
"total_tax":"0.00",
"total_shipping":"0.00",
"cart_tax":"0.00",
"shipping_tax":"0.00",
"total_discount":"0.00",
"shipping_methods":"Free shipping",
"payment_details":{
"method_id":"nmipay",
"method_title":"Pay with Credit Card",
"paid":true
},
"billing_address":{
"first_name":"XXX",
"last_name":"XXXX",
"company":"",
"address_1":"XXXX",
"address_2":"",
"city":"XXXX",
"state":"XX",
"postcode":"XXXXX",
"country":"US",
"email":"XXXXXX",
"phone":"XXXX"
},
"shipping_address":{
"first_name":"XXX",
"last_name":"XX",
"company":"",
"address_1":"XXXXX",
"address_2":"",
"city":"XXX",
"state":"XXX",
"postcode":"XXX",
"country":"XXXX"
},
"note":"",
"customer_ip":"98.216.25.236",
"customer_user_agent":"mozilla\/5.0 (iphone; cpu iphone os 12_4_1 like mac os x) applewebkit\/605.1.15 (khtml, like gecko) version\/12.1.2 mobile\/15e148 safari\/604.1",
"customer_id":127116,
"view_order_url":"XXXXX",
"line_items":[
{
"id":198261,
"subtotal":"55.00",
"subtotal_tax":"0.00",
"total":"55.00",
"total_tax":"0.00",
"price":"55.00",
"quantity":1,
"tax_class":"",
"name":"Core Hoodie - Black, Large",
"product_id":351,
"sku":"ss-hoodie-core-zip-blk-lg",
"meta":[
],
"bundled_by":"",
"bundled_item_title":"",
"bundled_items":[
],
"cogs_cost":"23.15",
"cogs_total_cost":"23.15"
}
],
"shipping_lines":[
{
"id":198263,
"method_id":"free_shipping",
"method_title":"Free shipping",
"total":"0.00"
}
],
"tax_lines":[
],
"fee_lines":[
{
"id":198262,
"title":"VIP Discount",
"tax_class":"0",
"total":"-5.50",
"total_tax":"0.00"
}
],
"coupon_lines":[
],
"cogs_total_cost":"23.15"
}
]
}
This is the furthest I've gotten
I found the following hooks but cannot get anything to trigger.
woocommerce_rest_prepare_shop_order_object
woocommerce_rest_prepare_shop_order
function remove_user_agent_from_rest_api( $response, $object, $request ) {
    unset( $response->data['customer_user_agent'] );
    return $response;
}

function test_rest_api() {
    add_filter( "woocommerce_rest_pre_insert_shop_order", "remove_user_agent_from_rest_api", 10, 2 );
    add_filter( "woocommerce_rest_pre_insert_shop_order_object", "remove_user_agent_from_rest_api", 10, 2 );
}
add_action( 'rest_api_init', 'test_rest_api', 0 );
Is this a performance tuning issue?
Here is a sample trace from New Relic and a sample from my NGINX error log. What could I tune to keep the server open long enough to process this request?
2019/10/02 10:59:25 [error] 10270#10270: *5 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: XXX, server: X.net, request: "GET /?km_source=blog HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "X.net", referrer: "https://www.X.net/"
2019/10/02 11:00:42 [error] 10270#10270: *34 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: XXX.net, request: "GET /wc-api/v2/orders?status=processing&page=10&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "X.net"
2019/10/02 11:07:53 [error] 13021#13021: *62 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: XXX.net, request: "GET /wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "X.net"
2019/10/02 11:13:45 [error] 15270#15270: *66 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: XXX.net, request: "GET /wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "XXX.net"
2019/10/02 11:15:44 [error] 16010#16010: *79 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: X.net, request: "GET /wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "X.net"
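As for the tuning part of the question: the "upstream timed out (110: Connection timed out) while reading response header from upstream" lines mean nginx gave up waiting for PHP-FPM, which by default happens after 60 seconds. A hedged sketch of the knobs usually raised in this situation, assuming the php7.0-fpm socket from the log (the 300s value is only illustrative, not a recommendation):

# nginx: give the FastCGI upstream more time to produce the response header
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    fastcgi_read_timeout 300s;    # default is 60s
}
# PHP usually needs matching limits, e.g. max_execution_time in php.ini
# and request_terminate_timeout in the PHP-FPM pool config.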
The first issue I notice is that your filters are only passing 2 variables when they should be passing 3.
add_filter( "woocommerce_rest_pre_insert_shop_order", "remove_user_agent_from_rest_api", 10, 3 );
Should do it.

Unable to verify https endpoint with pact-jvm-provider-maven_2.11 in pact broker

This is my pom snippet for service providers
<serviceProviders>
    <serviceProvider>
        <name>StoreSite</name>
        <protocol>https</protocol>
        <host>https://somesiteurl.com</host>
        <path></path>
        <consumers>
            <consumer>
                <name>FrontSite</name>
                <pactUrl>http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest</pactUrl>
            </consumer>
        </consumers>
    </serviceProvider>
</serviceProviders>
and after the pact:verify operation I get the build error with the stack trace below.
I can see the pact file generated in the localhost broker, but verification keeps failing when the endpoint is changed to https.
[DEBUG] (s) name = StoreSite
[DEBUG] (s) protocol = https
[DEBUG] (s) host = https://somesiteurl.com
[DEBUG] (s) name = FrontSite
[DEBUG] (s) pactUrl = http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest
[DEBUG] (s) consumers = [au.com.dius.pact.provider.maven.Consumer()]
[DEBUG] (f) serviceProviders = [au.com.dius.pact.provider.maven.Provider(null, null, null, null)]
[DEBUG] -- end configuration --
Verifying a pact between FrontSite and StoreSite
[from URL http://[::1]:8080/pacts/provider/StoreSite/consumer/FrontSite/latest]
Valid sign up request
[DEBUG] Verifying via request/response
[DEBUG] Making request for provider au.com.dius.pact.provider.maven.Provider(null, null, null, null):
[DEBUG] method: POST
path: /api/v1/customers
headers: [Content-Type:application/json, User-Agent:Mozilla/5.0
matchers: [:]
body: au.com.dius.pact.model.OptionalBody(PRESENT, {"dob":"1969-12-17","pwd":"255577_G04QU","userId":"965839_R9G3O"})
Request Failed - https
Failures:
0) Verifying a pact between FrontSite and StoreSite - Valid sign up request
https
I tried to verify against a service called BusService that runs on https and got it to work like this. My example is not set up the same way as yours, but I believe the important differences are the addition of the tag <insecure>true</insecure> and that I only used the server name in the host tag: <host>localhost</host>.
<serviceProvider>
    <name>BusService</name>
    <protocol>https</protocol>
    <insecure>true</insecure>
    <host>localhost</host>
    <port>8443</port>
    <path>/</path>
    <pactBrokerUrl>http://localhost:8113/</pactBrokerUrl>
</serviceProvider>

Does Nginx respect the weight attribute with consistent hashing?

More specifically, will this work?
upstream backend {
    hash $request_uri consistent;
    server backend1.example.com weight=1;
    server backend2.example.com weight=2;
}
Will backend2.example.com receive twice as much traffic?
Also, what happens if a weight is changed or another server is added to the mix? Will the "only few keys will be remapped" property still hold?
The optional consistent parameter of the hash directive enables ketama consistent hash load balancing. Requests will be evenly distributed across all upstream servers based on the user-defined hashed key value. If an upstream server is added to or removed from an upstream group, only few keys will be remapped which will minimize cache misses in case of load balancing cache servers and other applications that accumulate state.
from https://www.nginx.com/resources/admin-guide/load-balancer/
In this configuration the consistent hash takes precedence: if an upstream block defines both weights and a consistent hash, requests are mapped to servers by the consistent hash.
The hash ring is, however, distributed across the servers according to their weights.
upstream consistent_test {
    server consistent_test.example.ru:80 weight=90;
    server consistent_test2.example.ru:80 weight=10;
    hash $arg_consistent consistent;
}
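To make the interaction between weight and consistent hashing concrete, here is a small, self-contained sketch of a ketama-style ring (an illustration of the idea only, not nginx's actual implementation): each server gets a number of points on the ring proportional to its weight, a key maps to the next point clockwise, and adding or removing a server only remaps the keys whose points change owner.

# Toy weighted consistent-hash ring; names, weights and point count are illustrative.
import bisect
import hashlib

def h(s: str) -> int:
    # 32-bit hash derived from md5, just for the demo
    return int(hashlib.md5(s.encode()).hexdigest()[:8], 16)

class Ring:
    def __init__(self, servers):
        # servers: {"name": weight}; each server gets points proportional to its weight
        self.points = sorted(
            (h(f"{name}-{i}"), name)
            for name, weight in servers.items()
            for i in range(weight * 160)
        )
        self.keys = [p[0] for p in self.points]

    def lookup(self, key: str) -> str:
        # map the key to the next point clockwise on the ring
        i = bisect.bisect(self.keys, h(key)) % len(self.keys)
        return self.points[i][1]

ring = Ring({"backend1": 1, "backend2": 2})
hits = {"backend1": 0, "backend2": 0}
for n in range(10000):
    hits[ring.lookup(f"/page/{n}")] += 1
print(hits)  # roughly a 1:2 split between backend1 and backend2

# Adding a server (or changing a weight) only remaps the keys whose ring points change owner:
ring2 = Ring({"backend1": 1, "backend2": 2, "backend3": 1})
moved = sum(ring.lookup(f"/page/{n}") != ring2.lookup(f"/page/{n}") for n in range(10000))
print(moved)  # only a minority of the 10000 keys move, rather than a full reshuffle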
Experiment
1) Default state
upstream balancer_test {
    hash $arg_someid consistent;
    server server1.example.ru:8080;
    server server2.example.ru:8080;
    server server3.example.ru:8080 down;
}
Request hashes pinned to hosts:
server1.example.ru ==> 535
server2.example.ru ==> 462
server3.example.ru ==> 0
2) First step: enable the node and set the weight
upstream balancer_test {
    hash $api_sessionid consistent;
    server server1.example.ru:8080 weight=250;
    server server2.example.ru:8080 weight=500;
    server server3.example.ru:8080 weight=250;
}
Request hashes pinned to hosts:
server1.example.ru:8080 ==> 263
server2.example.ru:8080 ==> 473
server3.example.ru:8080 ==> 254
3) Second step: finish moving the traffic over and disable the old node
upstream balancer_test {
    hash $api_sessionid consistent;
    server server1.example.ru:8080 down;
    server server2.example.ru:8080;
    server server3.example.ru:8080;
}
Request hashes pinned to hosts:
server1.example.ru:8080 ==> 0
server2.example.ru:8080 ==> 533
server3.example.ru:8080 ==> 464
server1.example.ru:
1) before = 463
2) on step_2 = 533
3) hash hits = 306
server2.example.ru:
1) before = 536
2) on step_1 = 263
3) hash hits = 148
server3.example.ru:
1) before = 255
2) on step 1 = 464
3) hash hits = 115
