I've got the following nginx conf:
http {
    log_format upstream_logging '[proxied request] '
                                '$server_name$request_uri -> $upstream_addr';
    access_log /dev/stdout upstream_logging;

    server {
        listen 80;
        server_name localhost;

        location ~ /test/(.*)/foo {
            proxy_pass http://127.0.0.1:3000/$1;
        }
    }
}
When I hit:
http://localhost/test/bar/foo
My actual output is:
[proxied request] localhost/test/bar/foo -> 127.0.0.1:3000
While my expected output is:
[proxied request] localhost/test/bar/foo -> 127.0.0.1:3000/bar
Is there a variable or a way to produce the actual proxied URI in the log?
If this isn't production, you can check exactly what nginx sends upstream by launching the simplest possible listening server on the desired local address and port (in place of the real one):
$ nc -l 127.0.0.1 3000
POST /some/uri HTTP/1.0
Host: 127.0.0.1
Connection: close
Content-Length: 14
some payload
The response can be simulated by manually typing HTTP/1.1 200 OK, followed by two new lines, while nc is running.
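If you want the rewritten URI to show up in the access log itself, a minimal sketch (assuming the regex capture is the only thing that changes the upstream URI; $proxied_uri is just a variable name I introduced, not a built-in) is to capture it with set and reference it from log_format:

http {
    log_format upstream_logging '[proxied request] '
                                '$server_name$request_uri -> $upstream_addr$proxied_uri';
    access_log /dev/stdout upstream_logging;

    server {
        listen 80;
        server_name localhost;

        location ~ /test/(.*)/foo {
            # keep the rewritten path in a variable so log_format can see it
            set $proxied_uri /$1;
            proxy_pass http://127.0.0.1:3000$proxied_uri;
        }
    }
}

Requests that never reach that location will log an empty value (nginx also warns about the uninitialized variable in the error log). Using a variable in proxy_pass is fine here because the upstream is an IP literal; with a hostname you would also need a resolver directive.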
I have a vanilla cloud function that takes 60 seconds and then returns status 200 with a simple JSON object. The timeout for the function is set to 150s. When testing locally, and when running the function via its cloudfunctions.net address, the function completes at 60s and the 200 response and body are correctly delivered to the client. So far so good.
Here's the kicker: if I run the exact same function proxied through Firebase Hosting (set up via a "target" inside firebase.json), then according to the Stackdriver logs the function is instantly restarted anywhere from 1 to 3 times, and when those runs finish the function is sometimes restarted AGAIN, eventually returning a 503 timeout from Varnish.
This behavior is ONLY consistently reproducible when the function is called on a domain that is proxied through Firebase Hosting, and it seems to happen ONLY when the function takes ~60s or longer. It does not depend on the returned response code or response body.
You can see this behavior in a test function I have set up here: https://trellisconnect.com/testtimeout?sleepAmount=60&retCode=200
This behavior was originally identified in a function deployed via Serverless. To rule out Serverless, I created a test function that makes the behavior easy to verify, deployed it with regular Firebase functions, called it on its cloudfunctions.net domain, and confirmed that I always get a correct response at 60s. I then updated my firebase.json to add a new route pointing to this function and was able to replicate the problem.
index.js
// firebase-functions provides the HTTPS trigger used below
const functions = require('firebase-functions');

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

exports.testtimeout = functions.https.onRequest((req, res) => {
  const { sleepAmount, retCode } = req.query;
  console.log(`starting test sleeping ${sleepAmount}...`);
  // Wait the requested number of seconds, then send back the requested status code.
  sleep(1000 * sleepAmount).then(() => {
    console.log(`Ending test func, returning ${retCode}`);
    return res.status(Number(retCode)).json({ message: 'Random Response' });
  });
});
firebase.json
{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      {
        "source": "/testtimeout",
        "function": "testtimeout"
      }
    ]
  },
  "functions": {}
}
A correct/expected response (sleepAmount=2 seconds)
zgoldberg@zgblade:~$ time curl "https://trellisconnect.com/testtimeout?sleepAmount=2&retCode=200"
{"message":"Random Response"}
real 0m2.269s
user 0m0.024s
sys 0m0.000s
And a sample of how things appear when sleepAmount is set to 60 seconds
zgoldberg@zgblade:~$ time curl -v "https://trellisconnect.com/testtimeout?sleepAmount=60&retCode=200"
* Trying 151.101.65.195...
* TCP_NODELAY set
* Connected to trellisconnect.com (151.101.65.195) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=admin.cliquefood.com.br
* start date: Oct 16 20:44:55 2019 GMT
* expire date: Jan 14 20:44:55 2020 GMT
* subjectAltName: host "trellisconnect.com" matched cert's "trellisconnect.com"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x563f92bdc580)
> GET /testtimeout?sleepAmount=60&retCode=200 HTTP/2
> Host: trellisconnect.com
> User-Agent: curl/7.58.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503
< server: Varnish
< retry-after: 0
< content-type: text/html; charset=utf-8
< accept-ranges: bytes
< date: Fri, 08 Nov 2019 03:12:08 GMT
< x-served-by: cache-bur17523-BUR
< x-cache: MISS
< x-cache-hits: 0
< x-timer: S1573182544.115433,VS0,VE184552
< content-length: 449
<
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>503 first byte timeout</title>
</head>
<body>
<h1>Error 503 first byte timeout</h1>
<p>first byte timeout</p>
<h3>Guru Mediation:</h3>
<p>Details: cache-bur17523-BUR 1573182729 2301023220</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
* Connection #0 to host trellisconnect.com left intact
real 3m3.763s
user 0m0.024s
sys 0m0.031s
Here's the crazy part: check out the Stackdriver logs and notice how the function completes in 60s, and almost immediately afterwards 3 more executions are started...
Notice the original call comes in at 19:09:04.235 and ends at 19:10:04.428, almost exactly 60s later. Roughly 1.5 seconds later, at 19:10:05.925, the function is restarted. I promise I did not run my curl command again right after the initial response. None of the subsequent executions of the function here were generated by me; they all appear to be phantom retries.
https://i.imgur.com/WDY17pw.png
(edit: I don't have 10 reputation to post the actual image, so just a link above)
Any thoughts or help would be much appreciated.
From Firebase Hosting: Serving Dynamic Content with Cloud Functions for Firebase:
Note: Firebase Hosting is subject to a 60-second request timeout. Even if you configure your HTTP function with a longer request timeout, you'll still receive an HTTP status code 504 (request timeout) if your function requires more than 60 seconds to run. To support dynamic content that requires longer compute time, consider using an App Engine flexible environment.
In short, your use case unfortunately isn't supported: the CDN/Hosting layer assumes the connection was lost and retries the request.
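If the endpoint has to stay behind Firebase Hosting, one pattern worth considering (a sketch only, not part of the original answer; testtimeoutAsync and the queue hand-off are hypothetical) is to acknowledge the request inside the 60-second window and do the slow work somewhere that isn't tied to the HTTP response, for example a Pub/Sub- or Cloud Tasks-triggered function:

const functions = require('firebase-functions');

// Sketch: reply immediately so Hosting never waits longer than 60 seconds.
// Work performed after the response is sent is not guaranteed to finish on
// Cloud Functions, so a real implementation would enqueue the job (Pub/Sub,
// Cloud Tasks, ...) here and let a background function pick it up.
exports.testtimeoutAsync = functions.https.onRequest((req, res) => {
  const { sleepAmount, retCode } = req.query;
  // TODO: publish { sleepAmount, retCode } to a queue for a background worker.
  res.status(202).json({ message: 'Accepted, processing in background' });
});

The client then polls for the result or receives a callback, which trades some simplicity for staying under the Hosting timeout.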
I'm shipping order data off to a 3rd-party piece of fulfillment software that integrates by default with the WooCommerce REST API. However, some recent changes to my site and order data have added additional order meta, and now, when grabbing the same number of orders as it always has, the request times out with a 504. The request has become unreasonably large, so to fix this I've decided to optimize by stripping the irrelevant and unnecessary data from the response. I also have to be able to process 100 orders at a time; I cannot reduce the filter limit, as it's set automatically by the 3rd-party application.
Endpoint in Question
wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100
This endpoint grabs the first 100 orders in processing status and returns them as JSON.
Things to Remove
customer_user_agent
avatar_url
cogs_cost
cogs_total_cost
Example Response
{
  "orders": [
    {
      "id": 137314,
      "order_number": "137314",
      "created_at": "2019-09-18T18:37:06Z",
      "updated_at": "2019-09-18T18:37:07Z",
      "completed_at": "1970-01-01T00:00:00Z",
      "status": "processing",
      "currency": "USD",
      "total": "49.50",
      "subtotal": "55.00",
      "total_line_items_quantity": 1,
      "total_tax": "0.00",
      "total_shipping": "0.00",
      "cart_tax": "0.00",
      "shipping_tax": "0.00",
      "total_discount": "0.00",
      "shipping_methods": "Free shipping",
      "payment_details": {
        "method_id": "nmipay",
        "method_title": "Pay with Credit Card",
        "paid": true
      },
      "billing_address": {
        "first_name": "XXX",
        "last_name": "XXXX",
        "company": "",
        "address_1": "XXXX",
        "address_2": "",
        "city": "XXXX",
        "state": "XX",
        "postcode": "XXXXX",
        "country": "US",
        "email": "XXXXXX",
        "phone": "XXXX"
      },
      "shipping_address": {
        "first_name": "XXX",
        "last_name": "XX",
        "company": "",
        "address_1": "XXXXX",
        "address_2": "",
        "city": "XXX",
        "state": "XXX",
        "postcode": "XXX",
        "country": "XXXX"
      },
      "note": "",
      "customer_ip": "98.216.25.236",
      "customer_user_agent": "mozilla\/5.0 (iphone; cpu iphone os 12_4_1 like mac os x) applewebkit\/605.1.15 (khtml, like gecko) version\/12.1.2 mobile\/15e148 safari\/604.1",
      "customer_id": 127116,
      "view_order_url": "XXXXX",
      "line_items": [
        {
          "id": 198261,
          "subtotal": "55.00",
          "subtotal_tax": "0.00",
          "total": "55.00",
          "total_tax": "0.00",
          "price": "55.00",
          "quantity": 1,
          "tax_class": "",
          "name": "Core Hoodie - Black, Large",
          "product_id": 351,
          "sku": "ss-hoodie-core-zip-blk-lg",
          "meta": [],
          "bundled_by": "",
          "bundled_item_title": "",
          "bundled_items": [],
          "cogs_cost": "23.15",
          "cogs_total_cost": "23.15"
        }
      ],
      "shipping_lines": [
        {
          "id": 198263,
          "method_id": "free_shipping",
          "method_title": "Free shipping",
          "total": "0.00"
        }
      ],
      "tax_lines": [],
      "fee_lines": [
        {
          "id": 198262,
          "title": "VIP Discount",
          "tax_class": "0",
          "total": "-5.50",
          "total_tax": "0.00"
        }
      ],
      "coupon_lines": [],
      "cogs_total_cost": "23.15"
    }
  ]
}
This is the furthest I've gotten.
I found the following hooks but cannot get anything to trigger.
woocommerce_rest_prepare_shop_order_object
woocommerce_rest_prepare_shop_order
function remove_user_agent_from_rest_api( $response, $object, $request ) {
    unset( $response->data['customer_user_agent'] );
    return $response;
}

function test_rest_api() {
    add_filter( "woocommerce_rest_pre_insert_shop_order", "remove_user_agent_from_rest_api", 10, 2 );
    add_filter( "woocommerce_rest_pre_insert_shop_order_object", "remove_user_agent_from_rest_api", 10, 2 );
}
add_action( 'rest_api_init', 'test_rest_api', 0 );
Is this a performance tuning issue?
Here is a sample trace from New Relic and a sample from my NGINX error log. What could I tune to keep the server open long enough to process this request?
2019/10/02 10:59:25 [error] 10270#10270: *5 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: XXX, server: X.net, request: "GET /?km_source=blog HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "X.net", referrer: "https://www.X.net/"
2019/10/02 11:00:42 [error] 10270#10270: *34 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: XXX.net, request: "GET /wc-api/v2/orders?status=processing&page=10&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "X.net"
2019/10/02 11:07:53 [error] 13021#13021: *62 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: XXX.net, request: "GET /wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "X.net"
2019/10/02 11:13:45 [error] 15270#15270: *66 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: XXX.net, request: "GET /wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "XXX.net"
2019/10/02 11:15:44 [error] 16010#16010: *79 upstream timed out (110: Connection timed out) while reading response header from upstream, client: XXX, server: X.net, request: "GET /wc-api/v2/orders?status=processing&page=1&filter%5Blimit%5D=100&consumer_key=ck_XXX&consumer_secret=cs_XXX HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "X.net"
The first issue I notice is that your filters are only passing 2 variables when they should be passing 3:
add_filter( "woocommerce_rest_pre_insert_shop_order", "remove_user_agent_from_rest_api", 10, 3 );
Should do it.
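For completeness, here is a sketch of a callback that strips all four unwanted fields at once, registered with three accepted arguments as suggested above. The hook and function names are taken from the question or made up for illustration (strip_unwanted_order_fields is hypothetical); whether these hooks fire for the legacy wc-api/v2 endpoint is worth verifying, since the legacy API has its own response filters (e.g. woocommerce_api_order_response), so treat this as a starting point rather than a drop-in fix:

function strip_unwanted_order_fields( $response, $object, $request ) {
    // Top-level fields to drop from the order payload.
    foreach ( array( 'customer_user_agent', 'avatar_url', 'cogs_cost', 'cogs_total_cost' ) as $field ) {
        unset( $response->data[ $field ] );
    }
    // cogs_cost / cogs_total_cost also appear inside each line item.
    if ( ! empty( $response->data['line_items'] ) ) {
        foreach ( $response->data['line_items'] as $index => $item ) {
            unset( $response->data['line_items'][ $index ]['cogs_cost'] );
            unset( $response->data['line_items'][ $index ]['cogs_total_cost'] );
        }
    }
    return $response;
}
add_filter( 'woocommerce_rest_prepare_shop_order_object', 'strip_unwanted_order_fields', 10, 3 );
add_filter( 'woocommerce_rest_prepare_shop_order', 'strip_unwanted_order_fields', 10, 3 );

On the tuning side of the question: the errors above are nginx upstream read timeouts against PHP-FPM, so raising fastcgi_read_timeout in the PHP location block (its default is 60s) and PHP-FPM's request_terminate_timeout would keep the connection open longer, although shrinking the response is the more robust fix.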
This is my pom snippet for service providers
<serviceProviders>
  <serviceProvider>
    <name>StoreSite</name>
    <protocol>https</protocol>
    <host>https://somesiteurl.com</host>
    <path></path>
    <consumers>
      <consumer>
        <name>FrontSite</name>
        <pactUrl>http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest</pactUrl>
      </consumer>
    </consumers>
  </serviceProvider>
</serviceProviders>
After running pact:verify, I get the build error and stack trace below.
I can see the pact file generated in the localhost broker, but the verification keeps failing when the endpoint is changed to https.
[DEBUG] (s) name = StoreSite
[DEBUG] (s) protocol = https
[DEBUG] (s) host = https://somesiteurl.com
[DEBUG] (s) name = FrontSite
[DEBUG] (s) pactUrl = http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest
[DEBUG] (s) consumers = [au.com.dius.pact.provider.maven.Consumer()]
[DEBUG] (f) serviceProviders = [au.com.dius.pact.provider.maven.Provider(null, null, null, null)]
[DEBUG] -- end configuration --
Verifying a pact between FrontSite and StoreSite
[from URL http://[::1]:8080/pacts/provider/StoreSite/consumer/FrontSite/latest]
Valid sign up request
[DEBUG] Verifying via request/response
[DEBUG] Making request for provider au.com.dius.pact.provider.maven.Provider(null, null, null, null):
[DEBUG] method: POST
path: /api/v1/customers
headers: [Content-Type:application/json, User-Agent:Mozilla/5.0
matchers: [:]
body: au.com.dius.pact.model.OptionalBody(PRESENT, {"dob":"1969-12-17","pwd":"255577_G04QU","userId":"965839_R9G3O"})
Request Failed - https
Failures:
0) Verifying a pact between FrontSite and StoreSite - Valid sign up request
https
I tried to verify against a service called BusService that runs on https and got it to work like this. My example is not set up the same way as yours, but I believe the important differences are the addition of the <insecure>true</insecure> tag and that I only use the server name in the host tag: <host>localhost</host>.
<serviceProvider>
  <name>BusService</name>
  <protocol>https</protocol>
  <insecure>true</insecure>
  <host>localhost</host>
  <port>8443</port>
  <path>/</path>
  <pactBrokerUrl>http://localhost:8113/</pactBrokerUrl>
</serviceProvider>
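Applied to the snippet from the question, that would look roughly like this (the port is my assumption, since only a scheme-prefixed host was shown; adjust it to wherever the provider actually listens):

<serviceProvider>
  <name>StoreSite</name>
  <protocol>https</protocol>
  <insecure>true</insecure>
  <!-- host name only, without the scheme -->
  <host>somesiteurl.com</host>
  <!-- assumed HTTPS port; change if the provider listens elsewhere -->
  <port>443</port>
  <path></path>
  <consumers>
    <consumer>
      <name>FrontSite</name>
      <pactUrl>http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest</pactUrl>
    </consumer>
  </consumers>
</serviceProvider>

Note that <insecure>true</insecure> disables certificate verification, so only keep it for self-signed or otherwise untrusted test certificates.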
More specifically, will this work?
upstream backend {
    hash $request_uri consistent;
    server backend1.example.com weight=1;
    server backend2.example.com weight=2;
}
Will backend2.example.com receive twice as much traffic?
Also, what happens if a weight is changed or another server is added to the mix? Will the "only few keys will be remapped" behavior still hold?
The optional consistent parameter of the hash directive enables ketama consistent hash load balancing. Requests will be evenly distributed across all upstream servers based on the user-defined hashed key value. If an upstream server is added to or removed from an upstream group, only few keys will be remapped which will minimize cache misses in case of load balancing cache servers and other applications that accumulate state.
from https://www.nginx.com/resources/admin-guide/load-balancer/
In this configuration, the consistent hash takes precedence over the weights.
In other words, if an upstream block specifies both weights and a consistent hash, routing is driven primarily by the consistent hash.
The hash ring slots are then distributed across the servers according to their weights.
upstream consistent_test {
    server consistent_test.example.ru:80 weight=90;
    server consistent_test2.example.ru:80 weight=10;
    hash $arg_consistent consistent;
}
Experiment
1) Default state
upstream balancer_test {
    hash $arg_someid consistent;
    server server1.example.ru:8080;
    server server2.example.ru:8080;
    server server3.example.ru:8080 down;
}
Request hashes pinned to hosts:
server1.example.ru ==> 535
server2.example.ru ==> 462
server3.example.ru ==> 0
2) First step: enable the third node and set the weights
upstream balancer_test {
    hash $arg_someid consistent;
    server server1.example.ru:8080 weight=250;
    server server2.example.ru:8080 weight=500;
    server server3.example.ru:8080 weight=250;
}
Request hashes pinned to hosts:
server1.example.ru:8080 ==> 263
server2.example.ru:8080 ==> 473
server3.example.ru:8080 ==> 254
3) Second step: finish migrating the traffic and disable the old node
upstream balancer_test {
    hash $arg_someid consistent;
    server server1.example.ru:8080 down;
    server server2.example.ru:8080;
    server server3.example.ru:8080;
}
Request hashes pinned to hosts:
server1.example.ru:8080 ==> 0
server2.example.ru:8080 ==> 533
server3.example.ru:8080 ==> 464
server1.example.ru:
1) before = 463
2) on step 2 = 533
3) hash hits = 306
server2.example.ru:
1) before = 536
2) on step 1 = 263
3) hash hits = 148
server3.example.ru:
1) before = 255
2) on step 1 = 464
3) hash hits = 115