gRPC keepalive ping fails after the second time

I've tested gRPC keepalive using the official C++ helloworld example. However, the keepalive ping fails after the second time, and the client says Keepalive watchdog fired. Closing transport.
https://github.com/grpc/grpc/blob/master/examples/cpp/helloworld/greeter_client.cc
https://github.com/grpc/grpc/blob/master/examples/cpp/helloworld/greeter_server.cc
I've modified the client and server code as follows:
// client main()
// ....
// keepalive configuration
grpc::ChannelArguments args;
// allow keepalive pings even when there are no active calls
args.SetInt(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
// send a keepalive ping every 10 seconds
args.SetInt(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
// wait up to 3 seconds for the ping ack before closing the transport
args.SetInt(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 3000);
// 0 = no limit on pings without data
args.SetInt(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);

GreeterClient greeter(grpc::CreateCustomChannel(
    target_str, grpc::InsecureChannelCredentials(), args));

std::string user("world");
// send an RPC request at a 1-minute interval
while (true) {
  std::string reply = greeter.SayHello(user);
  std::cout << "Greeter received: " << reply << std::endl;
  std::this_thread::sleep_for(std::chrono::milliseconds(60 * 1000));
}
// server main()
// ....
// keepalive configuration
// allow keepalive pings even when there are no active calls
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
// 0 = no limit on pings without data
builder.AddChannelArgument(GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA, 0);
// tolerate pings without data at most every 10 seconds
builder.AddChannelArgument(GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS, 10000);
// ....
After building, run in a terminal:
$ export GRPC_VERBOSITY=debug
$ export GRPC_TRACE=http_keepalive,connectivity_state,http
$ ./greeter_server
$ ./greeter_client # in other terminal
client output:
...
I0115 17:09:27.910828000 123145533800448 chttp2_transport.cc:2821] ipv6:[::1]:50051: Start keepalive ping
I0115 17:09:27.912454000 123145534337024 chttp2_transport.cc:2845] ipv6:[::1]:50051: Finish keepalive ping
I0115 17:09:37.916751000 123145533800448 chttp2_transport.cc:806] W:0x7ffc2c010200 CLIENT [ipv6:[::1]:50051] state IDLE -> WRITING [KEEPALIVE_PING]
I0115 17:09:37.916792000 123145533800448 writing.cc:129] CLIENT: Ping sent [ipv6:[::1]:50051]: 0/0
I0115 17:09:37.916800000 123145533800448 chttp2_transport.cc:806] W:0x7ffc2c010200 CLIENT [ipv6:[::1]:50051] state WRITING -> WRITING [begin write in current thread]
I0115 17:09:37.916844000 123145533800448 chttp2_transport.cc:806] W:0x7ffc2c010200 CLIENT [ipv6:[::1]:50051] state WRITING -> IDLE [finish writing]
I0115 17:09:37.916851000 123145533800448 chttp2_transport.cc:2821] ipv6:[::1]:50051: Start keepalive ping
I0115 17:09:40.921260000 123145534337024 chttp2_transport.cc:2883] ipv6:[::1]:50051: Keepalive watchdog fired. Closing transport.
I0115 17:09:40.921305000 123145534337024 chttp2_transport.cc:2912] transport 0x7ffc2c010200 set connectivity_state=4
I0115 17:09:40.921316000 123145534337024 connectivity_state.cc:159] ConnectivityStateTracker client_transport[0x7ffc2c0104a0]: READY -> SHUTDOWN (close_transport, OK)
I0115 17:09:40.921325000 123145534337024 connectivity_state.cc:167] ConnectivityStateTracker client_transport[0x7ffc2c0104a0]: notifying watcher 0x7ffc2b606490: READY -> SHUTDOWN
I0115 17:09:40.921521000 123145534337024 connectivity_state.cc:79] watcher 0x7ffc2b606490: delivering async notification for SHUTDOWN (OK)
I0115 17:09:40.921595000 123145534337024 connectivity_state.cc:159] ConnectivityStateTracker client_channel[0x7ffc2b607958]: READY -> IDLE (helper, OK)
server output:
...
I0115 17:09:27.910972000 123145419530240 chttp2_transport.cc:806] W:0x7ffb1a809000 SERVER [ipv6:[::1]:52295] state IDLE -> WRITING [PING_RESPONSE]
I0115 17:09:27.911006000 123145419530240 chttp2_transport.cc:806] W:0x7ffb1a809000 SERVER [ipv6:[::1]:52295] state WRITING -> WRITING [begin write in current thread]
I0115 17:09:27.911030000 123145419530240 chttp2_transport.cc:806] W:0x7ffb1a809000 SERVER [ipv6:[::1]:52295] state WRITING -> IDLE [finish writing]
I0115 17:09:37.916957000 123145420066816 chttp2_transport.cc:806] W:0x7ffb1a809000 SERVER [ipv6:[::1]:52295] state IDLE -> WRITING [PING_RESPONSE]
I0115 17:09:37.916989000 123145420066816 chttp2_transport.cc:806] W:0x7ffb1a809000 SERVER [ipv6:[::1]:52295] state WRITING -> WRITING [begin write in current thread]
I0115 17:09:37.917010000 123145420066816 chttp2_transport.cc:806] W:0x7ffb1a809000 SERVER [ipv6:[::1]:52295] state WRITING -> IDLE [finish writing]
I0115 17:09:40.921613000 123145419530240 chttp2_transport.cc:2912] transport 0x7ffb1a809000 set connectivity_state=4
I0115 17:09:40.921664000 123145419530240 connectivity_state.cc:159] ConnectivityStateTracker server_transport[0x7ffb1a8092a0]: READY -> SHUTDOWN (close_transport, OK)
I0115 17:09:40.921674000 123145419530240 connectivity_state.cc:167] ConnectivityStateTracker server_transport[0x7ffb1a8092a0]: notifying watcher 0x7ffb19905040: READY -> SHUTDOWN
I0115 17:09:40.921743000 123145419530240 connectivity_state.cc:79] watcher 0x7ffb19905040: delivering async notification for SHUTDOWN (OK)
I0115 17:09:40.921758000 123145419530240 chttp2_transport.cc:1837] perform_transport_op[t=0x7ffb1a809000]: SET_ACCEPT_STREAM:(nil)((nil),...)
Environment:
gRPC 1.42.0
macOS (Intel) 11.6
gRPC C++ keepalive documentation:
https://grpc.github.io/grpc/cpp/md_doc_keepalive.html

Related

Which directive can I run before ssl_certificate_by_lua_block to get user-agent information in OpenResty?

I am using OpenResty to generate SSL certificates dynamically.
I am trying to find out the user-agent of the request before ssl_certificate_by_lua_block runs, and decide whether I want to continue with the request or not.
I found that the ssl_client_hello_by_lua_block directive runs before ssl_certificate_by_lua_block, but if I try to execute ngx.req.get_headers()["user-agent"] inside ssl_client_hello_by_lua_block I get the following error:
2022/06/13 09:20:58 [error] 31918#31918: *18 lua entry thread aborted: runtime error: ssl_client_hello_by_lua:6: API disabled in the current context
stack traceback:
coroutine 0:
[C]: in function 'error'
/usr/local/openresty/lualib/resty/core/request.lua:140: in function 'get_headers'
ssl_client_hello_by_lua:6: in main chunk, context: ssl_client_hello_by_lua*, client: 1.2.3.4, server: 0.0.0.0:443
I tried rewrite_by_lua_block, but it runs after ssl_certificate_by_lua_block.
Is there any directive that lets me access ngx.req.get_headers()["user-agent"] and also runs before ssl_certificate_by_lua_block?
My nginx conf for reference:
nginx.conf
# HTTPS server
server {
    listen 443 ssl;

    rewrite_by_lua_block {
        local user_agent = ngx.req.get_headers()["user-agent"]
        ngx.log(ngx.ERR, "rewrite_by_lua_block user_agent -- > ", user_agent)
    }

    ssl_client_hello_by_lua_block {
        ngx.log(ngx.ERR, "I am from ssl_client_hello_by_lua_block")
        local ssl_clt = require "ngx.ssl.clienthello"
        local host, err = ssl_clt.get_client_hello_server_name()
        ngx.log(ngx.ERR, "hosts -- > ", host)
        -- local user_agent = ngx.req.get_headers()["user-agent"]
        -- ngx.log(ngx.ERR, "user_agent -- > ", user_agent)
    }

    ssl_certificate_by_lua_block {
        auto_ssl:ssl_certificate()
    }

    ssl_certificate /etc/ssl/resty-auto-ssl-fallback.crt;
    ssl_certificate_key /etc/ssl/resty-auto-ssl-fallback.key;

    location / {
        proxy_pass http://backend_proxy$request_uri;
    }
}
In case someone else is facing the same issue: the OpenResty mailing list helped me with this.
I was not thinking about it correctly. The certificate negotiation happens before the client sends any User-Agent data (the request headers only arrive after the handshake completes), so you cannot avoid issuing the certificate at that stage. Hard luck.
Once the handshake (Client/Server Hello) has completed, the server has the User-Agent, and you can do the blocking in access_by_lua_block.
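For illustration, here is a minimal sketch of that approach, assuming the goal is simply to reject requests whose User-Agent contains a blocked string (the "BadBot" pattern is a made-up example, not from the original question):

access_by_lua_block {
    -- runs after the TLS handshake, once request headers are available
    local user_agent = ngx.req.get_headers()["user-agent"]
    if user_agent and string.find(user_agent, "BadBot", 1, true) then
        -- reject the request before it reaches the upstream
        return ngx.exit(ngx.HTTP_FORBIDDEN)
    end
}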

Redirect gRPC traffic using nginx from HTTPS to HTTP

I am planning to redirect HTTPS and HTTP gRPC traffic using nginx for a special use case. I was able to recreate the problem using a hello world example. The main documents I have used are [Introducing gRPC Support with NGINX 1.13.10][1] and [Nginx as Reverse Proxy with GRPC][2].
First, I created certificate files for the SSL connection using:
openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 365 -out server.crt -subj '/CN=localhost'
When I follow the article, I am able to successfully route traffic from a secure gRPC client to a secure gRPC server. However, my use case needs to forward traffic from a secure nginx port to an insecure gRPC server. The client, nginx.conf, and server code are attached below.
nginx.conf (Needs to reroute traffic to an insecure port)
upstream dev {
    server localhost:1338;
}

server {
    listen 1449 ssl http2;
    ssl_certificate /ssl/server.crt; # Enter your certificate location
    ssl_certificate_key /ssl/server.key;

    location /helloworld.Greeter {
        grpc_pass grpcs://dev;
    }
}
client.py (Includes ssl certificate to hit nginx secure endpoint)
from __future__ import print_function
import logging

import grpc
import helloworld_pb2
import helloworld_pb2_grpc


def run():
    # NOTE(gRPC Python Team): .close() is possible on a channel and should be
    # used in circumstances in which the with statement does not fit the needs
    # of the code.
    host = 'localhost'
    port = 1449
    with open('/home/ubuntu/Documents/ludex_repos/nginx-grpc/server.crt', 'rb') as f:
        trusted_certs = f.read()
    credentials = grpc.ssl_channel_credentials(root_certificates=trusted_certs)
    with grpc.secure_channel(f'{host}:{port}', credentials) as channel:
        stub = helloworld_pb2_grpc.GreeterStub(channel)
        response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
    print(f"========================Greeter client received: {response.message}===============================")


if __name__ == '__main__':
    logging.basicConfig()
    run()
server.py (Has insecure port)
from concurrent import futures
import time
import logging

import grpc
import helloworld_pb2
import helloworld_pb2_grpc

_ONE_DAY_IN_SECONDS = 60 * 60 * 24


class Greeter(helloworld_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        return helloworld_pb2.HelloReply(message='Hello, %s!' % request.name)


def serve():
    port = '1338'
    with open('/ssl/server.key', 'rb') as f:
        private_key = f.read()
    with open('/ssl/server.crt', 'rb') as f:
        certificate_chain = f.read()
    server_credentials = grpc.ssl_server_credentials(((private_key, certificate_chain,),))
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    # If I change this to a secure port then it routes traffic correctly via nginx
    # server.add_secure_port('[::]:' + port, server_credentials)
    server.add_insecure_port('[::]:' + port)
    print("Server Started...")
    server.start()
    try:
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    logging.basicConfig()
    serve()
Secure to secure response
========================Greeter client received: Hello, you!===============================
Secure to insecure response
Traceback (most recent call last):
File "greeter_client.py", line 45, in <module>
run()
File "greeter_client.py", line 39, in run
response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
File "/home/ubuntu/anaconda3/envs/fp/lib/python3.8/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/ubuntu/anaconda3/envs/fp/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Received http2 header with status: 502"
debug_error_string = "{"created":"#1641485952.541123035","description":"Received http2 :status header with non-200 OK status","file":"src/core/ext/filters/http/client/http_client_filter.cc","file_line":132,"grpc_message":"Received http2 header with status: 502","grpc_status":14,"value":"502"}"
>
I understand a reverse proxy is possible, and I've seen examples forwarding traffic from HTTPS to HTTP for web pages, but I'm not sure whether it is possible to do this with gRPC traffic.
[1]: https://www.nginx.com/blog/nginx-1-13-10-grpc/
[2]: https://medium.com/nirman-tech-blog/nginx-as-reverse-proxy-with-grpc-820d35642bff
Try using grpc_pass grpc://... instead of grpcs://...
This updated blog post might help: https://www.nginx.com/blog/deploying-nginx-plus-as-an-api-gateway-part-3-publishing-grpc-services/
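To make that concrete, here is a minimal sketch of the adjusted server block, reusing the names from the question. TLS is terminated by nginx, and the proxied connection to the insecure gRPC server is plaintext, which is why grpc:// rather than grpcs:// is used:

upstream dev {
    server localhost:1338;
}

server {
    listen 1449 ssl http2;
    ssl_certificate /ssl/server.crt;
    ssl_certificate_key /ssl/server.key;

    location /helloworld.Greeter {
        # plaintext HTTP/2 to the backend; the backend keeps add_insecure_port()
        grpc_pass grpc://dev;
    }
}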

nginx blocking request till current request finishes

Boiling my question down to the simplest possible form: I have a simple Flask webserver that has a GET handler like this:
import os
import time
from flask import Flask

app = Flask(__name__)

@app.route('/', methods=['GET'])
def get_handler():
    t = os.environ.get("SLOW_APP")
    app_type = "Fast"
    if t == "1":
        app_type = "Slow"
        time.sleep(20)
    return "Hello from Flask, app type = %s" % app_type
I am running this app on two different ports: one without the SLOW_APP environment variable set, on port 8000, and the other with SLOW_APP set, on port 8001.
Next I have an nginx reverse proxy that has these two app server instances in its upstream. I am running everything with Docker, so my nginx conf looks like this:
upstream myproject {
    server host.docker.internal:8000;
    server host.docker.internal:8001;
}

server {
    listen 8888;
    #server_name www.domain.com;

    location / {
        proxy_pass http://myproject;
    }
}
It works, except that if I open two browser windows and type localhost, the first request hits the slow server and takes 20 seconds, and during this time the second browser appears to block, waiting for the first request to finish. Eventually I see that the first request was serviced by the "slow" server and the second by the "fast" server (no time.sleep()). Why does nginx appear to block the second request until the first one finishes?
No, if the first request goes to the slow server (where it takes 20 seconds) and during that delay I make another request from the browser, it goes to the second server, but only after the first one has finished.
I have worked with our Engineering Team on this and can share the following insights:
Our Lab environment
Lua
load_module modules/ngx_http_lua_module-debug.so;
...
upstream app {
    server 127.0.0.1:1234;
    server 127.0.0.1:2345;
}

server {
    listen 1234;
    location / {
        content_by_lua_block {
            ngx.log(ngx.WARN, "accepted by fast")
            ngx.say("accepted by fast")
        }
    }
}

server {
    listen 2345;
    location / {
        content_by_lua_block {
            ngx.log(ngx.WARN, "accepted by slow")
            ngx.say("accepted by slow")
            ngx.sleep(5);
        }
    }
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
This is the same setup as it would be with another third-party application we are proxying traffic to. But I have also tested the same thing with the NGINX configuration shared in your question and two Node.js based applications as upstreams.
NodeJS
Normal
const express = require('express');
const app = express();
const port = 3001;

app.get('/', (req, res) => {
    res.send('Hello World')
});

app.listen(port, () => {
    console.log(`Example app listening on ${port}`)
})
Slow
const express = require('express');
const app = express();
const port = 3002;

app.get('/', (req, res) => {
    setTimeout(() => {
        res.send('Hello World')
    }, 5000);
});

app.listen(port, () => {
    console.log(`Example app listening on ${port}`)
})
The Test
As we are using NGINX OSS, the load-balancing method will be round-robin (RR). Our first test, from another server, uses ab (ApacheBench). The result:
Concurrency Level: 10
Time taken for tests: 25.056 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 17400 bytes
HTML transferred: 1700 bytes
Requests per second: 3.99 [#/sec] (mean)
Time per request: 2505.585 [ms] (mean)
Time per request: 250.559 [ms] (mean, across all concurrent requests)
Transfer rate: 0.68 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.7 0 5
Processing: 0 2505 2514.3 5001 5012
Waiting: 0 2504 2514.3 5001 5012
Total: 1 2505 2514.3 5001 5012
Percentage of the requests served within a certain time (ms)
50% 5001
66% 5005
75% 5007
80% 5007
90% 5010
95% 5011
98% 5012
99% 5012
100% 5012 (longest request)
50% of all requests are slow. That's totally okay, because we have one "slow" instance. The same test with curl gives the same result. Based on the debug log of the NGINX server, we saw that the requests were processed as they came in and were sent to either the slow or the fast backend (based on round-robin).
2021/04/08 15:26:18 [debug] 8995#8995: *1 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *1 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *4 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *4 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *5 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *5 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *7 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *7 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *10 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *10 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *13 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *13 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *16 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *16 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *19 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *19 get rr peer, current: 000055B815BD4540 0
So, given that, the behaviour of "nginx blocking request till current request finishes" is not reproducible on the NGINX instance itself. But I was able to reproduce your issue in the Chrome browser: hitting the slow instance makes the other browser window wait until the first one gets its response. After some memory analysis and debugging on the client side, I came across the browser's connection pool.
https://www.chromium.org/developers/design-documents/network-stack
The browser reuses the same, already established connection to the server. If this connection is occupied by the waiting request (same data, same cookies, ...), it will not open a new connection from the pool; it will wait for the first request to finish. You can work around this by adding a cache-buster, a new header, or a new cookie to the request, something like:
http://10.172.1.120:8080/?ofdfu9aisdhffadf. Send this in a new browser window while you are waiting in the other one for the response, and it will get an immediate reply (assuming no other request reached the backend, because with round-robin, if one request went to the slow instance the next one goes to the fast one).
The same applies if you send the requests from different clients; that works as well.
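As a quick way to see this from the command line: two separate curl processes open separate TCP connections, so they are not serialized by a shared browser connection pool. This is just a sketch, assuming the proxy listens on port 8888 as in the question's nginx conf:

# first request may land on the slow upstream and take ~20 s
curl http://localhost:8888/ &
# second request is dispatched immediately to the other upstream
curl http://localhost:8888/ &
wait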

Load-balance UDP syslog with a custom health check

I set up NGINX Plus to load-balance UDP syslog traffic. Here's a snippet from nginx.conf:
stream {
    upstream syslog_standard {
        zone syslog_zone 64k;
        server cp01.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp02.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp03.woolford.io:1514 max_fails=1 fail_timeout=10s;
    }

    server {
        listen 514 udp;
        proxy_pass syslog_standard;
        proxy_bind $remote_addr transparent;
        health_check udp;
    }
}
I was a little surprised that NGINX Plus could perform health checks on UDP, since UDP is, by design, unreliable: with no acknowledgment, the messages effectively go into a black hole.
I'm trying to set up a somewhat fault-tolerant and scalable syslog ingestion pipeline. The loss of a node should be detected by a health check, and the node temporarily removed from the list of available servers.
This didn't work, despite the UDP health check. I think the UDP health check only works for services that respond to the caller (e.g. DNS). Since syslog doesn't respond, there's no way to check for errors, e.g. using match.
The process that ingests the syslog messages listens on port 1514 and has a REST interface on port 8073. If the ingest process is healthy, a GET request to /connectors/syslog/status on port 8073 returns:
{
    "name": "syslog",
    "connector": {
        "state": "RUNNING",
        "worker_id": "10.0.1.41:8073"
    },
    "tasks": [
        {
            "id": 0,
            "state": "RUNNING",
            "worker_id": "10.0.1.41:8073"
        }
    ],
    "type": "source"
}
I'd like to create a custom check to see that ingest is running. Is that possible with NGINX Plus? Can we check the health on a completely different port?
This is what I did:
stream {
    upstream syslog_standard {
        zone syslog_zone 64k;
        server cp01.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp02.woolford.io:1514 max_fails=1 fail_timeout=10s;
        server cp03.woolford.io:1514 max_fails=1 fail_timeout=10s;
    }

    match syslog_ingest_test {
        send "GET /connectors/syslog/status HTTP/1.0\r\nHost: localhost\r\n\r\n";
        expect ~* "RUNNING";
    }

    server {
        listen 514 udp;
        proxy_pass syslog_standard;
        proxy_bind $remote_addr transparent;
        health_check match=syslog_ingest_test port=8073;
    }
}
The match=syslog_ingest_test health check sends a GET request to port 8073 (the port that hosts the health-check endpoint of the ingest process) and confirms that the response contains RUNNING.
I can toggle the service off and on, and NGINX detects it and reacts accordingly.
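For reference, a roughly equivalent probe can be run by hand with curl against one of the backends (hostnames as in the question); the health check passes as long as the response contains RUNNING:

# manual check, roughly what the match block sends on port 8073
curl -s http://cp01.woolford.io:8073/connectors/syslog/status
# expect: ... "state": "RUNNING" ...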

Unable to verify https endpoint with pact-jvm-provider-maven_2.11 in pact broker

This is my POM snippet for the service providers:
<serviceProviders>
    <serviceProvider>
        <name>StoreSite</name>
        <protocol>https</protocol>
        <host>https://somesiteurl.com</host>
        <path></path>
        <consumers>
            <consumer>
                <name>FrontSite</name>
                <pactUrl>http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest</pactUrl>
            </consumer>
        </consumers>
    </serviceProvider>
</serviceProviders>
After the pact:verify operation, I get the build error with stack trace below.
I can see the pact file generated in the localhost broker, but verification keeps failing when the endpoint is changed to https.
[DEBUG] (s) name = StoreSite
[DEBUG] (s) protocol = https
[DEBUG] (s) host = https://somesiteurl.com
[DEBUG] (s) name = FrontSite
[DEBUG] (s) pactUrl = http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest
[DEBUG] (s) consumers = [au.com.dius.pact.provider.maven.Consumer()]
[DEBUG] (f) serviceProviders = [au.com.dius.pact.provider.maven.Provider(null, null, null, null)]
[DEBUG] -- end configuration --
Verifying a pact between FrontSite and StoreSite
[from URL http://[::1]:8080/pacts/provider/StoreSite/consumer/FrontSite/latest]
Valid sign up request
[DEBUG] Verifying via request/response
[DEBUG] Making request for provider au.com.dius.pact.provider.maven.Provider(null, null, null, null):
[DEBUG] method: POST
path: /api/v1/customers
headers: [Content-Type:application/json, User-Agent:Mozilla/5.0
matchers: [:]
body: au.com.dius.pact.model.OptionalBody(PRESENT, {"dob":"1969-12-17","pwd":"255577_G04QU","userId":"965839_R9G3O"})
Request Failed - https
Failures:
0) Verifying a pact between FrontSite and StoreSite - Valid sign up request
https
I tried to verify against a service called BusService that runs on https and got it to work like this. My example is not set up the same way as yours, but I believe the important differences are the addition of the <insecure>true</insecure> tag and that I only used the server name in the host tag: <host>localhost</host>.
<serviceProvider>
    <name>BusService</name>
    <protocol>https</protocol>
    <insecure>true</insecure>
    <host>localhost</host>
    <port>8443</port>
    <path>/</path>
    <pactBrokerUrl>http://localhost:8113/</pactBrokerUrl>
</serviceProvider>
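Applying the same idea to the configuration from the question might look roughly like the sketch below. The hostname, port, and <insecure> flag are assumptions: keep only the bare hostname in <host>, let <protocol> carry the scheme, and set <insecure>true</insecure> only if the HTTPS endpoint uses a self-signed or otherwise untrusted certificate.

<serviceProviders>
    <serviceProvider>
        <name>StoreSite</name>
        <protocol>https</protocol>
        <!-- assumption: only needed for self-signed/untrusted certificates -->
        <insecure>true</insecure>
        <!-- bare hostname, no scheme -->
        <host>somesiteurl.com</host>
        <port>443</port>
        <path></path>
        <consumers>
            <consumer>
                <name>FrontSite</name>
                <pactUrl>http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest</pactUrl>
            </consumer>
        </consumers>
    </serviceProvider>
</serviceProviders>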
