Can I create a Pact to run on a different hostname? I have been using the Pact provider rule and keeping the hostname as localhost, but now I'm trying to create a pact for an application that cannot run on localhost.
@Rule
public PactProviderRule provider = new PactProviderRule("ServiceNowClientRestClientProvider", "localhost", 8080, this);
Is it possible to change localhost to something else, and if so, are there additional configurations that I need? I've tried changing tests that work on localhost to the actual hostname that the code is using, but then they fail with various error messages: "Unresolved address", "Cannot assign requested address: bind", or "Address in use".
Ronald Holshausen responded with a good answer to my question; the full conversation is in the Pact Google forum post:
The hostname is passed through to the HTTP server library to start an HTTP server to act as the mock server. This server will be running on the same machine as the test (in fact, in the same JVM process). The HTTP server library will use the hostname to resolve an IP address, which will in turn resolve to a network interface on the machine, and the server's port will be bound to that interface.
It is not as simple as a yes/no answer. It is possible to do (there are standalone mock servers you can run on another machine), but the PactProviderRule always starts a mock server on the same host where the tests are running.
To achieve what you require, you would need to use one of the mock server implementations, and a new JUnit Rule would need to be implemented (preferably extending PactProviderRule).
There are a number of standalone pact mock servers:
https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-server
https://github.com/bethesque/pact-mock_service
https://github.com/pact-foundation/pact-reference/tree/master/rust/pact_mock_server_cli
The only valid values that can be used are: the hostname of the machine where the test is running, the IP address of the machine where the test is running, localhost, 127.0.0.1, or 0.0.0.0.
If a standalone mock server is started on another machine (say, from your example, hostname test.service-now.com and port 80), then the PactProviderRule will need to know that it should not try to start a new mock server but communicate with the one it has been provided with (via the address https://test.service-now.com).
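For illustration, here is a minimal consumer test sketch based on the rule from the question, binding the mock server to 0.0.0.0 so it listens on every interface of the test machine. The consumer name and request path are made up, and the imports assume pact-jvm-consumer-junit 3.5.x-style packages, which may differ in other versions:

import au.com.dius.pact.consumer.Pact;
import au.com.dius.pact.consumer.PactProviderRule;
import au.com.dius.pact.consumer.PactVerification;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.model.RequestResponsePact;
import org.junit.Rule;
import org.junit.Test;

import java.net.HttpURLConnection;
import java.net.URL;

import static org.junit.Assert.assertEquals;

public class ServiceNowClientPactTest {

    // "0.0.0.0" binds the mock server to all interfaces of the test machine;
    // the other valid host values are the machine's hostname, its IP address,
    // "localhost" and "127.0.0.1".
    @Rule
    public PactProviderRule provider =
            new PactProviderRule("ServiceNowClientRestClientProvider", "0.0.0.0", 8080, this);

    // The consumer name and the path below are invented for the example.
    @Pact(provider = "ServiceNowClientRestClientProvider", consumer = "ServiceNowClient")
    public RequestResponsePact createPact(PactDslWithProvider builder) {
        return builder
                .uponReceiving("a request for an incident")
                .path("/api/now/table/incident")
                .method("GET")
                .willRespondWith()
                .status(200)
                .toPact();
    }

    @Test
    @PactVerification("ServiceNowClientRestClientProvider")
    public void hitsTheMockServer() throws Exception {
        // Because the rule bound 0.0.0.0, this same request could be made from
        // another machine using the test machine's IP address instead of 127.0.0.1.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://127.0.0.1:8080/api/now/table/incident").openConnection();
        assertEquals(200, conn.getResponseCode());
    }
}

Anything beyond that, such as pointing the rule at a mock server already running on another machine like test.service-now.com, would require the custom JUnit Rule described above.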
You can do this in the Ruby version using pact-provider-proxy. However, the best use case for consumer-driven contracts is when you have development control over both the consumer and the provider, and this generally means that you can stand up an instance of the provider locally. If you are trying to test a public API, or an API you don't have development control over, Pact may not be the best tool for you. You can read more here about what Pact is not good for.
Related
I'm currently working on a project where we are using Google Cloud. Within the cloud we are using Cloud Run to provide our services. One of these services is rather complex and has many different configuration options. To validate how these configurations affect the quality of the results, and also to evaluate the quality of changes to the service, I would like to proceed as follows:
in addition to the existing service I deploy another instance of the service which contains the changes
I mirror all incoming requests and let both services process them; only the responses from the initial service are returned, but the responses from both services are stored
This allows me to create a detailed evaluation of the differences between the two services without having to provide the user with potentially worse responses.
For the implementation I have set up an NGINX instance which mirrors the requests. This is also deployed as a Cloud Run service. It accepts all requests and takes care of the authentication. The original service and the mirrored version have been configured so that they can only be accessed internally and should therefore be reached via a VPC network.
I have tried all possible combinations for the configuration of these parts but I always get 403 or 502 errors.
I have tried pointing the NGINX service at both the HTTP and HTTPS routes of the services, and I have tried all the VPC connector settings. When I set the ingress of the service to ALL, it works perfectly if I configure the service with HTTPS and port 443 in NGINX. As soon as I set the ingress to Internal, I get errors: 403 with HTTPS and 502 with HTTP.
Does anyone have experience in this regard and can give me tips on how to solve this problem? I would be very grateful for any help.
If your Cloud Run services are only internally accessible (ingress control set to internal only), you need to perform your requests from inside your VPC.
Therefore, as you correctly did, you attached a serverless VPC connector to your NGINX service.
The setup is correct. Now, why does it only work when you route ALL the egress traffic, and not only the private traffic, to your VPC connector?
In fact, Cloud Run is a public resource with a public URL, even if you set the ingress to internal. That parameter says "the traffic must come from the VPC"; it does not say "I'm plugged into the VPC with a private IP".
So, to go through your VPC and reach a public resource (your Cloud Run services), you need to route ALL the traffic to your VPC connector, even the public traffic.
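As a rough sketch, the relevant flags on the NGINX (mirroring) service would look something like this; the service, image, region and connector names are made up, and the two backend services would keep --ingress internal:

gcloud run deploy nginx-mirror \
  --image gcr.io/my-project/nginx-mirror \
  --region europe-west1 \
  --vpc-connector my-connector \
  --vpc-egress all-traffic \
  --ingress all

With --vpc-egress all-traffic, the calls NGINX makes to the public run.app URLs of the two backend services leave through the VPC connector, which is what their internal ingress setting requires.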
I'm testing gRPC with .NET Core and was looking for a GUI tool or something that can help me test my endpoint, similar to testing a REST API.
I found a proxy tool, grpc-json-proxy, that can be used with Postman (I also found another GUI tool, grpcox).
Using either tool gives an error like the following when trying to connect to the endpoint:
unable to do request err=[Post
http://localhost:5001/greet.Greeter/SayHello: dial tcp 127.0.0.1:5001:
connect: connection refused]
Any idea what could be the issue?
Most importantly, are you confident the gRPC server is listening on localhost:50051? You may confirm this (on Linux) using:
GRPC="50051"
ss --tcp --listening --processes "sport = :${GRPC}"
NOTE: you may need to run sudo ss ... to see the owning process
Or more simply:
telnet localhost 50051
If you get Connected to... that's a good sign
Then, if you're using either of these tools through Docker, you'll need to ensure the container can access the host's 50051 port. To do this, run the container with --net=host. This will make the host's port available to the container.
I use grpcurl.
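For reference, a typical grpcurl invocation against the endpoint from the question would look something like this; it assumes the default .NET template's greet.proto with a "name" field, that the server listens without TLS on port 5001 (adjust the port and TLS flags to match your Kestrel configuration), and that either server reflection is enabled or you pass the proto file with -proto:

grpcurl -plaintext -d '{"name": "World"}' localhost:5001 greet.Greeter/SayHello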
I am using the Apache Kafka KafkaConsumer in my Scala app to talk to a Kafka server, where the Kafka and ZooKeeper services are running in a Docker container on my VM (the Scala app is also running on this VM). I have set the KafkaConsumer property "bootstrap.servers" to 127.0.0.1:9092.
The KafkaConsumer does log "Sending coordinator request for group queuemanager_testGroup to broker 127.0.0.1:9092". The problem appears to be that the Kafka client code sets the coordinator values based on the response it receives, which contains responseBody={error_code=0,coordinator={node_id=0,host=e7059f0f6580,port=9092}}; that is how it sets the host for future connections. Subsequently it complains that it is unable to resolve the address e7059f0f6580.
The address e7059f0f6580 is the container ID of that docker container.
I have confirmed using telnet that my VM cannot resolve this as a hostname.
What setting do I need to change so that the Kafka broker in my Docker container returns localhost/127.0.0.1 as the host in its response? Or is there something else that I am missing or doing incorrectly?
Update
advertised.host.name is deprecated, and --override should be avoided.
Add/edit advertised.listeners to be the format of
[PROTOCOL]://[EXTERNAL.HOST.NAME]:[PORT]
Also make sure that the same PORT is listed in the listeners property
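For the single-VM Docker setup in the question, the broker's server.properties might look roughly like this (PLAINTEXT and port 9092 are taken from the question; adjust the advertised host to whatever the client machine can actually resolve):

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://127.0.0.1:9092

listeners controls what the broker binds to inside the container, while advertised.listeners is the address it hands back to clients, which is why clients were previously being given the container hostname e7059f0f6580.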
After investigating this problem for hours on end, I found that there is a way to
set the hostname while starting up the Kafka server, as follows:
kafka-server-start.sh config/server.properties --override advertised.host.name=xxx (in my case: localhost)
I have the following situation.
The webapp in my company is deployed to several environments before reaching live. Every testing environment is called qa-X and has a different IP address. What I would like to do is specify, in the Jenkins job "test app in qa-x", the app's IP for the X environment, so that my tests can run knowing only the app's URL.
Jenkins itself is outside the qa-x environments.
I have been looking around for solutions, but all of them break the other qa-X tests; for instance, changing /etc/hosts or changing the DNS server. What would be great is if I could specify just the IP in that job as a config parameter, and have that definition remain local to the job.
Any thoughts/ideas?
If I'm understanding your query correctly, you should look into creating a Parameterized build which would expose an environment variable with the desired server IP, which your test script could consume.
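As a small sketch: the job would define a string parameter (the name QA_APP_IP below is made up), which Jenkins exposes to the build steps as an environment variable of the same name, and the tests would read it instead of a hard-coded host:

// Reads the hypothetical QA_APP_IP parameter injected by the parameterized build.
public class QaTarget {

    public static String baseUrl() {
        String appIp = System.getenv("QA_APP_IP"); // set per-run in the Jenkins job
        if (appIp == null || appIp.isEmpty()) {
            appIp = "127.0.0.1";                   // fallback for local runs
        }
        return "http://" + appIp;
    }

    public static void main(String[] args) {
        System.out.println("Tests will run against " + baseUrl());
    }
}

This keeps the IP local to the job configuration, so nothing like /etc/hosts or the DNS server has to change for the other qa-X environments.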
I've been searching and trying for weeks now to find a solution to my issue that I can understand and easily implement, but I've had no joy. So I would be very grateful if someone could put me out of my misery.
I'm building an iPhone app similar in functionality to apps like "Air Video" and "Air Playit". The app should communicate with a server running on a remote host. This server should be able to execute a command sent by the iPhone to encode a video and stream it over HTTP.
In my case, my iPhone app sends commands to be executed on a remote host. The remote host is running a Python socket server listening, for example, on port 3333.
On the iPhone, I'm simply using CFStreamCreatePairWithSocketToHost, CFWriteStreamOpen and CFReadStreamOpen to connect, write and read data.
My remote host successfully intercepts the commands and starts the encoding.
To serve the content, I'm having to run a separate HTTP server (I'm using Python's SimpleHTTPServer), which listens on another port.
What I would like to do is use the same port for both system commands and http requests.
The apps I've mentioned above seem to do it that way, and I've noticed they have their own built-in web server.
I'm sure I'm missing something but please bear with me this is my first attempt at building an app.
Encode your system commands as special HTTP requests. Decide what to do (execute a command or serve content) based on the HTTP request, not on the incoming port. If you need to use separate HTTP servers (as you described), consider having a layer that receives everything from the devices and dispatches to the other servers (or ports) based on the request.
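To make the dispatch idea concrete, here is a rough sketch of a single server that accepts both command requests and content requests on the same port and distinguishes them by request path. It is written in Java with the JDK's built-in HttpServer purely for illustration (the same pattern applies to the Python server from the question); the port is taken from the question and the paths are invented:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class SingleSocketDispatcher {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(3333), 0);

        // Commands arrive as ordinary HTTP calls, e.g. POST /command/encode.
        server.createContext("/command", exchange ->
                respond(exchange, "would start encoding here"));

        // Everything else is treated as a content request, served over the same port.
        server.createContext("/", exchange ->
                respond(exchange, "would stream or serve the requested file here"));

        server.start();
    }

    private static void respond(HttpExchange exchange, String body) throws IOException {
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        exchange.sendResponseHeaders(200, bytes.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(bytes);
        }
    }
}

With everything multiplexed over HTTP like this, an encoding command becomes something like POST /command/encode while the streamed files remain ordinary GET requests, so only one port needs to be exposed.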