Unable to verify https endpoint with pact-jvm-provider-maven_2.11 in pact broker - pact-jvm

This is my pom snippet for the service providers:
<serviceProviders>
  <serviceProvider>
    <name>StoreSite</name>
    <protocol>https</protocol>
    <host>https://somesiteurl.com</host>
    <path></path>
    <consumers>
      <consumer>
        <name>FrontSite</name>
        <pactUrl>http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest</pactUrl>
      </consumer>
    </consumers>
  </serviceProvider>
</serviceProviders>
After running the pact:verify goal, I get the build error below with a stack trace.
I can see the pact file in my localhost broker, but verification keeps failing once the endpoint is changed to https.
[DEBUG] (s) name = StoreSite
[DEBUG] (s) protocol = https
[DEBUG] (s) host = https://somesiteurl.com
[DEBUG] (s) name = FrontSite
[DEBUG] (s) pactUrl = http://[::1]:8080/pacts/provider/StoreSvc/consumer/SiteSvc/latest
[DEBUG] (s) consumers = [au.com.dius.pact.provider.maven.Consumer()]
[DEBUG] (f) serviceProviders = [au.com.dius.pact.provider.maven.Provider(null, null, null, null)]
[DEBUG] -- end configuration --
Verifying a pact between FrontSite and StoreSite
[from URL http://[::1]:8080/pacts/provider/StoreSite/consumer/FrontSite/latest]
Valid sign up request
[DEBUG] Verifying via request/response
[DEBUG] Making request for provider au.com.dius.pact.provider.maven.Provider(null, null, null, null):
[DEBUG] method: POST
path: /api/v1/customers
headers: [Content-Type:application/json, User-Agent:Mozilla/5.0
matchers: [:]
body: au.com.dius.pact.model.OptionalBody(PRESENT, {"dob":"1969-12-17","pwd":"255577_G04QU","userId":"965839_R9G3O"})
Request Failed - https
Failures:
0) Verifying a pact between FrontSite and StoreSite - Valid sign up request
https

I tried to verify against a service called BusService that runs on https and got it to work like this. My example is not set up the same way as yours, but I believe the important differences are the addition of the tag <insecure>true</insecure> and that I used only the server name in the host tag: <host>localhost</host>.
<serviceProvider>
  <name>BusService</name>
  <protocol>https</protocol>
  <insecure>true</insecure>
  <host>localhost</host>
  <port>8443</port>
  <path>/</path>
  <pactBrokerUrl>http://localhost:8113/</pactBrokerUrl>
</serviceProvider>
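For context on what <insecure>true</insecure> buys you: it tells the verifier to skip TLS certificate verification, which is what you need when the provider runs https with a self-signed certificate. A rough sketch of the equivalent idea in Python (illustrative only; pact-jvm does this internally via its own HTTP client):

```python
import ssl

# Illustrative sketch only: an SSL context that skips certificate checks,
# which is conceptually what pact-jvm's <insecure>true</insecure> enables.
ctx = ssl.create_default_context()
ctx.check_hostname = False       # don't verify the hostname in the cert
ctx.verify_mode = ssl.CERT_NONE  # don't verify the certificate chain

# A verifier would then make its https requests through this context.
```

Note that, as in the working config above, the host is given without a scheme; the protocol and port are configured separately.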

Related

PactNet - HttpPost Test Fails with 500 internal server error

I am trying to send an HTTP Post request from my test method to my Pactnet mock service. The following is the log generated -
[INFO][pact_mock_server::hyper_server] Received request HTTP Request ( method: POST, path: /api/v1/post-txn, query: None, headers: Some({"host": ["127.0.0.1:62047"], "content-length": ["160"], "content-type": ["application/json; charset=utf-8"]}), body: Present(160 bytes, application/json;charset=utf-8) )
[INFO][pact_matching] comparing to expected HTTP Request ( method: POST, path: /api/v1/post-txn, query: None, headers: Some({"Content-Type": ["application/json; charset=utf-8"]}), body: Present(114 bytes, application/json) )
From the log information, the received request and the expected request look the same to me. However, the test fails with the below exception message:
{StatusCode: 500, ReasonPhrase: 'Internal Server Error', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
{
Access-Control-Allow-Origin: *
x-pact: Request-Mismatch
Date: Thu, 24 Mar 2022 05:16:31 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 648
}}
Could someone help me understand what is wrong with my received and expected requests, and where the mismatch mentioned in the exception details is? I have spent a lot of time debugging, yet I am unable to find what exactly the issue is. Thanks in advance.
If you increase the log level to debug, you should be able to compare the JSON body in the request to what's expected. The bodies look to be different, as indicated by the number of bytes expected vs actual (114 vs 160).
Additionally, there should be an error message indicating why it failed. If there isn't, that could be a bug.
On close inspection, the Content-Type header also differs (application/json vs application/json; charset=utf-8).
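Once debug logging shows both payloads, a mechanical diff is quicker than eyeballing. A small sketch with made-up payloads (the real 114- vs 160-byte bodies only appear at debug level, so these field names are hypothetical):

```python
import json

# Hypothetical payloads standing in for the logged expected (114-byte) and
# actual (160-byte) request bodies; the field names here are made up.
expected_body = '{"txnId": "abc", "amount": 100}'
actual_body = '{"txnId": "abc", "amount": 100, "timestamp": "2022-03-24T05:16:31Z"}'

def diff_keys(expected_json, actual_json):
    """Return (missing, extra) top-level keys between two JSON objects."""
    expected, actual = json.loads(expected_json), json.loads(actual_json)
    missing = set(expected) - set(actual)  # expected but not received
    extra = set(actual) - set(expected)    # received but not expected
    return missing, extra

missing, extra = diff_keys(expected_body, actual_body)
print("missing:", missing)  # set()
print("extra:", extra)      # {'timestamp'}
```

Any key in either set (or a value mismatch on a shared key) is enough for the mock server to answer with x-pact: Request-Mismatch.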

nginx blocking request till current request finishes

Boiling my question down to the simplest possible: I have a simple Flask webserver that has a GET handler like this:
import os
import time

from flask import Flask

app = Flask(__name__)

@app.route('/', methods=['GET'])
def get_handler():
    t = os.environ.get("SLOW_APP")
    app_type = "Fast"
    if t == "1":
        app_type = "Slow"
        time.sleep(20)
    return "Hello from Flask, app type = %s" % app_type
I am running this app on two different ports: one without the SLOW_APP environment variable set on port 8000 and the other with the SLOW_APP environment variable set on port 8001.
Next I have an nginx reverse proxy that has these two appserver instances in its upstream. I am running everything using docker so my nginx conf looks like this:
upstream myproject {
    server host.docker.internal:8000;
    server host.docker.internal:8001;
}

server {
    listen 8888;
    #server_name www.domain.com;

    location / {
        proxy_pass http://myproject;
    }
}
It works except that if I open two browser windows and type localhost, it first hits the slow server where it takes 20 seconds and during this time the second browser appears to block waiting for the first request to finish. Eventually I see that the first request was serviced by the "slow" server and the second by the "fast" server (no time.sleep()). Why does nginx appear to block the second request till the first one finishes?
No — if the first request goes to the slow server (where it takes 20 seconds) and I make another request from the browser during that delay, it goes to the second server, but only after the first one finishes.
I have worked with our Engineering Team on this and can share the following insights:
Our Lab environment
Lua
load_module modules/ngx_http_lua_module-debug.so;
...
upstream app {
    server 127.0.0.1:1234;
    server 127.0.0.1:2345;
}

server {
    listen 1234;
    location / {
        content_by_lua_block {
            ngx.log(ngx.WARN, "accepted by fast")
            ngx.say("accepted by fast")
        }
    }
}

server {
    listen 2345;
    location / {
        content_by_lua_block {
            ngx.log(ngx.WARN, "accepted by slow")
            ngx.say("accepted by slow")
            ngx.sleep(5);
        }
    }
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
This mirrors the setup we use when proxying traffic to a third-party application. I have also run the same test with the NGINX configuration shared in your question and two Node.js applications as upstreams.
NodeJS
Normal
const express = require('express');
const app = express();
const port = 3001;

app.get('/', (req, res) => {
    res.send('Hello World')
});

app.listen(port, () => {
    console.log(`Example app listening on ${port}`)
})
Slow
const express = require('express');
const app = express();
const port = 3002;

app.get('/', (req, res) => {
    setTimeout(() => {
        res.send('Hello World')
    }, 5000);
});

app.listen(port, () => {
    console.log(`Example app listening on ${port}`)
})
The Test
As we are using NGINX OSS, the load-balancing algorithm will be round robin (RR). Our first test was run from another server using ab (ApacheBench). The result:
Concurrency Level: 10
Time taken for tests: 25.056 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 17400 bytes
HTML transferred: 1700 bytes
Requests per second: 3.99 [#/sec] (mean)
Time per request: 2505.585 [ms] (mean)
Time per request: 250.559 [ms] (mean, across all concurrent requests)
Transfer rate: 0.68 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.7 0 5
Processing: 0 2505 2514.3 5001 5012
Waiting: 0 2504 2514.3 5001 5012
Total: 1 2505 2514.3 5001 5012
Percentage of the requests served within a certain time (ms)
50% 5001
66% 5005
75% 5007
80% 5007
90% 5010
95% 5011
98% 5012
99% 5012
100% 5012 (longest request)
50% of all requests are slow. That's expected, because we have one "slow" instance. The same test with curl produced the same result. Based on the debug log of the NGINX server, we saw that the requests were processed as they came in and were sent to either the slow or the fast backend (based on round robin).
2021/04/08 15:26:18 [debug] 8995#8995: *1 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *1 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *4 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *4 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *5 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *5 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *7 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *7 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *10 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *10 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *13 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *13 get rr peer, current: 000055B815BD4540 0
2021/04/08 15:26:18 [debug] 8995#8995: *16 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *16 get rr peer, current: 000055B815BD4388 -100
2021/04/08 15:26:18 [debug] 8995#8995: *19 get rr peer, try: 2
2021/04/08 15:26:18 [debug] 8995#8995: *19 get rr peer, current: 000055B815BD4540 0
So that means the behaviour of "nginx blocking requests till the current request finishes" is not reproducible on the instance. But I was able to reproduce your issue in the Chrome browser: hitting the slow instance makes the other browser window wait until the first one gets its response. After some memory analysis and debugging on the client side, I came across the browser's connection pool.
https://www.chromium.org/developers/design-documents/network-stack
The browser reuses the same, already-established connection to the server. If that connection is occupied by the waiting request (same data, same cookies...), it will not open a new connection from the pool; it waits for the first request to finish. You can work around this by adding a cache-buster, a new header, or a new cookie to the request. Something like:
http://10.172.1.120:8080/?ofdfu9aisdhffadf. Send this in a new browser window while you are waiting for the response in the other one. You will see an immediate response (provided no other request went to the backend, because with round robin, if one request went to the slow instance, the next one goes to the fast one).
The same applies if you send requests from different clients; that works as well.
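A small sketch of generating the cache-buster URLs mentioned above (the cb parameter name is arbitrary; any unique query string works):

```python
import uuid

def cache_busted(url):
    """Append a random query parameter so the browser treats the request
    as a distinct resource instead of queueing it behind the in-flight
    request on the reused connection."""
    sep = '&' if '?' in url else '?'
    return f"{url}{sep}cb={uuid.uuid4().hex}"

print(cache_busted("http://10.172.1.120:8080/"))
# e.g. http://10.172.1.120:8080/?cb=<random hex>
```

Each call yields a different URL, which is the whole point of the workaround.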

ProxyKit and Querystring for WebSocket connection

I'm using ProxyKit to act as a reverse proxy. I need to proxy HTTP and WebSocket (SignalR) traffic, so I have the below configuration. I also need to send across a query string. What do I need to add to the base configuration to include the query string? I've tried this:
var redirectTo = "https://proxied-server:5002";
var wssredirectTo = "ws://proxied-server:5002";

app.UseWebSockets();
app.UseWebSocketProxy(
    context => new Uri(wssredirectTo + context.Request.Path.ToString() + context.Request.QueryString.ToString()),
    options => options.AddXForwardedHeaders());

app.RunProxy(context =>
{
    var finalUrl = redirectTo + context.Request.Path.ToString();
    var finalContext = context.ForwardTo(finalUrl);
    finalContext.UpstreamRequest.RequestUri = new Uri(finalUrl);
    return finalContext
        .CopyXForwardedHeaders()
        .AddXForwardedHeaders()
        .Send();
});
But I still only see the root url in the proxied server kestrel log:
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 POST https://proxied-server:5002/signal/satellite/negotiate text/plain; charset=UTF-8 0
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
Request finished in 2.3528ms 200 application/json
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
Request starting HTTP/1.1 GET https://proxied-server:5002/signal/satellite
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
Request finished in 2.3958ms 400 text/plain
You should not need to set RequestUri directly [1], [2]:
app.RunProxy(context => context
    .ForwardTo(redirectTo)
    .CopyXForwardedHeaders()
    .AddXForwardedHeaders()
    .Send());
Explanation
ProxyKit already handles context.Request.QueryString in ForwardTo [3]:
var uri = new Uri(UriHelper.BuildAbsolute(
    upstreamHost.Scheme,
    upstreamHost.Host,
    upstreamHost.PathBase,
    context.Request.Path,
    context.Request.QueryString));
References
https://github.com/proxykit/ProxyKit#12-forward-http-requests
https://github.com/proxykit/ProxyKit#233-copying-x-forwarded-headers
https://github.com/proxykit/ProxyKit/blob/4468429f3910111b98d80821fd10824204573053/src/ProxyKit/HttpContextExtensions.cs#L21-L26

ASP.NET 4.5 Rest API's work in Unity Android/iOS build but fails with "Unknown error" in Unity WebGL build

I have scoured every possible forum for this and somehow have not gotten my WebGL build to consume my ASP.NET 4.5 REST APIs.
From what I can tell, it is possibly related to WebGL requiring CORS, but even after enabling this I cannot get the game to communicate with my APIs.
So either there's something wrong with the way I have implemented the global CORS settings in ASP.NET, or something else is breaking.
To be clear, these APIs run perfectly well on Android/iOS/Windows builds and even in the editor.
What I have done so far:
Installed the Microsoft CORS build as recommended by Microsoft's documentation, then added the following code to the WebAPIConfig class in Visual Studio:
public static void Register(HttpConfiguration config)
{
    config.SuppressDefaultHostAuthentication();
    config.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType));

    // Web API routes
    config.MapHttpAttributeRoutes();

    // new code
    config.EnableCors(new EnableCorsAttribute("*", "*", "*"));

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
}
This is also in my web.config:
<httpProtocol>
  <customHeaders>
    <add name="Access-Control-Allow-Origin" value="*" />
    <add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
    <add name="Access-Control-Allow-Headers" value="Origin, X-Requested-With, Content-Type, Accept" />
  </customHeaders>
</httpProtocol>
I need these settings global so I used the "*" as indicated by the documentation to include all domains, method types, and headers because I use ASP.NET token authentication for my API.
Here is a code snippet that gets the token in the Unity project (just to be clear, this works on other platforms, only throws an error in a WebGL build)
public IEnumerator login()
{
    string url = API.ROUTEPATH + API.TOKEN;
    WWWForm form = new WWWForm();
    form.AddField("grant_type", "password");
    form.AddField("username", API.APIUSERNAME);
    form.AddField("password", API.APIPASSWORD);
    UnityWebRequest uwr = UnityWebRequest.Post(url, form);
    uwr.SetRequestHeader("Content-Type", "application/json");
    yield return uwr.SendWebRequest();
    try
    {
        if (uwr.isNetworkError)
        {
            Debug.Log(uwr.error);
        }
        else
        {
            APIAuthToken returnauth = JsonUtility.FromJson<APIAuthToken>(uwr.downloadHandler.text);
            if (!string.IsNullOrEmpty(returnauth.access_token))
            {
                API.hasAuth = true;
                API.token = returnauth.access_token;
                Debug.Log(returnauth.access_token);
            }
        }
    }
    catch
    {
    }
}
uwr.error produces the following, very helpful error: "Unknown Error". So I'm not even sure whether it is CORS related; that's just my best guess based on the research I have done. Even with multiple different implementations I still sit with the same error. So if it's not a problem with the APIs or with my Unity code, please just ignore the ASP.NET code snippet.
cURL - A simple curl -I <endpoint> or curl -X OPTIONS -v <endpoint> can reveal a ton of information about what is happening related to CORS. It can allow you to set different origins, check preflight responses, and more.
"Let's say you have a backend API that uses cookies for session management. Your game works great when testing on your own domain, but breaks horribly once you host the files on Kongregate due to the fact that your API requests are now cross-domain and subject to strict CORS rules."
Is this your problem?
If things are not set up properly, cookies will probably be refused on both sides. But that's good: it means you have control over which domains your session cookies are sent to.
So you probably first need to configure the server to allow multiple origins, but make sure to validate the value against a whitelist so that you aren't just enabling your session cookies to be sent to any origin domain.
Example on Node Express with the CORS middleware (game ID 12345) and an origin whitelist below:
var express = require('express')
var cors = require('cors')
var app = express()

var whitelist = ['https://game12345.konggames.com'];
var corsOptions = {
    credentials: true,
    origin: function (origin, callback) {
        if (whitelist.indexOf(origin) !== -1) {
            callback(null, true)
        } else {
            callback(new Error('Not allowed by CORS'))
        }
    }
};

app.use(cors(corsOptions));
app.options('*', cors(corsOptions)); // Enable options for preflight

app.get('/', (req, res) => res.send('Hello World!'))
app.listen(8080, () => console.log(`Example app listening on port 8080!`))
cURL command to check the headers for an OPTIONS preflight request from an origin in the whitelist array:
curl -X OPTIONS -H"Origin: https://game12345.konggames.com" -v http://localhost:8080/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
> OPTIONS / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.58.0
> Accept: */*
> Origin: https://game12345.konggames.com
>
< HTTP/1.1 204 No Content
< X-Powered-By: Express
< Access-Control-Allow-Origin: https://game12345.konggames.com
< Vary: Origin, Access-Control-Request-Headers
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE
< Content-Length: 0
< Date: Tue, 24 Sep 2019 22:04:08 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
To instruct the client to include cookies when it makes a cross-domain request, the withCredentials flag must be set. If the preflight response did not include Access-Control-Allow-Credentials: true, or if your Access-Control-Allow-Origin is set to a wildcard (*), then the cookies will not be sent and you are likely to see errors in your browser's JavaScript console:
Access to XMLHttpRequest at 'https://api.mygamebackend.com' from origin 'https://game54321.konggames.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
Unity's UnityWebRequest and the older WWW classes use XMLHttpRequest under the hood to fetch data from remote servers. Since there is no option to set the withCredentials flag to true, we have to perform a pretty dirty hack when initializing our application in order to turn that on for the appropriate requests.
In your WebGL template or generated index.html:
<script>
  XMLHttpRequest.prototype.originalOpen = XMLHttpRequest.prototype.open;
  var newOpen = function(_, url) {
    var original = this.originalOpen.apply(this, arguments);
    if (url.indexOf('https://api.mygamebackend.com') === 0) {
      this.withCredentials = true;
    }
    return original;
  };
  XMLHttpRequest.prototype.open = newOpen;
</script>
This snippet of code overrides the open method of XMLHttpRequest so that we can conditionally set withCredentials equal to true when desired. Once this is in place, cross-origin cookies should begin working between the Kongregate-hosted iframe domain and the game's backend servers!

Getting java.net.SocketException: Connection reset, during making http4 call in Apache camel route

I am getting the below exception from the target server while migrating from the Camel http component to the http4 component.
2018-11-27 01:17:34,156 | ERROR | ernal.req.queue] | DefaultErrorHandler | 81 - org.apache.camel.camel-core - 2.16.3 | Failed delivery for (MessageId: ID:ip-172-16-11-197-60068-1543219848548-41:1:1:1:1 on ExchangeId: ID-ip-172-16-11-197-36927-1543219848013-31-14). Exhausted after delivery attempt: 1 caught: java.net.SocketException: Connection reset
HTTP4 request headers (not working flow):
POST /GenieWebService/documentgenerator.asmx HTTP/1.1
Connection: Keep-Alive
Content-Length: 28955
Content-Type: text/xml; charset=utf-8
Accept-Encoding: gzip,deflate
Authorization: Basic ZXNidXNlcjpwYXNzd29yZA==
Host: xmz.ab.a.aaa(This is not actual ip)
User-Agent: Apache-HttpClient/4.5.1 (Java/1.7.0_79)
breadcrumbId: ID-ip-abc-ad-ac-abc-36927-15432193242455-7-90
EXTR_SERVICE_URL: http://abs.ab.a.abc/GenieWebService/documentgenerator.xmzx
X-Forwarded-For: 111.93.62.106, 165.225.106.106
X-Forwarded-Port: 80
X-Forwarded-Proto: http
HTTP request headers (working flow):
POST /GenieWebService/documentgenerator.asmx HTTP/1.1
Content-Length: 66831
Content-Type: text/xml;charset=utf-8
Authorization: Basic ZXNidXNlcjpwYXNzd29yZA==
Cookie: ASP.NET_SessionId=lairgud4nkaledjpuzarfqj0
Host: abc.ab.a.abc
User-Agent: Jakarta Commons-HttpClient/3.1
breadcrumbId: ID-ip-abc-ab-a-abc-34162-1542796372800-25-80
EXTR_SERVICE_URL: http://abs.ab.a.abc/GenieWebService/documentgenerator.asmx
X-Forwarded-For: 112.196.86.34, 165.225.106.89
X-Forwarded-Port: 80
X-Forwarded-Proto: http
As per my understanding, the issue seems to be with the Cookie header: in the http4 request headers I cannot see a Cookie header, while it is present in the http headers.
Can someone please help me figure out what the issue is, and if it is the Cookie header, how I can add cookies? I am using the Spring DSL and Apache Camel version 2.16.3.
You can set a 100ms timeout on the client by specifying http4://foo?httpClient.soTimeout=100. When a timeout occurs it will probably throw an exception that you can handle like so:
onException(IOException.class).to("direct:timeouts");

from("direct:start")
    .setHeader(Exchange.HTTP_QUERY, simple("format=json&count=${in.headers.count}"))
    .to("http4://www.host.com/someapi?httpClient.soTimeout=100")
    .unmarshal().json(JsonLibrary.JACKSON, MyResponseType.class)
    .to("bean:SomeBean?method=echo");

from("direct:timeouts").to("...");
I have used httpClient.socketTimeout=180000 and am not using the bridgeEndpoint option, so Camel will use the HTTP URL that I set in the headers. Below is my route.
<route>
    <from uri="{{document-service.esb.new.document.internal.req.queue.name}}"/>
    <convertBodyTo type="javax.xml.transform.dom.DOMSource" charset="utf-8"/>
    <process ref="document-serviceHttpUrlResolver"/>
    <removeHeaders pattern="CamelHttp*"/>
    <removeHeaders pattern="JMS*"/>
    <setHeader headerName="SOAPAction">
        <xpath resultType="java.lang.String">function:properties('document-service.esb.new_genie.document.service.soap.action')</xpath>
    </setHeader>
    <setHeader headerName="CamelHttpUri">
        <xpath resultType="java.lang.String">in:header('EXTR_SERVICE_URL')</xpath>
    </setHeader>
    <!-- CamelHttpUri value will be http://xyz.xy.x.bcd/WebGenie/DocumentGenerator.svc -->
    <setHeader headerName="Content-Type">
        <xpath resultType="java.lang.String">function:properties('http.content.type.value')</xpath>
    </setHeader>
    <process ref="document-serviceSoapWrapper"/>
    <log message="*** A request message sent to New Genie service: ${body}" loggingLevel="INFO" logName="document-service"/>
    <to uri="http4://abc.ab.a.abc/GenieWebService/documentgenerator.asmx?httpClient.socketTimeout=180000"/>
    <convertBodyTo type="javax.xml.transform.dom.DOMSource" charset="utf-8"/>
    <to uri="{{document-service.esb.new.document.internal.resp.queue.name}}{{document-service.esb.internal.queue.parameters}}" pattern="InOnly"/>
</route>
