Connect to mqtt.googleapis.com:8883 via proxy and another domain - nginx

For some reason our infrastructure blocks mqtt.googleapis.com, so an nginx proxy was deployed with the following configuration:
stream {
    upstream google_mqtt {
        server mqtt.googleapis.com:8883;
    }
    server {
        listen 8883;
        proxy_pass google_mqtt;
    }
}
It also has an external IP with the domain name fake.mqtt.com.
Using the example here, I'm testing connectivity.
If the script is run against mqtt.googleapis.com:8883, everything works fine.
But if the domain is switched to fake.mqtt.com, I get an error:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'fake.mqtt.com'
The client implementation uses paho.mqtt.client.
Authentication to the MQTT broker is done with a JWT.
import datetime

import jwt


def create_jwt(project_id, private_key_file, algorithm):
    token = {
        # The time that the token was issued at
        "iat": datetime.datetime.utcnow(),
        # The time the token expires.
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=20),
        # The audience field should always be set to the GCP project id.
        "aud": project_id,
    }
    # Read the private key file.
    with open(private_key_file, "r") as f:
        private_key = f.read()
    print(
        "Creating JWT using {} from private key file {}".format(
            algorithm, private_key_file
        )
    )
    return jwt.encode(token, private_key, algorithm=algorithm)
Set JWT
client.username_pw_set(
    username='unused',
    password=create_jwt(project_id, private_key_file, algorithm))
TLS configuration:
client.tls_set(ca_certs='roots.pem', tls_version=ssl.PROTOCOL_TLSv1_2,)
Could you advise what to configure on the nginx/paho-client side, and whether this is a workable solution at all?
Or can third-party brokers connect to mqtt.googleapis.com? (From what I have read here and in other resources: no.)

You cannot just arbitrarily change the domain name if you are only stream proxying; the name the client connects with needs to match one presented in the certificate by the remote broker, or, as you have seen, it will not validate.
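To see what the client is actually being shown, you can inspect the certificate the proxy relays. This is a small diagnostic sketch in Python using the roots.pem and domain from the question; the chain is still verified, only the hostname check is skipped so the handshake completes:

import socket
import ssl

# Verify the chain against roots.pem but skip hostname checking, so we can
# see which names the relayed certificate actually covers.
ctx = ssl.create_default_context(cafile="roots.pem")
ctx.check_hostname = False

with socket.create_connection(("fake.mqtt.com", 8883)) as sock:
    with ctx.wrap_socket(sock, server_hostname="fake.mqtt.com") as tls:
        # Typically lists mqtt.googleapis.com, not fake.mqtt.com.
        print(tls.getpeercert()["subjectAltName"])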
You can force the client not to validate the server name by setting client.tls_insecure_set(True), but this is a VERY bad idea and should only be used for testing, never in production.
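For completeness, here is a minimal paho-mqtt sketch of that testing-only workaround. It reuses the create_jwt() function from the question; the project/device identifiers are placeholder assumptions:

import ssl
import paho.mqtt.client as mqtt

# Placeholder identifiers (assumptions -- substitute your own values).
project_id = "my-project"
cloud_region = "europe-west1"
registry_id = "my-registry"
device_id = "my-device"
private_key_file = "rsa_private.pem"
algorithm = "RS256"

# Google IoT Core expects the device path as the MQTT client id
# (paho-mqtt 1.x style constructor, as used by the Google sample).
client = mqtt.Client(
    client_id="projects/{}/locations/{}/registries/{}/devices/{}".format(
        project_id, cloud_region, registry_id, device_id))

# create_jwt() is the function shown in the question above.
client.username_pw_set(
    username="unused",
    password=create_jwt(project_id, private_key_file, algorithm))

client.tls_set(ca_certs="roots.pem", tls_version=ssl.PROTOCOL_TLSv1_2)
# Testing only: accept Google's certificate even though we connected to
# fake.mqtt.com. Never do this in production.
client.tls_insecure_set(True)

client.connect("fake.mqtt.com", 8883)
client.loop_start()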

Related

Disable client certificate validation in IIS 10 for an Asp.net website but allow app to request incoming client certificate

I have an Asp.net API website which does custom client certificate validation. When hosting this website on IIS 10, I get the following from failed request logs when I call my API.
A certificate chain processed, but terminated in a root certificate
which is not trusted by the trust provider.
My web.config has
<configuration>
  <system.webServer>
    <access sslFlags="Ssl, SslRequireCert" />
  </system.webServer>
</configuration>
and in applicationHost.config I have
<section name="access" overrideModeDefault="Allow" />
What am I missing here? How do I configure IIS to just pass the certificate through and not validate it?
The reason I want to do this is that this is a test environment and I want to trust all clients who call my API with their self-signed certificates. I will do the validation of the certificate internally, inside my API.
Note: I hosted the same website on Azure AppService and set "Incoming client certificates" to ON. It worked like a charm. So, what is the difference when I host it on my machine IIS ?
We use Client Certificates to validate hardware devices connecting to our API. For context, our devices are provisioned with an SSL cert at manufacture, and that cert is self signed by us. When a device out in the wild attempts to connect to our API, we handle the client certificate validation within the .NET API application itself.
This requires the following IIS SSL settings, and also a manual step to rebind the SSL binding (which we do because of a very specific technical limitation).
So firstly, within the web.config file we have this config:
<security>
  <access sslFlags="Ssl" />
</security>
If we add the SslNegotiateCert or SslRequireCert sslFlags, then IIS attempts to validate the client certificate before our application code is even called. So we set only the Ssl flag.
Secondly, in the SSL settings of the IIS site we set:
Require SSL [x]
Client Certificate:
[x] Ignore
[ ] Accept
[ ] Require
So essentially we aren't asking IIS to negotiate the client certificates on our behalf.
The final configuration change we make is to Enable "Negotiate Client Certificate" on the SSL binding. By default, when you create an SSL binding in IIS the "Negotiate Client Certificate" property is set to false.
From my understanding this means that IIS will not negotiate client certificates on the initial TLS negotiation. What would happen is when client certificates are required, a TLS renegotiation is triggered, and the server would request a client certificate from the client.
In our case, our devices pass the client certificate on the initial request and will not handle a TLS renegotiation. So, by enabling "Negotiate Client Certificate", client certificates can be passed in the initial request.
Rebinding the SSL binding takes some command-line magic to find the current binding, delete it, and re-add it, this time with "Negotiate Client Certificate" enabled.
Step 1 - Find your SSL binding:
Run the following command in a CMD terminal:
netsh http show sslcert > sslcerts.txt
This will push all details of your current SSL bindings into sslcerts.txt
The file will look like the following:
SSL Certificate bindings:
Hostname:port : yourhostname:443
Certificate Hash : your_certificate_hash
Application ID : {your_applicationID_Guid}
Certificate Store Name : My
Verify Client Certificate Revocation : Enabled
Verify Revocation Using Cached Client Certificate Only : Disabled
Usage Check : Enabled
Revocation Freshness Time : 0
URL Retrieval Timeout : 0
Ctl Identifier : (null)
Ctl Store Name : (null)
DS Mapper Usage : Disabled
Negotiate Client Certificate : Disabled
Note, your sslcerts.txt file will contain many instances of these bindings. You need to find the correct one for the application/site you are working with.
Note also the above output shows "Negotiate Client Certificate : Disabled"
Step 2 - Delete the current binding
Run the following command to delete the current binding
netsh http delete sslcert hostnameport=yourhostname:443
This will delete the SSL binding for the site.
Step 3 - Rebind the SSL with "Negotiate Client Certificate" enabled
Run the following command at the CMD prompt:
netsh http add sslcert hostnameport=yourhostname:443 certhash=your_certificate_hash appid={your_applicationID_Guid} certstorename=MY verifyclientcertrevocation=Enable VerifyRevocationWithCachedClientCertOnly=Disable UsageCheck=Enable clientcertnegotiation=Enable
Note here you are filling in the properties of the binding from the details you retrieved in sslcerts.txt, except you are setting clientcertnegotiation=Enable
Now we have an IIS application which will negotiate for a client certificate up front but will not validate it, allowing us to validate it in code.
We then use an AuthorizationFilterAttribute to grab the client certificate and validate it based on our rules.
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class ValidateDeviceClientCertificateAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        X509Certificate2 cert = actionContext.Request.GetClientCertificate();
        // Validation rules here i.e. check Hash of the signing cert, does it match your accepted value?
    }
}
In our validation we have a known Intermediate CA that we use to sign our device certificates, so we check to ensure that the client certificate was signed by that Intermediate Cert, or at least one of our device signing intermediate certificates.

BizTalk 2016: How to use HTTP Send adapter with API token

I need to make calls to a rest API service via BizTalk Send adapter. The API simply uses a token in the header for authentication/authorization. I have tested this in a C# console app using httpclient and it works fine:
string apiUrl = "https://api.site.com/endpoint/<method>?";
string dateFormat = "dateFormat = 2017-05-01T00:00:00";
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("token", "<token>");
    client.DefaultRequestHeaders.Add("Accept", "application/json");
    string finalurl = apiUrl + dateFormat;
    HttpResponseMessage resp = await client.GetAsync(finalurl);
    if (resp.IsSuccessStatusCode)
    {
        string result = await resp.Content.ReadAsStringAsync();
        var rootresult = JsonConvert.DeserializeObject<jobList>(result);
        return rootresult;
    }
    else
    {
        return null;
    }
}
However, I want to use BizTalk to make the call and handle the response.
I have tried using the wcf-http adapter, selecting 'Transport' for security (it is an https site, so security is required?), with no credential type specified, and placed the header with the token in the 'Messages' tab of the adapter configuration. This fails, though, with the exception: System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
I have tried googling for this specific scenario and cannot find a solution. I did find this article with suggestions for OAuth handling, but I'm surprised that even with BizTalk 2016 I still have to create a custom assembly for something so simple.
Does anyone know how this might be done in the wcf-http send adapter?
Yes, you have to write a custom Endpoint Behaviour and add it to the send port. In fact, with the WCF-WebHttp adapter even Basic Auth doesn't work, so I'm currently writing an Endpoint Behaviour to address this.
One of the issues with OAuth is that there isn't one standard that everyone follows; so far I've had to write two different OAuth behaviours because the services implemented things differently. One uses a secret and timestamp hashed together to get a token, and the other uses Basic Auth to get a token. Also, with one of them you could get multiple tokens using the same creds, whereas the other would expire the old token straight away.
Another thing I've had to write a custom behaviour for is which version of TLS the endpoint expects, as by default BizTalk 2013 R2 tries TLS 1.0 and will then fail if the web site does not allow it.
You can feedback to Microsoft that you wish to have this feature by voting on Add support for OAuth 2.0 / OpenID Connect authentication
Maybe someone will open source their solution. See Announcement: BizTalk Server embrace open source!
Figured it out. I should have used 'Certificate' for the client credential type.
I just had to:
Add token in the Outbound HTTP Headers box in the Messages tab and select 'Transport' security and 'Certificate' for Transport client credential type.
Downloaded the certificate from the API's website via the browser (manually) and installed it in the local server's certificate store.
I then selected that certificate and thumbprint in the corresponding fields in the adapter via the 'browse' buttons (had to scroll through the available certificates and select the API/website certificate I was trying to connect to).
I discovered this by accident when I had Fiddler running and set the adapter proxy setting to the local Fiddler address (http://localhost:8888). I realized that since Fiddler negotiates the TLS connection/certificate (I enabled TLS 1.2 in Fiddler) to the remote server, messages were able to get through, but not directly between the adapter and the remote API server (when Fiddler WASN'T running).

nginx: decrypting a query param

I have 2 servers. Server A is a Windows server running ASP.NET, and server B is a Linux server running Nginx. I need to redirect a user from Server A to Server B securely. I would like to have Server A encrypt a value like ip=132.65.78.4;user=xyz#example.com;node=abc into a query parameter of a redirect like this: https://serverb.example.com?encrypted=<encrypted value here>
Then have Server B (using a shared secret) decrypt the query param, validate the IP address the user is coming from, and then trust the values of user and node to process the request.
How can I configure nginx to do this? I can figure out the Server A part myself based on the answer. Thank you!
I would recommend making use of the "nginx lua" module, which will let you modify portions of the request with Lua code.
There are facilities in there to specifically modify the query string, so you can perform your encryption and set the "encrypted" value.
https://github.com/openresty/lua-nginx-module#ngxreqset_uri_args
In the case where you want to process request arguments, you can do this via a set_by_lua_block or set_by_lua_file
So perhaps you might do something like:
set_by_lua_block $validated {
    local enc = ngx.var.arg_encrypted
    local decrypted = decrypt(enc)
    return do_some_validation(decrypted) and "1" or "0"
}

if ($validated = "0") {
    return 403;
}
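Purely as an illustration of the shared-secret scheme itself (not part of the original answer), here is a sketch in Python using the cryptography package's Fernet as a stand-in for whatever cipher you choose; the nginx side would need an equivalent Lua implementation behind the decrypt() placeholder above:

from urllib.parse import urlencode
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # distribute to both servers out of band
f = Fernet(shared_key)

# Server A: encrypt the payload and build the redirect URL.
payload = "ip=132.65.78.4;user=xyz#example.com;node=abc"
token = f.encrypt(payload.encode()).decode()
redirect_url = "https://serverb.example.com?" + urlencode({"encrypted": token})

# Server B: decrypt the query parameter and validate the caller's IP.
decrypted = f.decrypt(token.encode()).decode()
fields = dict(part.split("=", 1) for part in decrypted.split(";"))
assert fields["ip"] == "132.65.78.4"   # compare against the actual client IP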

AspNet.Security.OpenIdConnect.Server (ASP.NET vNext) Authority Configuration in Mixed http/https Environments

I am using Visual Studio 2015 Enterprise and ASP.NET vNext Beta8 to build an endpoint that both issues and consumes JWT tokens as described in detail here. As explained in that article the endpoint uses AspNet.Security.OpenIdConnect.Server (AKA OIDC) to do the heavy lifting.
While standing this prototype up in our internal development environment we have encountered a problem using it with a load balancer. In particular, we think it has to do with the "Authority" setting on app.UseJwtBearerAuthentication and our peculiar mix of http/https. With our load balanced environment, any attempt to call a REST method using the token yields this exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
Consider the following steps to reproduce (this is for prototyping and should not be considered production worthy):
We created a beta8 prototype using OIDC as described here.
We deployed the project to 2 identically configured IIS 8.5 servers running on Server 2012 R2. The IIS servers host a beta8 site called "API" with bindings to port 80 and 443 for the host name "devapi.contoso.com" (sanitized for purposes of this post) on all available IP addresses.
Both IIS servers have a host entry that point to themselves:
127.0.0.1 devapi.contoso.com
Our network admin has bound a wildcard certificate (*.contoso.com) to our Kemp load balancer and configured the DNS entry for devapi.contoso.com to resolve to the load balancer.
Now this is important: the load balancer has also been configured to proxy https traffic to the IIS servers using http (not, repeat, not https). It has been explained to me that this is standard operating procedure for our company because they only have to install the certificate in one place. We're not sure why our network admin bound 443 in IIS, since it, in theory, never receives any traffic on that port.
We make a secure post via https to https://devapi.contoso.com/authorize/v1 to fetch a token, which works fine (the details of how to make this post are here):
{
    "sub": "todo",
    "iss": "https://devapi.contoso.com/",
    "aud": "https://devapi.contoso.com/",
    "exp": 1446158373,
    "nbf": 1446154773
}
We then use this token in another secure get via https to https://devapi.contoso.com/values/v1/5.
OpenIdConnect.OpenIdConnectConfigurationRetriever throws the exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
We think this is happening because OIDC is attempting to consult the host specified in "options.Authority", which we set at startup time to https://devapi.contoso.com/. Further we speculate that because our environment has been configured to translate https traffic to non https traffic between the load balancer and IIS something is going wrong when the framework tries to resolve https://devapi.contoso.com/. We have tried many configuration changes including even pointing the authority to non-secure http://devapi.contoso.com to no avail.
Any assistance in helping us understand this problem would be greatly appreciated.
#Pinpoint was right. This exception was caused by the OIDC configuration code path that allows IdentityModel to initiate non-HTTPS calls. In particular, the code sample we were using was sensitive to a missing trailing slash in the authority URI. Here is a code fragment that uses the Uri class to combine paths in a reliable way, regardless of whether the Authority URI has a trailing slash:
public void Configure(IApplicationBuilder app, IOptions<AppSettings> appSettings)
{
    ...

    // Add a new middleware validating access tokens issued by the OIDC server.
    app.UseJwtBearerAuthentication
    (
        options =>
        {
            options.AuthenticationScheme = JwtBearerDefaults.AuthenticationScheme;
            options.AutomaticAuthentication = false;
            options.Authority = new Uri(appSettings.Value.AuthAuthority).ToString();
            options.Audience = new Uri(appSettings.Value.AuthAuthority).ToString();
            // Allow IdentityModel to use HTTP
            options.ConfigurationManager =
                new ConfigurationManager<OpenIdConnectConfiguration>
                (
                    metadataAddress: new Uri(new Uri(options.Authority), ".well-known/openid-configuration").ToString(),
                    configRetriever: new OpenIdConnectConfigurationRetriever(),
                    docRetriever: new HttpDocumentRetriever { RequireHttps = false }
                );
        }
    );

    ...
}
In this example we're pulling in the Authority URI from config.json via "appSettings.Value.AuthAuthority" and then sanitizing/combining it using the Uri class.
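To illustrate why the trailing slash matters (a small Python analogy, not from the original answer): naive string concatenation of the authority and the metadata path produces exactly the bad host seen in the exception, while joining against a base URI that ends in a slash keeps the host intact:

from urllib.parse import urljoin

# Naive concatenation reproduces the host from the exception:
print("https://devapi.contoso.com" + ".well-known/openid-configuration")
# -> https://devapi.contoso.com.well-known/openid-configuration

# Joining against a base that ends with "/" keeps the host intact:
print(urljoin("https://devapi.contoso.com/", ".well-known/openid-configuration"))
# -> https://devapi.contoso.com/.well-known/openid-configuration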

Deny access if the client is using a different SSL certificate

I have scoured this forum the best I could but found no plausible answer, google was no help either.
I have a FLEX 3 application using AMFPHP over HTTPS (Flex RemoteObject). I would like to prevent the client from making any HTTPS requests if the browser client SSL cert is not the one provided by my server, thus making it more difficult for Charles, Burp, etc. to read the data going to the server by proxying the connection.
When someone uses one of these proxy servers there is a certificate error, since e.g. Charles provides its own cert to the browser and makes the HTTPS connection to the server as a normal client, so on the server end there is no difference.
Is there any way to only allow connections if my cert is the one being used at the client?
Using SecureSockets (I needed to upgrade to SDK 4.6) I was able to check the validity of the SSL certificate the browser was using.
The default behavior is that any incorrect cert (analogous to the browser cert warning) will cause SecureSockets to fail. This makes creating a check very easy using the sample code in the Adobe documentation:
private var secureSocket:SecureSocket = new SecureSocket();

public function SecureSocketExample()
{
    secureSocket.addEventListener( Event.CONNECT, onConnect );
    secureSocket.addEventListener( IOErrorEvent.IO_ERROR, onError );
    try
    {
        secureSocket.connect( "ip address here", 443 );
    }
    catch ( error:Error )
    {
        trace( error.toString() );
    }
}
Doc is here
Adding this to the creationComplete listener is enough to make sure the user hasn't followed an insecure cert or man in the middle. The rest of the application's communication can occur over "normal" SSL AMF channel.
One generic approach could be to use Strict Transport Security
