I have some issues with Akka HTTP configuration on the client side. I am trying to connect to a server which doesn't provide:
- a public signed certificate
- a certificate corresponding to the hostname
I don't have control over this nginx, so I cannot change the server-side configuration. I can only change the client side.
After a lot of investigation into configuring SSL, I found that I need to configure SSL options in application.conf at two different levels:
akka.ssl-config.ssl.loose.acceptAnyCertificate=true
akka.ssl-config.loose.disableHostnameVerification = true
and
ssl-config.loose.acceptAnyCertificate=true
ssl-config.loose.disableHostnameVerification = true
I have checked that the configuration is fine with
log-config-on-start = "on"
The problem is that I still get an error at the akka debug level (not very clear):
[ingestionApiClient-akka.actor.default-dispatcher-13] [akka://ingestionApiClient/user/StreamSupervisor-0/flow-216-1-unknown-operation] closing output
Looking at Wireshark, I found that it's a certificate validation problem:
TLSv1 Record Layer: Alert (Level: Fatal, Description: Certificate Unknown)
I suppose the JVM configuration is overriding everything I have done, so I also tried to follow this method to modify the JVM SSL config:
Java SSL: how to disable hostname verification
There is no problem with configuring the SSLContext and passing it to Akka HTTP, because I can set the default HttpsContext with:
val sc = SSLContext.getInstance("TLS")
// ...configuration...
val customHttpsContext = HttpsContext(sc, sslParameters = Some(params))
Http().setDefaultClientHttpsContext(customHttpsContext)
But I cannot find any way to configure the default hostname verifier. The Http class doesn't have any method like Http().setDefaultHostnameVerifier.
This is how I connect to the server:
val dataIngestFlow = Http().outgoingConnectionTls(config.httpEndpointHost, config.httpEndpointPort)
How can I achieve this? Thanks a lot for your help.
I don't know which versions of akka and akka-http you are using, but have you tried setting the configuration field akka.ssl-config.hostnameVerifierClass to your specific implementation of the HostnameVerifier interface?
The simplest verifier which accepts everything looks like this:
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

public static class AcceptAllHostNameVerifier implements HostnameVerifier {
    @Override
    public boolean verify(String s, SSLSession sslSession) {
        // Accept every hostname, regardless of the certificate.
        return true;
    }
}
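You would then point that setting at the verifier in application.conf; the fully qualified class name below is just an example, adjust it to wherever the class actually lives:
akka.ssl-config.hostnameVerifierClass = "com.example.AcceptAllHostNameVerifier"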
I also got stuck on a similar issue and was getting similar errors. With the following code I was able to get through:
import akka.http.scaladsl.Http
import com.typesafe.sslconfig.akka.AkkaSSLConfig
import com.typesafe.sslconfig.ssl.{TrustManagerConfig, TrustStoreConfig}

// Trust the server's certificate, loaded from a PEM file.
val trustStoreConfig = TrustStoreConfig(None, Some("/etc/Project/keystore/my.cer")).withStoreType("PEM")
val trustManagerConfig = TrustManagerConfig().withTrustStoreConfigs(List(trustStoreConfig))

// Loosen certificate and hostname checks (an implicit ActorSystem must be in scope).
val badSslConfig = AkkaSSLConfig().mapSettings(s => s.withLoose(s.loose
  .withAcceptAnyCertificate(true)
  .withDisableHostnameVerification(true)
).withTrustManagerConfig(trustManagerConfig))
val badCtx = Http().createClientHttpsContext(badSslConfig)
Http().superPool[RequestTracker](badCtx)(httpMat)
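If you only need individual requests rather than a pooled flow, the same context can also be passed to singleRequest. This is just a sketch: the URL is a placeholder and an implicit ActorSystem (and, on older Akka HTTP versions, a materializer) is assumed to be in scope.
import akka.http.scaladsl.model.HttpRequest

// Reuse the loosened HTTPS context for a single request (placeholder URL).
val responseFuture = Http().singleRequest(
  HttpRequest(uri = "https://self-signed.example.com/api"),
  connectionContext = badCtx
)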
For some reason our infra blocks mqtt.googleapis.com. That's why an nginx proxy was deployed with the following configuration:
stream {
    upstream google_mqtt {
        server mqtt.googleapis.com:8883;
    }

    server {
        listen 8883;
        proxy_pass google_mqtt;
    }
}
It also has an external IP with the domain name fake.mqtt.com.
Using the example here, I'm testing connectivity.
If the script runs against mqtt.googleapis.com:8883, everything works fine.
But if the domain is switched to fake.mqtt.com, I get an error:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'fake.mqtt.com'
The client implementation uses paho.mqtt.client.
Auth to the MQTT broker is done with a JWT.
import datetime

import jwt  # PyJWT

def create_jwt(project_id, private_key_file, algorithm):
    token = {
        # The time that the token was issued at.
        "iat": datetime.datetime.utcnow(),
        # The time the token expires.
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=20),
        # The audience field should always be set to the GCP project id.
        "aud": project_id,
    }
    # Read the private key file.
    with open(private_key_file, "r") as f:
        private_key = f.read()
    print(
        "Creating JWT using {} from private key file {}".format(
            algorithm, private_key_file
        )
    )
    return jwt.encode(token, private_key, algorithm=algorithm)
Set the JWT:
client.username_pw_set(
    username='unused',
    password=create_jwt(project_id, private_key_file, algorithm))
TLS configuration:
client.tls_set(ca_certs='roots.pem', tls_version=ssl.PROTOCOL_TLSv1_2,)
Could you advise what to configure on the nginx/paho-client side, and whether this is a workable solution at all?
Or maybe third-party brokers can connect to mqtt.googleapis.com? (From the information I read here and in other resources: no.)
You cannot just arbitrarily change the domain name if you are only stream proxying; it needs to match the one presented in the certificate by the remote broker, or, as you have seen, it will not validate.
You can force the client not to validate the server name by setting client.tls_insecure_set(True), but this is a VERY bad idea and should only be used for testing, never in production.
Is there any way to change an HTTP server to HTTPS using the http4s library? (https://http4s.org/)
I found myself facing this same issue, but I managed to solve it. Here's the thing:
You need to look at the moment when you build your server, presumably with BlazeServerBuilder.
BlazeServerBuilder has the method "withSslContext(sslContext: SSLContext)" to enable SSL. Thus, all you need to do is create an SSLContext object and pass it to the server builder.
Remember that you will probably have to store your SSL certificate in a keystore using Java's keytool utility before using it.
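For example, if your certificate and private key come as a PKCS12 bundle, an import along these lines puts it into a JKS keystore (file names and passwords below are placeholders):
keytool -importkeystore \
  -srckeystore certificate.p12 -srcstoretype PKCS12 -srcstorepass changeit \
  -destkeystore keystore.jks -deststoretype JKS -deststorepass changeit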
SSL context and SSL certificate
How to create an SSL context with an SSL certificate is another question, but here is an interesting post that covers the process of getting a free certificate from Let's Encrypt, storing it in a keystore and using it from a Java application: Using Let's Encrypt certificates in Java applications - Ken Coenen — Ordina JWorks Tech Blog
And here's the code I used for creating an SSLContext in Scala:
import java.io.FileInputStream
import java.security.{KeyStore, SecureRandom}
import javax.net.ssl.{KeyManagerFactory, SSLContext, TrustManagerFactory}

val keyStorePassword: String = your_keystore_password
val keyManagerPassword: String = your_certificate_password
val keyStorePath: String = your_keystore_location

// Load the keystore that holds the certificate.
val keyStore = KeyStore.getInstance(KeyStore.getDefaultType)
val in = new FileInputStream(keyStorePath)
try keyStore.load(in, keyStorePassword.toCharArray) finally in.close()

// Key managers present our certificate; trust managers validate the peer's.
val keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
keyManagerFactory.init(keyStore, keyStorePassword.toCharArray)
val trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
trustManagerFactory.init(keyStore)

val sslContext = SSLContext.getInstance("TLS")
sslContext.init(keyManagerFactory.getKeyManagers, trustManagerFactory.getTrustManagers, new SecureRandom())
sslContext
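To tie it together, here is a minimal sketch of passing that context to the builder. It assumes a recent http4s version where BlazeServerBuilder lives in org.http4s.blaze.server (older versions use org.http4s.server.blaze and take an ExecutionContext); the route is just a placeholder, and the sslContext stub stands for the value built with the code above.
import cats.effect.{ExitCode, IO, IOApp}
import javax.net.ssl.SSLContext
import org.http4s.HttpRoutes
import org.http4s.blaze.server.BlazeServerBuilder
import org.http4s.dsl.io._
import org.http4s.implicits._

object TlsServer extends IOApp {
  // Placeholder route so the sketch is self-contained.
  private val routes = HttpRoutes.of[IO] { case GET -> Root / "ping" => Ok("pong") }

  // Build this with the SSLContext code shown above.
  private def sslContext: SSLContext = ???

  def run(args: List[String]): IO[ExitCode] =
    BlazeServerBuilder[IO]
      .bindHttp(8443, "0.0.0.0")
      .withSslContext(sslContext)     // enables HTTPS on this server
      .withHttpApp(routes.orNotFound)
      .serve
      .compile
      .drain
      .as(ExitCode.Success)
}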
I decided to build a web service (app) for Apache Spark with Apache Livy.
Livy server is up and running on localhost port 8998 according to Livy configuration defaults.
My test program is a sample application in Apache Livy documentation: https://livy.incubator.apache.org/docs/latest/programmatic-api.html
While creating a LivyClient with the LivyClientBuilder class,
client = new LivyClientBuilder()
    .setURI(new URI("http", "user:info", "localhost", 8998, "", "", ""))
    .build();
I got "URI is not supported by any registered client factories" exception:
Exception in thread "main" java.lang.IllegalArgumentException: URI 'http://%5Bredacted%5D#localhost:8998?#' is not supported by any registered client factories.
at org.apache.livy.LivyClientBuilder.build(LivyClientBuilder.java:155)
at Client.<init>(Client.java:17)
at Client.main(Client.java:25)
I found out the client instance stays null in the LivyClientBuilder class:
client = factory.createClient(uri, this.config);
factory is an instance of the LivyClientFactory interface.
The only class which implements the interface is RSCClientFactory.
In RSCClientFactory we have this piece of code:
if (!"rsc".equals(uri.getScheme())) {
return null;
}
I've tried "rsc" instead of "http", this is the error:
2018-09-15 11:32:55 ERROR RSCClient:340 - RPC error.
java.util.concurrent.ExecutionException: javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
javax.security.sasl.SaslException: Client closed before SASL negotiation finished.
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41)
at org.apache.livy.rsc.rpc.Rpc$SaslClientHandler.dispose(Rpc.java:419)
at org.apache.livy.rsc.JobHandleImpl.get(JobHandleImpl.java:60)
at org.apache.livy.rsc.rpc.SaslHandler.channelInactive(SaslHandler.java:92)
at Client.main(Client.java:39)
Apache Livy is running on http://localhost:8998, so I think we need to submit our jar file to this address, but I don't understand what "rsc" is for there.
I would appreciate it if anyone could guide me through these problems.
You just need to pass your URL as a single string to the URI constructor:
LivyClient client = new LivyClientBuilder()
    .setURI(new URI("http://localhost:8998"))
    .build();
After that you can add your *.jar file with:
client.addJar("file://...yourPathToJarHere.../*.jar");
or
client.uploadJar(new File("...."));
Which one you use depends on your cluster configuration. You can find the full Java API description here: https://livy.incubator.apache.org/docs/latest/api/java/index.html
The only class which implements the interface is RSCClientFactory.
Just add livy-client-http to your classpath (see the dependency snippet below).
https://github.com/apache/incubator-livy/blob/412ccc8fcf96854fedbe76af8e5a6fec2c542d25/client-http/src/main/java/org/apache/livy/client/http/HttpClientFactory.java#L29
https://github.com/apache/incubator-livy/blob/56c76bc2d4563593edce062a563603fe63e5a431/examples/src/main/java/org/apache/livy/examples/PiApp.java#L79
Docs: https://livy.incubator.apache.org/docs/latest/programmatic-api.html
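For reference, the Maven coordinates look roughly like this (the version is only an example, use the one matching your Livy server):
<dependency>
  <groupId>org.apache.livy</groupId>
  <artifactId>livy-client-http</artifactId>
  <version>0.7.1-incubating</version>
</dependency>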
I am using Visual Studio 2015 Enterprise and ASP.NET vNext Beta8 to build an endpoint that both issues and consumes JWT tokens as described in detail here. As explained in that article the endpoint uses AspNet.Security.OpenIdConnect.Server (AKA OIDC) to do the heavy lifting.
While standing this prototype up in our internal development environment we have encountered a problem using it with a load balancer. In particular, we think it has to do with the "Authority" setting on app.UseJwtBearerAuthentication and our peculiar mix of http/https. With our load balanced environment, any attempt to call a REST method using the token yields this exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
Consider the following steps to reproduce (this is for prototyping and should not be considered production worthy):
We created a beta8 prototype using OIDC as described here.
We deployed the project to 2 identically configured IIS 8.5 servers running on Server 2012 R2. The IIS servers host a beta8 site called "API" with bindings to port 80 and 443 for the host name "devapi.contoso.com" (sanitized for purposes of this post) on all available IP addresses.
Both IIS servers have a host entry that point to themselves:
127.0.0.1 devapi.contoso.com
Our network admin has bound a wildcard certificate (*.contoso.com) to our Kemp load balancer and configured the DNS entry for https://devapi.contoso.com to resolve to the load balancer.
Now, this is important: the load balancer has also been configured to proxy https traffic to the IIS servers over http (not, repeat, not over https). It has been explained to me that this is standard operating procedure for our company because they only have to install the certificate in one place. We're not sure why our network admin bound 443 in IIS since it, in theory, never receives any traffic on this port.
We make a secure post via https to https://devapi.contoso.com/authorize/v1 to fetch a token, which works fine (the details of how to make this post are here):
{
    "sub": "todo",
    "iss": "https://devapi.contoso.com/",
    "aud": "https://devapi.contoso.com/",
    "exp": 1446158373,
    "nbf": 1446154773
}
We then use this token in another secure get via https to https://devapi.contoso.com/values/v1/5.
OpenIdConnect.OpenIdConnectConfigurationRetriever throws the exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
We think this is happening because OIDC is attempting to consult the host specified in "options.Authority", which we set at startup time to https://devapi.contoso.com/. Further, we speculate that, because our environment has been configured to translate https traffic to non-https traffic between the load balancer and IIS, something is going wrong when the framework tries to resolve https://devapi.contoso.com/. We have tried many configuration changes, including even pointing the authority to non-secure http://devapi.contoso.com, to no avail.
Any assistance in helping us understand this problem would be greatly appreciated.
@Pinpoint was right. This exception was caused by the OIDC configuration code path that allows IdentityModel to initiate non-HTTPS calls. In particular, the code sample we were using was sensitive to a missing trailing slash in the authority URI. Here is a code fragment that uses the Uri class to combine paths in a reliable way, regardless of whether the Authority URI has a trailing slash:
public void Configure(IApplicationBuilder app, IOptions<AppSettings> appSettings)
{
    // ...

    // Add a new middleware validating access tokens issued by the OIDC server.
    app.UseJwtBearerAuthentication(options =>
    {
        options.AuthenticationScheme = JwtBearerDefaults.AuthenticationScheme;
        options.AutomaticAuthentication = false;
        options.Authority = new Uri(appSettings.Value.AuthAuthority).ToString();
        options.Audience = new Uri(appSettings.Value.AuthAuthority).ToString();

        // Allow IdentityModel to use HTTP
        options.ConfigurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            metadataAddress: new Uri(new Uri(options.Authority), ".well-known/openid-configuration").ToString(),
            configRetriever: new OpenIdConnectConfigurationRetriever(),
            docRetriever: new HttpDocumentRetriever { RequireHttps = false }
        );
    });

    // ...
}
In this example we're pulling in the Authority URI from config.json via "appSettings.Value.AuthAuthority" and then sanitizing/combining it using the Uri class.
Hi, I'm running into an issue where Symfony2 doesn't recognize the load balancer headers from Amazon AWS, which are needed to determine whether a request is SSL or not when using the requires_channel: https security configuration.
By default Symfony2 $request->isSecure() looks for "X_FORWARDED_PROTO" but there's apparently no standard for this, and Amazon AWS load balancers use "HTTP_X_FORWARDED_PROTO".
I see the cookbook article about setting trusted proxies in the config, but that's geared around whitelisting specific IP addresses and won't work with AWS, which generates dynamic IPs. Another option, setting the framework config to include trust_proxy_headers: true, is deprecated. This breaks my app by forcing endless redirects on the pages that require SSL only.
You can now change the headers using setTrustedHeaderName(). This method allows you to change the four headers used throughout the Request class.
const HEADER_CLIENT_IP = 'client_ip'; // defaults 'X_FORWARDED_FOR'
const HEADER_CLIENT_HOST = 'client_host'; // defaults 'X_FORWARDED_HOST'
const HEADER_CLIENT_PROTO = 'client_proto'; // defaults 'X_FORWARDED_PROTO'
const HEADER_CLIENT_PORT = 'client_port'; // defaults 'X_FORWARDED_PORT'
The above, taken from the Request class, indicates the keys available for use with the aforementioned method.
// $request is instance of HttpFoundation\Request;
$request->setTrustedHeaderName('client_proto', 'HTTP_X_FORWARDED_PROTO');
That said, at the time of writing, using "symfony/http-foundation": "2.5.*", the code below correctly determines whether or not the request is secure whilst behind an AWS Load Balancer.
// All IPs (*)
// $proxies = [$request->getClientIp()];
// Array of CIDR pools from load balancer
// EC2 -> Network & Security -> Load Balancers
// -> X -> Instances (tab) -> Availability Zones
// -> Subnet (column)
$proxies = ['172.x.x.0/20'];
$request->setTrustedProxies($proxies);
var_dump($request->isSecure()); // bool(true)
You're right, the X_FORWARDED_PROTO header is hardcoded into HttpFoundation\Request while, as far as I know, overriding the request class in Symfony is currently not possible.
There has been a discussion/RFC about this topic here and there is an open pull-request that solves this issue using a RequestFactory.