Euca 5.0 Enable SSL with Combined CLC and Cluster Controller?

I have completed an automated ansible install and have most of the wrinkles worked out.
All of my services except the Nodes are running on a single box over plain HTTP. Although I specified port 443 in my inventory, I now see that this does not imply an HTTPS configuration, so I have non-secure API endpoints listening on 443.
Is there any way around the requirement of operating the CLC and Cluster Controller on different hardware as described in the SSL how-to: https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/bps/configuring_ssl/ ?
I've read that how-to and can only guess that installing certs on the CLC interferes with the Cluster Controller keys, but I don't fully grasp it. Am I wasting my time trying to find a workaround, or can I keep these services on the same box and still achieve SSL?

When you deploy Eucalyptus using the Ansible playbook, a script will be available:
# /usr/local/bin/eucalyptus-cloud-https-import --help
Usage:
eucalyptus-cloud-https-import [--alias ALIAS] [--key FILE] [--certs FILE]
which can be used to import a key and certificate chain from PEM files.
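For example, a typical import might look like this (the alias and PEM paths are illustrative; substitute your own):
# /usr/local/bin/eucalyptus-cloud-https-import \
    --alias cloud-https \
    --key /etc/pki/tls/private/cloud.key.pem \
    --certs /etc/pki/tls/certs/cloud-chain.pem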
Alternatively you can follow the manual steps from the documentation that you referenced.
It is fine to use HTTPS with all components on a single host; the documentation is out of date.
Eucalyptus will detect if an HTTP(S) connection is using TLS (SSL) and use the configured certificate when appropriate.
It is recommended to use the Ansible playbook's certbot / Let's Encrypt integration for the HTTPS certificate when possible.
When manually provisioning certificates, wildcards can be used (*.DOMAIN, *.s3.DOMAIN) so that all services and S3 buckets are covered. If a wildcard certificate is not possible, the certificate should include the service endpoint names (autoscaling, bootstrap, cloudformation, ec2, elasticloadbalancing, iam, monitoring, properties, route53, s3, sqs, sts, swf).
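If you are generating the certificate request yourself, a sketch of a CSR covering both wildcards (cloud.example.com is a placeholder domain; -addext requires OpenSSL 1.1.1 or later):
# openssl req -new -newkey rsa:2048 -nodes \
    -keyout cloud.key.pem -out cloud.csr.pem \
    -subj "/CN=*.cloud.example.com" \
    -addext "subjectAltName=DNS:*.cloud.example.com,DNS:*.s3.cloud.example.com"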

Use ruby grpc client with self signed certificate

Trying to use a Ruby gRPC client to connect to a Go gRPC server. The server uses TLS credentials with self-signed certificates. I have trusted the certificate on my system (Ubuntu 20.04) but still get Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
The only way this works is by manually setting GRPC::Core::ChannelCredentials.new(File.read(cert_path)) when initializing the client. Another workaround is setting :this_channel_is_insecure, but this only works if I remove TLS credentials from the server altogether (which I do not want).
Is there any way to get the GRPC client to work with the system certs?
I assume the gem is using roots.pem; trying to override that using GRPC::Core::ChannelCredentials.set_default_roots_pem results in Could not load any root certificate.
Also, I have not found any parameter that would let me skip certificate verification.
The default root location can be overridden using the GRPC_DEFAULT_SSL_ROOTS_FILE_PATH environment variable pointing to a file on the file system containing the roots. Setting GRPC::Core::ChannelCredentials.new(File.read(cert_path)) also seems fine to me.
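For example, a minimal sketch of both approaches (the CA bundle path and the Echo service stub are placeholders for your own setup):
# shell: point gRPC at a CA bundle for the whole process
#   GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/etc/ssl/certs/my_ca.pem ruby client.rb

# Ruby: pass the root certificate explicitly when creating the channel
require 'grpc'
creds = GRPC::Core::ChannelCredentials.new(File.read('/etc/ssl/certs/my_ca.pem'))
stub = Echo::Stub.new('my-go-server.example.com:443', creds)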
In Ruby, most likely the feature to skip cert verification in TLS is not supported. We have the corresponding feature supported in the underlying core, but it might not be plumbed through to Ruby yet (at least not that I am aware of). If you need that, feel free to open a feature request on the gRPC GitHub page.
Thank you!

How to bind Artifactory to localhost only?

According to Artifactory documentation,
For best security, when using Artifactory behind a reverse proxy, it must be co-located on the same machine as the web server, and Artifactory should be explicitly and exclusively bound to localhost.
How can I configure Artifactory so that it is bound to localhost only?
As of Artifactory version 7.12.x, there are two endpoints exposed for accessing the application:
Port 8082 - all the Artifactory services (UI + API) via the JFrog router
Port 8081 - direct to the Artifactory service API running on Tomcat (for better performance)
The JFrog Router does not support specific binding configuration today.
Tomcat can be controlled by setting a custom address="127.0.0.1" on the relevant connector.
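For example, in the bundled Tomcat's server.xml (the exact path varies by version and install layout), the change might look like this; keep the other attributes of your existing connector unchanged:
<!-- bind the Artifactory service connector to loopback only -->
<Connector port="8081" protocol="HTTP/1.1" address="127.0.0.1"/>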
Your best bet would be to simply close all ports on the server running your Artifactory and allow only entry to the web server's port. This is best practice anyway for security-aware systems.
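As a sketch with iptables (adjust to your firewall tooling and existing rule set):
# keep loopback open so the co-located reverse proxy can still reach Artifactory
iptables -A INPUT -i lo -j ACCEPT
# allow the web server's HTTPS port, block direct access to the router/Tomcat ports
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 8081 -j DROP
iptables -A INPUT -p tcp --dport 8082 -j DROP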
IMPORTANT:
If you are using other JFrog products such as JFrog Xray or JFrog Pipelines, note that they rely on direct access to the Artifactory router, so your security rules should take that into consideration.
You can find a map of all JFrog platform ports on the official Wiki page.

Is it possible to setup multiple SSL on one Jelastic app?

I want to ask whether a configuration with multiple SSL certificates on one IP is possible in Jelastic with the Nginx load balancer.
The usage is for a proxy server that will receive requests from multiple custom domains.
For example:
example-proxy.com points to a Public IP address assigned to a Jelastic Jetty Application.
Now custom domains point to the Jetty application:
custom-domain-example.com CNAME www points to example-proxy.com etc.
custom-domain-example-N.org CNAME www points to example-proxy.com etc.
Is it possible to have this kind of configuration with Jelastic?
Is this possible to be done using the existing Jelastic API? Right now what I see in the API docs is BindSSL, but it seems it can only bind one certificate; is this correct?
Yes it's possible, but you need to configure it manually (just in nginx configs) instead of using the Jelastic dashboard/API SSL feature.
The other point to remember is that because there's 1 IP per container, multiple SSL certificates can only be served via SNI. That may have implications for you depending on what browsers your users use: in most cases it's ok now (old mobile OS and Windows XP are the primary exceptions)
The BindSSL API method allows you to automatically configure one SSL certificate on the externally facing node of your environment (Nginx Load Balancer in your case). If you attempt to BindSSL multiple times you just replace the existing certificate (not add multiple certificates).
Basically this functionality was built before SNI was widely accepted, so it was assumed 1 SSL cert. per 1 environment. You can read more about SNI to make an informed decision about whether it will suit your needs here: http://blog.layershift.com/sni-ssl-production-ready/
An alternative for your needs would be to purchase a multi-domain SSL certificate (SAN cert). This lets you contain multiple hostnames within 1 certificate. Since you mentioned that you're our customer, you can contact our SSL team for details/pricing for this option.
If you still want to use multiple SSL certs + serve them via SNI, you will probably need to use the Read and Write API methods to save the SSL certificate parts and config. file(s) on your Nginx node.
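For example, the uploaded certificate parts could be written into one server block per custom domain; nginx then selects the matching certificate via SNI (paths and the upstream address are illustrative):
server {
    listen 443 ssl;
    server_name custom-domain-example.com;
    ssl_certificate     /var/lib/nginx/ssl/custom-domain-example.com.chain.pem;
    ssl_certificate_key /var/lib/nginx/ssl/custom-domain-example.com.key;
    location / {
        proxy_pass http://127.0.0.1:8080;  # your Jetty application
    }
}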
Don't forget to restart the nginx service (you can use RestartNodeById for that) after any config. changes.
EDIT: As you mentioned that your end users will have control over this process, you probably prefer to use reload instead of restart (see http://nginx.org/en/docs/beginners_guide.html#control).
You can invoke that via Jelastic API using ExecCmdById, with commandList=[{"command": "sudo service nginx reload"}]
But take care if you're allowing end users to upload their own certificates via your application - you need to ensure that what they upload is really a certificate and nothing malicious...

IMAP Proxy that can connect to multiple IMAP servers

What I am trying to achieve is to have a central webmail client that I can use in an ISP environment, with the capability to connect to multiple mail servers.
I have now been looking at Perdition, NGINX and Dovecot.
But most of the articles have not been updated for a very long time.
The one that I am really looking at is the NGINX IMAP proxy, as it can almost do everything I require.
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly, the issue I have is that you can no longer compile NGINX from source with those flags.
And secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl, does not give detailed information about the updated project.
So all I am trying to achieve is to have one webmail server that can connect to any one of my mail servers, where the mailbox location resides in a database. But the location is a hostname and not an IP.
You can tell Nginx to do auth_http with any http URL you set up.
You don't need an embedded perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header-based protocol Nginx uses.
You can implement the protocol described above in any language - a CGI script with Apache if you like.
You do the auth and database query and return the appropriate backend servers in this script.
(Personally, I use a python + WSGI server setup.)
Say you set up your script on Apache at http://localhost:9000/cgi-bin/nginx_auth.py
In your Nginx config, you use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py
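As an illustration, here is a minimal WSGI sketch of that auth protocol (the database lookup is a stub, and a real deployment must also verify Auth-Pass; nginx requires an IP address in Auth-Server, which is why the stored hostname is resolved first):

# minimal sketch of an auth_http endpoint for nginx's mail proxy
import socket
from wsgiref.simple_server import make_server

def lookup_backend(user):
    # stand-in for the real database query; returns the backend hostname
    return {'alice@example.com': 'mail1.example.com'}.get(user)

def app(environ, start_response):
    user = environ.get('HTTP_AUTH_USER', '')  # nginx sends Auth-User / Auth-Pass headers
    host = lookup_backend(user)
    if host is None:
        start_response('200 OK', [('Auth-Status', 'Invalid login or password')])
    else:
        start_response('200 OK', [
            ('Auth-Status', 'OK'),
            ('Auth-Server', socket.gethostbyname(host)),  # must be an IP, not a hostname
            ('Auth-Port', '143'),                         # IMAP port on the backend
        ])
    return [b'']

if __name__ == '__main__':
    make_server('127.0.0.1', 9000, app).serve_forever()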

Editing files on Google Cloud Engine VM

I have recently set up a VM on Google Cloud to develop and host my web site/application. The setup went fine, and I even have the gcloud SDK up and running. I also have Apache installed and configured. My question is: how do I set up my editing environment (PhpStorm) and upload my files? They seem to have the ports for FTP and SFTP blocked.
FTP uses a clear-text protocol and is thus not recommended. To use SFTP:
Make sure you can ssh to your instance: gcutil --project=<project> ssh <instance>. This does two things: (a) makes sure that port 22 is open on your VM, and (b) propagates your public key to the instance, if it's not already there.
Configure PhpStorm to use the key pair authentication mechanism with the private key ~/.ssh/google_compute_engine to log in to the instance.
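To sanity-check key-based SFTP access outside the IDE first (placeholders in angle brackets):
sftp -i ~/.ssh/google_compute_engine <username>@<instance-external-ip>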
That's it.
