We have a requirement to use two DNS names (each with its own .pem and .key) for a single JFrog Artifactory instance.
I tried following https://www.jfrog.com/confluence/display/JFROG/HTTP+Settings, but that setting allows only one public server name. Any suggestions on how to have two public server names, each with its own .pem and .key, for a single JFrog Artifactory instance?
To support multiple DNS names for the JFrog Platform behind a reverse proxy, add the X-JFrog-Override-Base-Url header. It forwards the base URL into the JFrog Platform, and Artifactory will use it when building the response.
For example, with Nginx you can add:
proxy_set_header X-JFrog-Override-Base-Url https://$host;
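A fuller sketch for the two-hostname case: one server block per DNS name, each with its own certificate and key, both proxying to the same Artifactory instance. The hostnames, certificate paths and the upstream address (artifactory:8082) below are placeholders, not values from your setup.

server {
    listen 443 ssl;
    server_name repo-a.example.com;

    ssl_certificate     /etc/nginx/ssl/repo-a.example.com.pem;
    ssl_certificate_key /etc/nginx/ssl/repo-a.example.com.key;

    location / {
        proxy_set_header Host $host;
        # Tell the JFrog platform which base URL the client actually used.
        proxy_set_header X-JFrog-Override-Base-Url https://$host;
        proxy_pass http://artifactory:8082;
    }
}

server {
    listen 443 ssl;
    server_name repo-b.example.com;

    ssl_certificate     /etc/nginx/ssl/repo-b.example.com.pem;
    ssl_certificate_key /etc/nginx/ssl/repo-b.example.com.key;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-JFrog-Override-Base-Url https://$host;
        proxy_pass http://artifactory:8082;
    }
}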
In plain nginx, I can use the nginx geo module to set a variable based on the remote address, and then use that variable in the certificate paths to serve a different SSL certificate and key to different remote networks accessing the server. This is necessary because the different network environments have different CAs.
How can I reproduce this behavior with a Kubernetes nginx ingress, or even Istio?
You can customize the generated config both for the base and for each Ingress. I'm not familiar with the exact config you are describing, but some mix of the various *-snippet ConfigMap options (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#server-snippet) or a custom template (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/) should cover it.
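For reference, a sketch of the plain-nginx configuration the snippets would need to reproduce (the CIDR, hostname and paths are placeholders). With ingress-nginx, the http-level geo block would go into the http-snippet ConfigMap key, while the server-level certificate lines would go into a server-snippet annotation or a custom template, with the certificate files mounted into the controller pod. Note that variables in ssl_certificate require nginx 1.15.9 or later, and the controller's own certificate handling may take precedence over snippet directives, in which case a custom template is the fallback.

# http level: map the client address to a certificate name
geo $client_ca {
    default       public;
    10.0.0.0/8    internal;
}

server {
    listen 443 ssl;
    server_name example.com;

    # Pick the certificate pair based on the client's network.
    ssl_certificate     /etc/nginx/ssl/$client_ca.pem;
    ssl_certificate_key /etc/nginx/ssl/$client_ca.key;
}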
I have completed an automated ansible install and have most of the wrinkles worked out.
All of my services except the Nodes are running on a single box over plain HTTP. Although I specified 443 in my inventory, I now see that this does not imply an HTTPS configuration, so I have non-secure API endpoints listening on 443.
Is there any way around the requirement to run the CLC and the Cluster Controller on different hardware, as described in the SSL how-to (https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/bps/configuring_ssl/)?
I've read that how-to and can only guess that installing certs on the CLC messes up the Cluster Controller keys, but I don't fully grasp it. Am I wasting my time trying to find a workaround, or can I keep these services on the same box and still achieve SSL?
When you deploy Eucalyptus using the Ansible playbook, a script will be available:
# /usr/local/bin/eucalyptus-cloud-https-import --help
Usage:
eucalyptus-cloud-https-import [--alias ALIAS] [--key FILE] [--certs FILE]
which can be used to import a key and certificate chain from PEM files.
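For example, using the flags from the usage above (the alias value and the file paths here are just placeholders for your own key and certificate chain):

# /usr/local/bin/eucalyptus-cloud-https-import --alias cloud --key /root/cloud.key --certs /root/cloud-chain.pem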
Alternatively you can follow the manual steps from the documentation that you referenced.
It is fine to use HTTPS with all components on a single host; the documentation is out of date.
Eucalyptus will detect if an HTTP(S) connection is using TLS (SSL) and use the configured certificate when appropriate.
It is recommended to use the Ansible playbook's certbot / Let's Encrypt integration for the HTTPS certificate when possible.
When manually provisioning certificates, wildcards can be used (*.DOMAIN, *.s3.DOMAIN) so that all services and S3 buckets are covered. If a wildcard certificate is not possible, then the certificate should include the service endpoint names (autoscaling, bootstrap, cloudformation, ec2, elasticloadbalancing, iam, monitoring, properties, route53, s3, sqs, sts, swf).
I have a situation where I have to configure different certificates for two different applications on the same nginx server. Requests for both applications are proxied from nginx to their respective backends.
I have to configure this for the same server name and the same port.
Any suggestions would be appreciated.
Thanks
You can't do this with stock NGINX, because ssl_certificate cannot be set per-location.
You can get dynamic certificate selection by using the Lua nginx module, in particular ssl_certificate_by_lua_block, writing logic that loads a different SSL cert per connection. Note that the certificate is chosen during the TLS handshake, before the request URI is known, so the selection can only be based on connection-level information such as the SNI server name or the client address.
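A minimal sketch, assuming OpenResty (or nginx built with lua-nginx-module and lua-resty-core) and hypothetical certificate files under /etc/nginx/certs/; here the choice is made on the client address, similar to what the geo module would give you. In production you would cache the PEM data rather than read it from disk on every handshake.

server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder pair; it is replaced in ssl_certificate_by_lua_block below.
    ssl_certificate     /etc/nginx/certs/default.pem;
    ssl_certificate_key /etc/nginx/certs/default.key;

    ssl_certificate_by_lua_block {
        local ssl = require "ngx.ssl"

        -- Runs during the TLS handshake: only connection-level data is available.
        local addr, addr_type = ssl.raw_client_addr()
        local name = "app1"
        if addr_type == "inet" and string.byte(addr, 1) == 10 then
            -- e.g. clients from 10.0.0.0/8 get the other certificate
            name = "app2"
        end

        ssl.clear_certs()

        local f = assert(io.open("/etc/nginx/certs/" .. name .. ".pem"))
        local cert_pem = f:read("*a"); f:close()
        f = assert(io.open("/etc/nginx/certs/" .. name .. ".key"))
        local key_pem = f:read("*a"); f:close()

        assert(ssl.set_der_cert(assert(ssl.cert_pem_to_der(cert_pem))))
        assert(ssl.set_der_priv_key(assert(ssl.priv_key_pem_to_der(key_pem))))
    }

    location /app1/ { proxy_pass http://127.0.0.1:8081; }
    location /app2/ { proxy_pass http://127.0.0.1:8082; }
}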
I'm trying to set up a private Docker registry to upload my stuff but I'm stuck. The docker-registry instance is running on port 5000 and I've set up nginx in front of it with a proxy_pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I replace localhost with my server's IP address in the nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when it is nginx that proxies to localhost?
Server is on EC2 if it helps.
I'm not sure about the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect the data flows for Docker. The Docker registry is actually split into two parts, the index and the registry. The client contacts the index to handle metadata, and is then forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered-down index server. As a consequence, you might want to figure out what registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
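For context, a sketch of where those two directives would sit in the proxy configuration described in the question (the server_name is a placeholder):

server {
    listen 80;
    server_name registry.example.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_set_header Host $http_host;

        # Hide the endpoints the registry advertises about itself (localhost:5000)
        # and advertise the host the client actually connected to instead.
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}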
I think the problem you face is that the docker-registry advertises so-called endpoints through an X-Docker-Endpoints header early in the dialog between itself and the Docker client, and that the Docker client will then use those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with Nginx on the (public) port 80, then switches to the advertised endpoints, which are probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises endpoints as your remote host, even if it listens on localhost:5000.
On my nginx server, I am going to use more than one of the GeoIP databases (one for country+city and another one for ISP or organization). I could not find a module for nginx and/or PECL that can load more than one of these databases.
The database provider is not going to publish a single DB with all the data in one file, so it looks like I am stuck.
http://wiki.processmaker.com/index.php/Nginx_and_PHP-FPM_Installation seems to work with one DB only.
It's possible with the standard built-in GeoIP nginx module:
http://nginx.org/en/docs/http/ngx_http_geoip_module.html
geoip_country CountryCity.dat;
geoip_city CountryCity.dat;
geoip_org Organization.dat;
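Once the databases are loaded, the module exposes variables such as $geoip_country_code, $geoip_city and $geoip_org. A sketch of wiring this up and passing the values to PHP-FPM, in the spirit of the linked guide; the file paths and FastCGI address are placeholders, and depending on the database edition the combined country+city file may need to be loaded via geoip_city only (it also populates the country variables):

http {
    geoip_country /usr/share/GeoIP/CountryCity.dat;
    geoip_city    /usr/share/GeoIP/CountryCity.dat;
    geoip_org     /usr/share/GeoIP/Organization.dat;

    server {
        listen 80;

        location ~ \.php$ {
            include       fastcgi_params;
            fastcgi_pass  127.0.0.1:9000;
            # Expose the GeoIP lookups to the application.
            fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
            fastcgi_param GEOIP_CITY         $geoip_city;
            fastcgi_param GEOIP_ORG          $geoip_org;
        }
    }
}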