I am looking into certificate-based authentication, instead of username and password, for logging in to an Artifactory Docker registry.
The answer is that Artifactory does not have this capability. However, since you will need to put a reverse proxy in front of Artifactory to work with Docker anyway, you may be able to implement certificate-based authentication there.
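For example, with an nginx reverse proxy you could enforce client-certificate authentication before requests ever reach Artifactory. This is only a sketch; the server name, certificate paths, and upstream port are placeholders for your environment:

```nginx
server {
    listen 443 ssl;
    server_name docker.example.com;

    # Server-side TLS (placeholder paths)
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Require a client certificate signed by this CA
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
    ssl_verify_client on;

    location / {
        # Forward authenticated requests to Artifactory (placeholder port)
        proxy_pass http://localhost:8082;
        proxy_set_header Host $host;
    }
}
```

With `ssl_verify_client on`, nginx rejects any connection that does not present a valid client certificate, so only certificate-holding Docker clients can reach the registry behind it.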
Related
We have JFrog Artifactory set up behind an enterprise firewall (a FortiGate WAF). The FortiGate uses certificates for authentication.
The CI pipeline pushes images to GCR, and we want to use an Artifactory remote Docker repository to proxy the images into the on-prem zone.
When we set up the Artifactory proxy, there is only a username-and-password option for authentication. How can we set up the proxy to use a certificate for authentication instead?
I believe you need to follow the steps below to add the certificate for authentication purposes (assuming your Artifactory version is 7.x):
Navigate to UI --> Administration --> Services | Artifactory --> Security | Certificates
Add the PEM file as a certificate, giving it an alias.
Select that certificate in the repository's SSL/TLS Certificate section on the Basic tab.
Authentication via the certificate should then work without issues.
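The first two steps can also be scripted against the Artifactory REST API, which exposes an endpoint for uploading certificates by alias. The host, alias, and credentials below are placeholders for your environment:

```shell
# Sketch: upload a PEM certificate under an alias via the Artifactory
# REST API (Artifactory 7.x). All names here are placeholders.
ARTIFACTORY_URL="https://artifactory.example.com/artifactory"
ALIAS="my-ca-cert"
ENDPOINT="$ARTIFACTORY_URL/api/system/security/certificates/$ALIAS"

# Upload the PEM under the alias (requires admin credentials):
# curl -u "admin:password" -X POST "$ENDPOINT" \
#      -H "Content-Type: application/text" -T my-ca-cert.pem
echo "POST $ENDPOINT"
```

After the upload, the alias should appear in the Certificates list in the UI and can be selected on the repository as in step 3.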
According to the Artifactory documentation:
For best security, when using Artifactory behind a reverse proxy, it must be co-located on the same machine as the web server, and Artifactory should be explicitly and exclusively bound to localhost.
How can I configure Artifactory so that it is bound to localhost only?
As of Artifactory version 7.12.x, there are two endpoints exposed for accessing the application:
Port 8082 - all the Artifactory services (UI + API) via the JFrog router
Port 8081 - direct to the Artifactory service API running on Tomcat (for better performance)
The JFrog Router does not support specific binding configuration today.
Tomcat can be controlled by setting a custom address="127.0.0.1" on the relevant connector.
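A sketch of what that looks like in Tomcat's server.xml; only the address="127.0.0.1" attribute is the relevant addition, and the other attributes shown are illustrative and should match your existing connector:

```xml
<!-- Bind the Artifactory connector to localhost only -->
<Connector port="8081" address="127.0.0.1" maxThreads="200"/>
```

With this in place, port 8081 is reachable only from the local machine, e.g. by a co-located reverse proxy.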
Your best bet would be to simply close all ports on the server running Artifactory and allow entry only to the web server's port. This is best practice anyway for security-aware systems.
IMPORTANT:
If using other JFrog products like JFrog Xray or JFrog Pipelines, they rely on direct access to the Artifactory router, so your security rules should take that into consideration.
You can find a map of all JFrog platform ports on the official Wiki page.
I have completed an automated Ansible install and have most of the wrinkles worked out.
All of my services except the Nodes are running on a single box over non-secure HTTP. Though I specified port 443 in my inventory, I now see that this does not imply an HTTPS configuration, so I have non-secure API endpoints listening on 443.
Is there any way around the requirement of operating the CLC and Cluster Controller on different hardware, as described in the SSL how-to: https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/bps/configuring_ssl/
I've read that how-to and can only guess that installing certs on the CLC messes up the Cluster Controller keys, but I don't fully grasp it. Am I wasting my time trying to find a workaround, or can I keep these services on the same box and still achieve SSL?
When you deploy Eucalyptus using the Ansible playbook, a script is made available:
# /usr/local/bin/eucalyptus-cloud-https-import --help
Usage:
eucalyptus-cloud-https-import [--alias ALIAS] [--key FILE] [--certs FILE]
which can be used to import a key and certificate chain from PEM files.
Alternatively you can follow the manual steps from the documentation that you referenced.
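An illustrative invocation of the script above; the alias and file paths are placeholders and should point at your actual key and chain:

```shell
# Import a key and certificate chain from PEM files (placeholder paths)
/usr/local/bin/eucalyptus-cloud-https-import \
  --alias cloud \
  --key /etc/pki/tls/private/cloud.key \
  --certs /etc/pki/tls/certs/cloud-chain.pem
```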
It is fine to use HTTPS with all components on a single host; the documentation is out of date.
Eucalyptus will detect whether an HTTP(S) connection is using TLS (SSL) and use the configured certificate when appropriate.
It is recommended to use the Ansible playbook's certbot / Let's Encrypt integration for the HTTPS certificate when possible.
When manually provisioning certificates, wildcards can be used (*.DOMAIN, *.s3.DOMAIN) so that all services and S3 buckets are covered. If a wildcard certificate is not possible, the certificate should include the service endpoint names (autoscaling, bootstrap, cloudformation, ec2, elasticloadbalancing, iam, monitoring, properties, route53, s3, sqs, sts, swf).
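As a sketch of the wildcard approach, here is how a self-signed certificate covering both *.DOMAIN and *.s3.DOMAIN could be generated with subject alternative names (requires OpenSSL 1.1.1+ for -addext; the domain is a placeholder, and in production you would use your CA or Let's Encrypt instead):

```shell
# Self-signed wildcard certificate with SANs for both name patterns
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout cloud.key -out cloud.crt \
  -subj "/CN=*.cloud.example.com" \
  -addext "subjectAltName=DNS:*.cloud.example.com,DNS:*.s3.cloud.example.com"
```

The resulting cloud.key and cloud.crt can then be imported with the eucalyptus-cloud-https-import script mentioned above.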
Summary:
The Xray Helm chart needs the capability to receive the custom certificate used for Artifactory and apply it to the router container.
Detail:
We have successfully installed Artifactory via Helm. After installing, we configured TLS for the Artifactory web application using a custom certificate. When trying to deploy Xray via Helm, the Xray server pod's router container continually fails to connect to Artifactory with the error message:
Error: Get https://[url redacted]/access/api/v1/system/ping: x509: certificate signed by unknown authority
There does not appear to be any way to pass in a secret containing the custom certificate. It looks like the only option at this time is to customize the Helm chart to install the certificate in the container, but that will put us out of sync with JFrog for receiving vulnerability database updates or any other updates Xray requests from JFrog.
Edit - Someone else is having the same issue. Can Xray even do what we are trying to do, then?
Update from JFrog Support:
With regards to the query on adding/importing the CA cert chain to Xray, we already have a Jira for this and our team is currently working on it. As a workaround, I would request you to mount a custom volume with the SSL cert, then run the command to import the SSL cert into the cacerts file from the init container.
Workaround:
Create a Kubernetes ConfigMap containing the root and subordinate CA certificates, and mount it into the xray-server at /usr/local/share/ca-certificates. Then log into the node, do a docker exec -it -u root into the Xray server container (since the container runs as a non-root user), and run update-ca-certificates to import the CA certs. Once this is done, Xray will be able to talk to Artifactory.
The drawback of this workaround is that these steps need to be repeated every time the container restarts.
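Concretely, the workaround could look like the following. The ConfigMap name, namespace, certificate file names, and container ID are all placeholders, and the docker exec step must be run on the node hosting the pod:

```shell
# 1. Create a ConfigMap holding the root and subordinate CA certificates
#    (placeholder names and paths):
kubectl create configmap my-ca-bundle \
  --from-file=root-ca.crt=./root-ca.pem \
  --from-file=sub-ca.crt=./sub-ca.pem \
  -n xray

# 2. Mount the ConfigMap into the xray-server container at
#    /usr/local/share/ca-certificates (e.g. via the chart's custom
#    volume/volumeMount values), then on the node hosting the pod,
#    refresh the trust store as root:
docker exec -it -u root <xray-server-container-id> update-ca-certificates
```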
To install the nginx-plus package I have to add a certificate (https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/#installing-nginx-plus-on-debian-and-ubuntu). I want to create a mirror repository, but I don't understand where I should add the certificate for authentication.
When configuring the remote repository in Artifactory that will proxy the NGINX Debian repository, you can add the SSL certificate in the Advanced settings tab. In the Remote Authentication section you will find a field for setting the SSL/TLS certificate this repository should use to authenticate to the remote resource it proxies.