JFrog Xray Certificate Issue - Artifactory

Summary:
The Xray Helm chart needs the capability to receive a custom certificate used for Artifactory and apply that certificate to the router container.
Detail:
We have successfully installed Artifactory via Helm. After installing, we configured TLS for the Artifactory web application using a custom certificate. When we try to deploy Xray via Helm, the Xray server pod's router container continually fails to connect to Artifactory with the following error:
Error: Get https://[url redacted]/access/api/v1/system/ping: x509: certificate signed by unknown authority
There does not appear to be any way to pass in a secret containing the custom certificate. It looks like the only option at this time is to customize the Helm chart to install the certificate in the container, but that would put us out of sync with JFrog for receiving vulnerability database updates or any other updates Xray requests from JFrog.
Edit - Someone is having the same issue. Can Xray even do what we are trying to do then?
Update from JFrog Support:
Regarding the query on adding/importing the CA cert chain to Xray, we already have a Jira for this and our team is currently working on it. As a workaround, I would request you to mount a custom volume with the SSL cert, then run the command to import the SSL cert into the cacerts file from the init container.
Workaround:
Create a Kubernetes ConfigMap containing the root and subordinate CA certificates and mount it into the xray-server container at /usr/local/share/ca-certificates. Then log into the node, docker exec -it -u root into the Xray server container (since the container runs as a non-root user), and run update-ca-certificates to import the CA certs. Once this is done, Xray will be able to talk to Artifactory.
The drawback of this workaround is that the steps above have to be repeated every time the container restarts.
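
For reference, a minimal sketch of that workaround, assuming the release runs in a namespace called xray and the CA files are named root-ca.crt and subordinate-ca.crt (all names are placeholders; adjust to your deployment):

# 1) Create a ConfigMap holding the root and subordinate CA certificates
kubectl -n xray create configmap artifactory-ca \
  --from-file=root-ca.crt --from-file=subordinate-ca.crt
# 2) Mount that ConfigMap into the xray-server container at
#    /usr/local/share/ca-certificates (via the chart values or a patch)
# 3) On the node running the pod, exec into the container as root
#    (kubectl exec cannot switch users, hence docker exec -u root) and import the certs
docker exec -it -u root <xray-server-container-id> update-ca-certificates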

Related

JFrog Artifactory high availability and maintenance

We are using a JFrog Artifactory self-hosted instance with a license for our project, and many customers use it for their package and binary management.
Since this is hosted in our private self-hosted environment on a Linux platform, we regularly need a maintenance window at least twice a month to apply patches to our servers. We are therefore considering high availability for our currently running JFrog instance, which should remove this downtime during maintenance. We are also looking for guidance on the management scenarios below and couldn't find anything helpful in the docs.
How can the JFrog server instance service status be monitored, with an automatic restart if the service is in a failed state after a server reboot?
Is there any way to set and display a notification message to the customers regarding scheduled maintenance?
How can we enable high availability for JFrog Artifactory and Xray?
Here are some of the workarounds you can follow to mitigate the situation:
To monitor the health of the JFrog services you can use the REST API below:
curl -u <user>:<password> -XGET http://<Art_IP>:8046/router/api/v1/topology/health -H 'Content-Type: application/json'
If you are looking for a more lightweight check you can use:
curl -u <user>:<password> -XGET http://<Art_IP>:8081/artifactory/api/system/ping
By default, the systemctl scripts check for the availability of the services and restart them when they detect a failure. The same applies after a system restart.
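For example, assuming the default artifactory.service unit name used by the JFrog installers, you can check this behaviour with:

# check the current state of the Artifactory service
systemctl status artifactory.service
# show the restart policy configured in the unit (the bundled unit handles restarts on failure)
systemctl show -p Restart artifactory.service
# restart manually if it is ever stuck in a failed state
sudo systemctl restart artifactory.service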
There is no option for a pop-up message; however, you can set a custom message as a banner in Artifactory. Navigate to Administration -> General settings -> Custom message. See the JFrog wiki for details.
When you add another node to the mix, Artifactory/Xray becomes a cluster that balances the load (or acts as a failover); however, it is the responsibility of the load balancer/reverse proxy to route traffic between the cluster nodes according to the availability of the backend nodes.
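
As an illustration only (not the official JFrog reverse proxy template), a minimal nginx upstream for two Artifactory nodes could look roughly like this; the host names, port and file path are assumptions:

# write a minimal upstream definition for two Artifactory nodes (node2 acts as failover)
cat > /etc/nginx/conf.d/artifactory-upstream.conf <<'EOF'
upstream artifactory {
    server artifactory-node1.example.com:8082;
    server artifactory-node2.example.com:8082 backup;
}
EOF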

How to set up a Remote Docker Registry behind a firewall (WAF)?

We have JFrog Artifactory set up behind an enterprise firewall (FortiGate WAF). The FortiGate uses certificates for authentication.
The CI pipeline pushes the images to GCR, and we want to use an Artifactory remote Docker registry to proxy the images into the on-prem zone.
When we set up Artifactory proxies, there is only a username and password option for authentication to the proxy. How do we set up the proxy to use a certificate for authentication?
I believe you need to follow the steps below to add the certificate for authentication purposes (assuming your Artifactory version is 7.x):
Navigate to UI --> Administration --> Services | Artifactory --> Security | Certificates.
Add the PEM file as a certificate and give it an alias name.
Add the certificate details in the repository's SSL/TLS certificate section in the basic tab.
Authentication via the certificate should then work without issues.
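
Before wiring this into the repository, it can help to confirm that the client certificate pair is actually accepted by the FortiGate in front of the registry; a quick check with curl (host name and file names are placeholders):

# verify the client certificate/key pair against the registry endpoint behind the WAF
curl --cert client.pem --key client.key https://registry.behind-waf.example.com/v2/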

Juju vault causing problems when deploying openstack/base on MAAS and charmed-kubernetes

I have deployed openstack/base on MAAS as indicated here. After I tried to deploy charmed-kubernetes with an openstack-integrator and vault overlay, I cannot run openstack client commands on the MAAS node, and the images uploaded to the dashboard are not recognized, which means the Ubuntu charms cannot be deployed. When I run, for example,
openstack catalog list
I get
Failed to discover available identity versions when contacting https://keystone_ip:5000/v3. Attempting to parse version from URL.
SSL exception connecting to https://keystone_ip:5000/v3/auth/tokens: HTTPSConnectionPool(host='keystone_ip', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)')))
However, when I ssh into the keystone container, there is a keystone_juju_ca_cert.crt with
Issuer: CN = Vault Root Certificate Authority (charm-pki-local)
and
Subject: CN = Vault Root Certificate Authority (charm-pki-local)
I have also tried to reissue the certificates and refresh the secrets through actions in the vault application, but to no avail.
Can somebody help me here?
I don't know anything about Juju or OpenStack, but it looks to me like the problem isn't on the keystone container but on your local machine (or wherever you are running this openstack catalog list command). The local machine doesn't appear to have the charm-pki-local CA certificate installed, so it can't verify the connection to the keystone server.
You need to get the root CA from Vault using juju and then reference that file in the openrc file as the OS_CACERT environment variable.
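A sketch of that, assuming a Juju version that still uses run-action and that the application is named vault (the get-root-ca action comes from the vault charm):

# fetch the root CA from the vault charm
juju run-action --wait vault/leader get-root-ca
# save the certificate from the action output to a file, e.g. ~/vault_ca.crt, then either
export OS_CACERT=~/vault_ca.crt
# or set OS_CACERT inside the openrc file before sourcing it; afterwards the client can verify keystone
openstack catalog list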

How to add a certificate for authentication to a repository

To install the nginx-plus package I need to add a certificate (https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/#installing-nginx-plus-on-debian-and-ubuntu). I want to create a mirror repository, but I don't understand where I should add the certificate for authentication.
When configuring the remote repository in Artifactory that will be used to proxy the NGINX Debian repository, you can add the SSL certificate in the Advanced Settings tab. Inside the Remote Authentication section you will find a field for setting the SSL/TLS certificate this repository should use for authentication to the remote resource it proxies.
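
As a quick sanity check before adding it to Artifactory, you can verify the NGINX Plus client certificate pair directly against the upstream repository (the paths follow the NGINX docs; the URL may differ for your distribution):

# the NGINX Plus repo authenticates clients with the certificate/key issued for your subscription
curl --cert /etc/ssl/nginx/nginx-repo.crt --key /etc/ssl/nginx/nginx-repo.key \
  https://pkgs.nginx.com/plus/ubuntu/dists/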

Does Artifactory provide certificate-based authentication for a Docker registry?

I am looking for certificate-based authentication instead of username and password for Artifactory Docker registry login.
The answer is that Artifactory doesn't have this capability; however, since you will have to implement a reverse proxy in front of Artifactory to work with Docker anyway, you might be able to implement this capability there.
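
For instance, a hedged nginx fragment showing the idea (server name, paths and port are assumptions, and a real Docker registry proxy needs additional headers and routing per the JFrog reverse proxy docs):

cat > /etc/nginx/conf.d/docker-client-cert.conf <<'EOF'
server {
    listen 443 ssl;
    server_name docker.example.com;
    ssl_certificate         /etc/nginx/ssl/server.crt;
    ssl_certificate_key     /etc/nginx/ssl/server.key;
    # trust store for client certificates and enforcement of client-cert auth
    ssl_client_certificate  /etc/nginx/ssl/trusted-client-ca.crt;
    ssl_verify_client       on;
    location / {
        # forward to Artifactory once the client certificate has been verified
        proxy_pass http://localhost:8082;
    }
}
EOF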
