We have JFrog Artifactory set up behind an enterprise firewall (FortiGate WAF). The FortiGate uses a certificate for authentication.
The CI pipeline pushes the images to GCR, and we want to use an Artifactory remote Docker registry to proxy the images into the on-prem zone.
When we set up proxies in Artifactory, the only authentication options offered are username and password. How do we configure the proxy to use a certificate for authentication?
I believe you need to follow the steps below to add the certificate for authentication purposes (assuming your Artifactory version is 7.x):
Navigate to UI --> Administration --> Services | Artifactory --> Security | Certificates
Add the PEM file as a certificate and give it an alias.
Select that certificate in the repository's SSL/TLS Certificate section on the Basic tab.
Authentication via the certificate should then work without issues.
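If you would rather script this than use the UI, recent Artifactory versions also expose a certificates REST endpoint (to the best of my knowledge, POST /api/system/security/certificates/<alias>). A minimal sketch in Python, where the host, alias and token are placeholders and the endpoint itself should be checked against your version's REST API documentation:

    import requests

    ARTIFACTORY = "https://artifactory.example.com/artifactory"  # placeholder base URL
    ALIAS = "fortigate-client-cert"                              # placeholder certificate alias
    TOKEN = "<admin-access-token>"                               # placeholder credentials

    # Upload the PEM (certificate plus private key) under the chosen alias,
    # which then appears under Security | Certificates in the UI.
    with open("client-cert.pem", "rb") as pem:
        resp = requests.post(
            f"{ARTIFACTORY}/api/system/security/certificates/{ALIAS}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data=pem.read(),
        )
    resp.raise_for_status()
    print(resp.status_code, resp.text)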
To install the nginx-plus package I need to add a certificate (https://docs.nginx.com/nginx/admin-guide/installing-nginx/installing-nginx-plus/#installing-nginx-plus-on-debian-and-ubuntu). I want to create a mirror repository, but I don't understand where I should add the certificate for authentication.
When configuring the remote repository in Artifactory that will proxy the NGINX Debian repository, you can add the SSL certificate in the Advanced settings tab. In the Remote Authentication section you will find a field for selecting the SSL/TLS certificate this repository should use to authenticate to the remote resource it proxies.
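If you manage repositories through the REST API instead of the UI, the same setting can, as far as I know, also be written through the repository configuration endpoint; the host, repository key, credentials and the clientTlsCertificate field name below are assumptions based on the standard remote-repository configuration JSON, so verify them against your version. A rough sketch:

    import requests

    ARTIFACTORY = "https://artifactory.example.com/artifactory"  # placeholder base URL
    REPO_KEY = "nginx-plus-debian-remote"                        # placeholder repository key
    AUTH = ("admin", "<password-or-token>")                      # placeholder credentials

    # Read the current remote repository configuration...
    cfg = requests.get(f"{ARTIFACTORY}/api/repositories/{REPO_KEY}", auth=AUTH).json()

    # ...reference the certificate alias uploaded under Security | Certificates...
    cfg["clientTlsCertificate"] = "nginx-plus-client-cert"       # placeholder alias

    # ...and write the updated configuration back (POST updates an existing repo).
    resp = requests.post(f"{ARTIFACTORY}/api/repositories/{REPO_KEY}", json=cfg, auth=AUTH)
    resp.raise_for_status()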
I am looking for certificate-based authentication instead of username and password when logging in to an Artifactory Docker registry.
The answer is that Artifactory doesn't have this capability. However, since you have to put a reverse proxy in front of Artifactory to work with Docker anyway, you might be able to implement certificate authentication there.
I am facing the following issue.
1 - I have deployed a WCF service with SSL enabled on a remote IIS machine and am trying to use it from my web client. The problem is that my browser is not allowing this service to be called. Before using my web client I have to hit the service URL from my browser directly and then allow the certificate.
2 - I got a suggestion to export the certificate from the machine where the WCF service is deployed and add that certificate file to the Trusted certificates store on my machine. After I did that I got the same problem when I tried to access the web service from the web client. So I hit the service URL from the browser and got the same page asking me to trust the certificate, but with a different message: "You attempted to reach 111.121.196.226 (the IP address of the WCF machine), but instead you actually reached a server identifying itself as WMSvc-domain", where "WMSvc-domain" is the value of the "Issued To" field in the certificate.
I hope I have made myself clear. Waiting for suggestions. Thank you.
WMSvc-machinename is the IIS Windows Management Service, which runs by default on 8172/tcp and is used for remotely managing IIS. When installed, the default is to create a self-signed certificate, which won't be trusted. It can be replaced with a "proper" CA-signed cert through the Management Service icon in IIS Manager.
Is there a way to automatically deploy a .NET/Windows based Amazon Elastic Beanstalk instance with an SSL cert?
I already have the DNS for the domain in the SSL cert set up to point to the Beanstalk instance.
I can remote in and configure the server manually but I was wondering if there is a way to make it part of the deployment package (similar to what Windows Azure has).
If this isn't built in to Elastic Beanstalk, are there any hooks to run PowerShell scripts after deployment (or update) of my instance?
The AWS Elastic Beanstalk Developer Guide explains how to enable an SSL certificate for your Elastic Beanstalk environment.
The relevant part is:
Controlling the HTTPS port

Elastic Load Balancing supports the HTTPS/TLS protocol to enable traffic encryption for client connections to the load balancer. Connections from the load balancer to the EC2 instances are done using plaintext. By default, the HTTPS port is turned off.

To turn on the HTTPS port

1. Create and upload a certificate and key to the AWS Identity and Access Management (AWS IAM) service. The IAM service will store the certificate and provide an Amazon Resource Name (ARN) for the SSL certificate you've uploaded. For more information on creating and uploading certificates, see the Managing Server Certificates section of Using AWS Identity and Access Management.
2. Specify the HTTPS port by selecting a port from the HTTPS Listener Port drop-down list.
3. In the SSL Certificate ID text box, enter the Amazon Resource Name (ARN) of your SSL certificate (e.g., arn:aws:iam::123456789012:server-certificate/abc/certs/build). Use the SSL certificate that you created and uploaded in step 1. For information on viewing the certificate's ARN, see the Verify the Certificate Object topic in the Creating and Uploading Server Certificates section of the Using IAM Guide.
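If you want to script this instead of clicking through the console, a rough sketch of the same steps with boto3 is below. The certificate name, environment name and region are placeholders, and the aws:elb:loadbalancer option names are my reading of the classic load-balancer namespace, so check them against the current option reference:

    import boto3

    # Step 1: upload the certificate and key to IAM and capture the resulting ARN.
    iam = boto3.client("iam")
    with open("site.crt") as crt, open("site.key") as key:
        cert = iam.upload_server_certificate(
            ServerCertificateName="my-eb-cert",          # placeholder name
            CertificateBody=crt.read(),
            PrivateKey=key.read(),
        )
    cert_arn = cert["ServerCertificateMetadata"]["Arn"]

    # Steps 2-3: point the environment's HTTPS listener at that certificate.
    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")
    eb.update_environment(
        EnvironmentName="my-env",                        # placeholder environment name
        OptionSettings=[
            {"Namespace": "aws:elb:loadbalancer",
             "OptionName": "LoadBalancerHTTPSPort", "Value": "443"},
            {"Namespace": "aws:elb:loadbalancer",
             "OptionName": "SSLCertificateId", "Value": cert_arn},
        ],
    )

Because this runs against the API rather than on the instance, it also sidesteps the need to remote in or run a post-deployment script on the server.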
I have a website on our Internal network that is also accessible to the public. I have purchased and installed an SSL certificate for that public site. The site is available using both https://site.domain.com (Public) and https://site.domain.local (Internal).
The problem I am having is creating and installing a self-signed certificate for the internal "site.domain.local" so that people on our internal network do not get the security warning. I have a keystore in the root folder and also created a self-signed certificate in that keystore with no luck. The public key is working just fine. I am running Debian linux with Tomcat 7 installed and I am also using Active Directory on the network with Microsoft DNS. Any and all help would be greatly appreciated. If you need more details, please ask.
Not sure I fully understand your setup, but you could front your Tomcat with Apache, install the cert on the Apache instance, and then reverse-proxy (plain HTTP) to your Tomcat instance. People would access the Apache instance, which would handle the SSL connection.
One way would be to add the CA certificate to every client's trusted certificate store (which is not convenient): the client clicks on the certificate warning message and installs/trusts the self-signed X.509 CA certificate. If this doesn't work, there is a problem with the certificate (though most OpenSSL-generated files, .CER/.CRT/.P12/.PFX, will install with no problem on recent Windows).
If one client accepts the self-signed certificate after manual setup, you can try to roll these certificates out with Active Directory: basically you add the trusted CA cert to your AD, and clients synchronize automagically (mostly on login). See here for a hint about setting this up with AD: http://support.microsoft.com/kb/295663/en-us (you may try this or dig in that direction; with AD, you never know).
Another possibility would be to set up your internal DNS to point site.domain.com to the local web site address (the easy way). You can test this setup with the /etc/hosts file on Linux/Unix flavours (or system32/drivers/etc/hosts on Windows).
If your certificate is for site.domain.com and users are going to site.domain.local and getting that cert, then clearly there is a name mismatch and the browser will always warn you.
You either need to:
- get the cert regenerated with BOTH names (for example as Subject Alternative Names; a sketch follows this list),
- get a cert for just the internal site, or
- mangle DNS so that when your internal users go to site.domain.com they get the IP address of site.domain.local.
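For the first option, here is a minimal sketch of generating a self-signed certificate that covers both hostnames as Subject Alternative Names, using Python's cryptography package; the file names, key size and validity period are placeholders, and for a warning-free public site you would have your CA reissue the real cert with both names instead:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate a private key and a self-signed certificate valid for both names.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "site.domain.com")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                       # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(
            x509.SubjectAlternativeName([
                x509.DNSName("site.domain.com"),
                x509.DNSName("site.domain.local"),
            ]),
            critical=False,
        )
        .sign(key, hashes.SHA256())
    )

    # Write out a PEM key/cert pair; convert these into the keystore format
    # Tomcat expects (e.g. via PKCS#12) before wiring them into the connector.
    with open("site.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("site.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

Browsers will still warn unless the self-signed cert (or the CA that signed it) is trusted by the clients, e.g. pushed out via the Active Directory approach described above, so the SAN fix and the trust distribution go together.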