I'm doing a helm install of JFrog XRay and I'm running into a snag. We have Artifactory installed internally using a company self-signed cert and XRay won't come up because of it. The router component fails with 'certificate signed by unknown authority'. How do I get past this issue? Is there a method for injecting the root cert into the container?
I assume this is an Xray 3.x installation connecting to Artifactory 7.x. If so, based on this wiki, you need to place the self-signed certificate under the $JFROG_HOME/xray/var/etc/security/keys/trusted directory. As this is a Helm-based install, a Kubernetes secret should be created and mounted at that location. Also, refer to this GitHub.
As Xray is written in Go, you can also make it trust self-signed certificates by placing them in /etc/ssl/certs/ and restarting Xray.
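A minimal sketch of the secret-plus-mount approach described above. The secret name, file names, namespace, and the customVolumes/customVolumeMounts value keys are assumptions; the exact keys vary between chart versions, so check your chart's values.yaml:

```shell
# Create a secret from the company root CA (file name is an assumption)
kubectl create secret generic xray-ca-cert \
  --from-file=ca.crt=./company-root-ca.pem -n xray

# Mount the secret into the trusted-keys directory via a values override.
# customVolumes/customVolumeMounts names are chart-version dependent.
cat > values-ca.yaml <<'EOF'
xray:
  customVolumes: |
    - name: xray-ca-cert
      secret:
        secretName: xray-ca-cert
  customVolumeMounts: |
    - name: xray-ca-cert
      mountPath: /opt/jfrog/xray/var/etc/security/keys/trusted/ca.crt
      subPath: ca.crt
EOF

helm upgrade --install xray jfrog/xray -n xray -f values-ca.yaml
```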
I am currently using Symfony with React and webpack to build an application, and I use the Symfony CLI development server. I would like to turn this application into a Progressive Web Application (PWA) following the cookbook at PWA Workshop. However, from what I'm gathering, a fully trusted SSL certificate is required for mobile testing, etc., and the use of mkcert is recommended (or maybe Let's Encrypt). I've already followed the process to enable TLS on the Symfony CLI server; however, the certificates it generates appear to be self-signed and are not fully trusted.
Is there a way to either pass the trusted mkcert certificates to the Symfony CLI server on the command line, or reference them in Symfony config files, so that the server uses them instead of the self-signed ones generated by the symfony server:ca:install command? (mkcert generates two .pem files, and other non-PHP development servers such as http-party/http-server can take them directly on the command line.) My workaround is to configure my local apache2 server with the certificates, but I'd like to keep using the Symfony server for debugging.
UPDATE
I failed to mention that my development environment is WSL2 on Windows 10, which seems to be the root of the problem with getting trusted certificates to work. Since the browsers run in Windows and the servers run in WSL, the Windows browsers don't accept the certificates. My current workaround for mkcert and apache2 running on WSL is to:
1. Install mkcert on both WSL and Windows, and run mkcert -install in both environments.
2. Copy the Windows root certs from the trusted store created in Windows to the trusted store created in WSL. You can find these stores by running mkcert -CAROOT in the respective environments.
3. Run mkcert localhost 127.0.0.1 and put the resulting cert pair in a folder somewhere, per mkcert's instructions.
4. Configure apache2 to use SSL with the mkcert cert pair, following instructions found all over the internet.
I found this workaround at mkcert solution.
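The steps above can be sketched as commands run inside WSL. The Windows CAROOT path and the generated file names are assumptions (query the real path with mkcert.exe -CAROOT; mkcert.exe must be on the Windows PATH to call it from WSL):

```shell
# 1. Install the local CA in both environments
mkcert -install          # inside WSL
mkcert.exe -install      # Windows binary, callable from WSL if on PATH

# 2. Copy the Windows root CA pair into the WSL mkcert trust store.
#    Replace the path below with the output of: mkcert.exe -CAROOT
WIN_CAROOT="/mnt/c/Users/<you>/AppData/Local/mkcert"
WSL_CAROOT="$(mkcert -CAROOT)"
cp "$WIN_CAROOT"/rootCA.pem "$WIN_CAROOT"/rootCA-key.pem "$WSL_CAROOT"/

# 3. Issue a cert pair for localhost (writes localhost+1.pem / localhost+1-key.pem)
mkcert localhost 127.0.0.1

# 4. Point apache2's SSLCertificateFile / SSLCertificateKeyFile at that pair
```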
However, this issue still remains for the Symfony server. It is unclear where the certificates (root or otherwise) are installed when symfony server:ca:install is run, and whether there is a way to copy those certificates so that the servers can run in WSL and Windows will accept them. The Symfony docs also don't indicate whether those certs are just self-signed or trusted root certs like mkcert's.
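One possible avenue, assuming your Symfony CLI version supports passing a custom certificate (recent releases document a --p12 option; verify with symfony server:start --help), is to have mkcert emit a PKCS#12 bundle and hand it to the server directly, bypassing the CLI's own CA:

```shell
# Generate a PKCS#12 bundle signed by the mkcert root CA
mkcert -pkcs12 localhost 127.0.0.1

# Start the Symfony dev server with that bundle instead of its own CA.
# The --p12 flag is an assumption: confirm it exists in your CLI version.
symfony server:start --p12=./localhost+1.p12
```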
I am trying to run the dotnet restore command in the build step. My NuGet store is in Artifactory. Despite installing the corporate certificate, I am getting an SSL issue. My VSTS agent is running in an Ubuntu container on Kubernetes:
Retrying 'FindPackagesByIdAsyncCore' for source 'https://<artifactory url>/nuget/FindPackagesById()?id='Client.HostingStartup'&semVerLevel=2.0.0'.
The SSL connection could not be established, see inner exception.
The remote certificate is invalid according to the validation procedure.
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
Do we need to do any additional steps for the VSTS agent to pick up self-signed certificates?
If installing the corporate certificate in the Ubuntu container that hosts your VSTS agent does not work, you can try the workarounds below:
1. Create a NuGet service connection to authenticate against the Artifactory NuGet server.
You can create the service connection by adding a new Credentials for feeds ... entry in the dotnet restore task: click + New, then enter your credentials for the Artifactory server.
2. Another possible workaround is creating an Artifactory service connection and using the Artifactory NuGet task to restore your packages. See Artifactory Azure DevOps Extension for more information.
Install the JFrog Artifactory extension in your Azure DevOps organization, then add the Artifactory NuGet task to your pipeline to restore your packages. Click + New from the Artifactory NuGet task to add an Artifactory service connection.
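For reference, the usual way to make an Ubuntu container (and therefore .NET, which relies on the system OpenSSL trust store) trust a corporate CA is roughly the following; the certificate file name is an assumption, and the file must be a PEM certificate with a .crt extension:

```shell
# Copy the corporate root CA into the system trust store
cp corporate-root-ca.crt /usr/local/share/ca-certificates/

# Rebuild /etc/ssl/certs so OpenSSL -- and dotnet restore -- pick it up
update-ca-certificates

# Restart the agent process afterwards so it reloads the trust store
```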
I have set up Artifactory OSS Version 6.9.1 on an AWS instance behind an ELB and have been successfully deploying builds to it from GitLab CI/CD. I am now trying to set up a local Artifactory OSS Version 6.10.0 on my laptop so that I can develop builds locally before sharing with the team.
My local artifactory connects perfectly to JCenter and I can browse that repository.
My gradle build will happily connect to the AWS hosted artifactory at http://{URL}/artifactory and resolve my dependencies.
When I connect a remote repository with http://{URL}/artifactory, I get a 500 Internal Error on Test. If I take off the /artifactory, it says it has connected successfully, but when I try to browse the remote repo it is empty.
I read Connect one Artifactory to another Artifactory and followed the instructions to edit the JSON configuration and make the remote repository a smart repository: https://www.jfrog.com/confluence/display/RTF/Smart+Remote+Repositories.
It now has the smart repository image but still cannot be browsed.
Gradle still cannot resolve dependencies with the local artifactory using the remote-repo name.
As stated in the Smart Remote Repository documentation, you should configure the remote repository URL with the following structure:
http://ARTIFACTORY_URL/api/package-type/repository-key
So if you have a Gradle repository named "gradle-test", the URL should be:
http://ARTIFACTORY_URL/api/gradle/gradle-test
Hope this helps.
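Assuming a repository named gradle-test, you can sanity-check the URL from the command line before wiring it into the remote repo configuration (host and credentials are placeholders):

```shell
# A 200 response with a listing suggests the URL is correct;
# a 404/500 usually means the context path or repository key is wrong.
curl -u admin:password "http://ARTIFACTORY_URL/api/gradle/gradle-test/"
```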
In the end it turned out to be ridiculously simple: when setting up the remote repository, the repository key needs to be in the URL as well as in the key field.
So for a repo with repository key 'fractal', the connection URL is, counter-intuitively, http://{URL}/artifactory/fractal
How can we install JFrog Xray with our own application user instead of the xray user and group on a standalone Red Hat server?
The xray and rabbitmq users and groups are created while installing JFrog Xray, but we need the installation to be owned and executed by our own user and group. How can we do that?
Why is it necessary to run the Jfrog Xray installation script under root or as sudo user?
The general answer for all of them is the same: Your points are valid and the current situation is suboptimal. The xray user shouldn't be hardcoded for our microservices and root shouldn't be required for installing 3rd-party services, especially if the user installs those by themselves. We are working on fixing all that.
I am with JFrog, the company behind JFrog Xray and Artifactory; see my profile for details and links.
I'm trying to find any reference for installing a decryption certificate to BizTalk host using PowerShell. I was unable to find any reference.
Installing certificates isn't specific to BizTalk.
You should install them on the server and afterwards select the certificate on the BizTalk port.
In my experience, you install certificates using certmgr.msc. If you need to use PowerShell, search the web for an installation script.
After installation, you can select the certificate on the BizTalk port under the Security tab.
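As a starting point, a PowerShell sketch for importing a decryption certificate (with its private key) into the Local Machine store might look like the following; the file path and store location are assumptions to adapt to your setup:

```powershell
# Import a PFX (certificate + private key) into LocalMachine\My.
# Adjust -CertStoreLocation if your BizTalk host expects a different store.
$password = Read-Host -AsSecureString -Prompt "PFX password"
Import-PfxCertificate -FilePath "C:\certs\decryption.pfx" `
                      -CertStoreLocation Cert:\LocalMachine\My `
                      -Password $password
```

After the import, the certificate can be selected on the BizTalk port's Security tab as described above.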