I have https://packages.cloud.google.com/yum configured as a remote repo in Artifactory.
My repo file on CentOS 7.3 looks like this:
[kubernetes]
name=kubernetes
baseurl=https://artifactory.company.com/artifactory/packages.cloud.google.com-yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
When I run yum install -y kubelet, it prints this error:
e7a4403227dd24036f3b0615663a37 FAILED
https://artifactory.company.com/artifactory/packages.cloud.google.com-yum/repos/kubernetes-el7-x86_64/../../pool/e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1-0.x86_64.rpm: [Errno 14] HTTPS Error 500 - Internal Server Error
Trying other mirror.
I am pretty sure the problem is the relative path in the URL: kubernetes-el7-x86_64/../../pool
If I wget the URL, it works fine, because wget resolves the relative path before sending the HTTP request; yum does not, and Artifactory returns a 500 when given a URL containing ../. Does anyone know how to enable relative URLs in Artifactory, or how to get yum to resolve URLs before sending the requests?
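For what it's worth, the difference seems reproducible with curl (a sketch; --path-as-is tells curl to send the raw path the way yum does, instead of collapsing the ../ segments first):

# Path collapsed client-side (like wget), succeeds:
curl -I "https://artifactory.company.com/artifactory/packages.cloud.google.com-yum/repos/kubernetes-el7-x86_64/../../pool/e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1-0.x86_64.rpm"
# Raw path sent (like yum), reproduces the 500:
curl -I --path-as-is "https://artifactory.company.com/artifactory/packages.cloud.google.com-yum/repos/kubernetes-el7-x86_64/../../pool/e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1-0.x86_64.rpm"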
I am running these versions:
Artifactory 5.2.0
Yum 3.4.3-150
Update: This is the HTTP response body from Artifactory:
{
  "errors" : [ {
    "status" : 500,
    "message" : "Could not process download request: Path element cannot end with a dot: packages.cloud.google.com-yum-cache/repos/kubernetes-el7-x86_64/../"
  } ]
}
The remote repository in Artifactory should be configured with the following URL:
https://packages.cloud.google.com/yum/
The baseurl on the yum client should point to the folder containing the repodata, like so:
baseurl=http://artifactory.company.com/artifactory/yum-remote/repos/kubernetes-el7-x86_64/
(The name of the remote repository here is 'yum-remote'.)
This should work without any further configuration from the Artifactory side.
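Putting that together, the client-side repo file would look something like this (a sketch; 'yum-remote' is the example repository name used above):

[kubernetes]
name=kubernetes
baseurl=http://artifactory.company.com/artifactory/yum-remote/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1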
The error you mentioned regarding the relative path 'kubernetes-el7-x86_64/../../pool' happens while the artifact is being cached.
Artifactory cannot cache to a path that contains the '..' pattern, so the request fails.
It can be solved on the Artifactory side with a user plugin: if the path contains the '..' pattern, the plugin modifies the path where the artifact will be cached so that it no longer includes the pattern.
This is now redundant, as the registry serves paths that do not include '..'.
Related
Error in jfrog-cli : The following error was received while trying to encrypt your password
The config command tried to encrypt your Artifactory password using an incorrect URL. Typically this happens when the user provides the JFrog platform URL as the Artifactory URL, or vice versa.
To fix it, you have two options:
Provide the Artifactory URL using the --artifactory-url flag:
jfrog config add artifactory-server --artifactory-url="$ARTIFACTORY_URL" --user="$ARTIFACTORY_USER" --password="$ARTIFACTORY_PASSWORD" --interactive=false
Provide the base platform URL using the --url flag:
jfrog config add artifactory-server --url="$JFROG_PLATFORM_URL" --user="$ARTIFACTORY_USER" --password="$ARTIFACTORY_PASSWORD" --interactive=false
For more information see JFrog Platform Configuration.
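For reference, the two URL shapes typically differ only by the /artifactory suffix (a hypothetical cloud example; mycompany is a placeholder):

JFrog platform URL (for --url):            https://mycompany.jfrog.io
Artifactory URL (for --artifactory-url):   https://mycompany.jfrog.io/artifactory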
I am using an Artifactory server. I am trying to store a folder of third party files in my repo.
I was able to upload a zip file and explode it, so my files are in MY_REPO/MY_FOLDER/.
I am trying to recursively download the contents of the folder using:
jfrog rt dl --recursive MY_REPO/MY_FOLDER
I have enabled debug and I get:
[Info] Searching items to download...
[Debug] Searching Artifactory using AQL query:
items.find({"repo": "MY_REPO","path": {"$ne": "."},"$or": [{"$and":[{"path": {"$match": "MY_FOLDER"},"name": {"$match": "*"}}]},{"$and":[{"path": {"$match": "MY_FOLDER/*"},"name": {"$match": "*"}}]}]}).include("name","repo","path","actual_md5","actual_sha1","size","type","property")
[Error] Artifactory response: 405 Method Not Allowed
{
  "status": "failure",
  "totals": {
    "success": 0,
    "failure": 0
  }
}
[Error] Download finished with errors. Please review the logs
I have tried variations of MY_FOLDER, MY_FOLDER/, and MY_FOLDER/., with and without --recursive. I get the same error.
I know I have permissions because I can upload a file:
jfrog rt u TEST_FILE MY_FOLDER
succeeds just fine. But I can't turn around and pull it down:
jfrog rt dl MY_FOLDER/TEST_FILE
results in:
[Info] Searching items to download...
[Debug] Searching Artifactory using AQL query:
items.find({"repo": "MY_REPO","$or": [{"$and":[{"path": {"$match": "."},"name": {"$match": "MY_FILE"}}]}]}).include("name","repo","path","actual_md5","actual_sha1","size","type","property")
[Error] Artifactory response: 405 Method Not Allowed
{
  "status": "failure",
  "totals": {
    "success": 0,
    "failure": 0
  }
}
[Error] Download finished with errors. Please review the logs
I'm not certain what isn't working; there's nothing on Stack Overflow, in the JFrog docs, or elsewhere about issues using the JFrog CLI. :(
My jfrog-cli.conf looks like this:
{
  "artifactory": [
    {
      "url": "https://SITE_NAME:PORT_NUM/artifactory/list/MY_REPO/",
      "user": "USERNAME",
      "serverId": "MY_REPO",
      "isDefault": true,
      "apiKey": "API_KEY_FROM_ARTIFACTORY"
    }
  ],
  "Version": "1"
}
UPDATED
I was running version 1.22.1 for Windows 64-bit. I am now using JFrog CLI version 1.34.1 for Windows 64-bit. Per the first comment, I updated to the latest binary and tried running a search, both with and without MY_FOLDER:
jfrog rt s TEST_FILE and
jfrog rt s MY_FOLDER/TEST_FILE
Both return the same results. I enabled DEBUG, and with the newer binary I actually seem to get two errors. The older binary just reported the 405 error; the newer one reports a 404 (Not Found) and then reports the error as a 405 Method Not Allowed.
[Debug] Sending usage info...
[Info] Searching artifacts...
[Debug] Searching Artifactory using AQL query:
items.find({"$or":[{"$and":[{"repo":"TEST_FILE","path":{"$match":"*"},"name":{"$match":"*"}}]}]}).include("name","repo","path","actual_md5","actual_sha1","size","type","modified","created","property")
[Debug] Sending HTTP POST request to: https://SITE_NAME:8443/artifactory/list/MY_REPO/api/search/aql
[Debug] Sending HTTP GET request to: https://SITE_NAME:8443/artifactory/list/MY_REPO/api/system/version
[Debug] Artifactory response: 404 Not Found
{
  "errors": [
    {
      "status": 404,
      "message": "File not found."
    }
  ]
}
[Error] Artifactory response: 405 Method Not Allowed
Which is still odd, because I'm looking for a file I just uploaded. I even deleted the uploaded file and re-uploaded it to verify that the newer CLI can still upload files. I tried a wildcard search to find all files, but it returned the same error.
I can also see the file where it is supposed to be in the Artifactory web interface, and I can download and delete it there. So Artifactory knows it's there and I have access to it. :-/
The url in your configuration should only contain the base URL of Artifactory, i.e. "SITE_NAME:PORT_NUM/artifactory".
You should provide the repository and directory path in the upload and download commands.
Use the ping command to verify your setup works, then progress from there.
BTW: the reason the upload works but the download fails is coincidence. The upload command uses the upload REST API, which expects a path (which you provided in your config), but the download command first sends a search request that does not expect a path, so it fails.
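For example, a corrected jfrog-cli.conf would look something like this (a sketch reusing the placeholders from the question):

{
  "artifactory": [
    {
      "url": "https://SITE_NAME:PORT_NUM/artifactory/",
      "user": "USERNAME",
      "serverId": "MY_REPO",
      "isDefault": true,
      "apiKey": "API_KEY_FROM_ARTIFACTORY"
    }
  ],
  "Version": "1"
}

Then verify connectivity and retry with the repository name in the path:

jfrog rt ping
jfrog rt dl --recursive "MY_REPO/MY_FOLDER/"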
My institute recently installed a new proxy server for our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet, I found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
At the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(of course, using my user and password)
The second approach was what was suggested by this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, when I test wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my username and password are not OK, but I checked them in my browser and my credentials work just fine.
Any idea what this could be due to?
This problem was solved thanks to a suggestion from a user of the AskUbuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc file with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file:
nano ~/.wgetrc
Step 2 - Record the proxy info in this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
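Wget then picks up the credentials automatically. Since the file stores a plaintext password, restricting its permissions before re-testing is a sensible extra step:

chmod 600 ~/.wgetrc
wget http://www.google.com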
I am trying to set up a mock server using WireMock as a standalone process. I downloaded the JAR file and executed the following command:
java -jar wiremock-standalone-2.23.2.jar --port 0
I had to determine a port dynamically because another program on my machine is already using the default port 8080. It gave me port 55142, but when I tried accessing it in a browser, I got the following error:
HTTP ERROR 403
Problem accessing /__files/. Reason:
Forbidden
Powered by Jetty://
It's probably because you just entered http://localhost:55142, and there are no mappings in the ./mappings directory or files in the ./__files directory (in the same place where your WireMock JAR file is located), so WireMock reports:
2019-06-04 00:10:58.890 Request was not matched as there were no stubs registered:
{
"url" : "/"
...
}
Try calling the __admin endpoint to see whether WireMock is working:
http://localhost:55142/__admin
See the docs for more useful admin endpoints.
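For example, to give the root URL a stub, you can drop a mapping file into ./mappings and restart WireMock (a minimal sketch; the file name root.json is arbitrary):

./mappings/root.json:
{
  "request": {
    "method": "GET",
    "url": "/"
  },
  "response": {
    "status": 200,
    "body": "Hello from WireMock"
  }
}

After the restart, http://localhost:55142/ returns the stubbed body instead of an error.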
I'm trying to install a package on an old Fedora 20 virtual machine.
yum install <the_package_name> results in a failure with an HTTP 403 error:
http://download.fedoraproject.org/<...(truncated)...>/repomd.xml:
[Errno 14] HTTP Error 403 - Forbidden
My web browser can't see anything at http://download.fedoraproject.org/pub/fedora/linux/updates/20, so I realize FC20 is no longer supported (EOL) and its repository URL has changed. So I fixed the baseurl in /etc/yum.repos.d/fedora.repo to look like this:
baseurl=http://archives.fedoraproject.org/<...(truncated)...>
I'm sure the URL is now correct, because I can download repomd.xml using curl or wget, and access it in my web browser...
But yum install <the_package_name> still fails with an HTTP 403 error! It can't access repomd.xml at the correct URL:
http://archives.fedoraproject.org/<...(truncated)...>/repomd.xml:
[Errno 14] HTTP Error 403 - Forbidden
Can you help me overcome this issue and install packages on this old Fedora (FC 20)?
Note 1: I'm working from behind a proxy (not my choice).
Note 2: Upgrading my Fedora 20 to Fedora 21 or 22 is not an option either.
Here are the suggestions (from Etan Reisner) that helped me solve the issue:
Check the proxy configuration in /etc/yum.conf (see the sketch after this list)
Check that all YUM .repo files use the up-to-date Fedora repo URL
Run yum clean metadata to ensure YUM uses the updated .repo file contents
Try yum install <the_package> again
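For reference, the proxy settings in /etc/yum.conf look like this (a sketch with placeholder values; proxy_username and proxy_password are only needed for an authenticating proxy):

[main]
proxy=http://my.proxy.ip:my.port
proxy_username=username
proxy_password=password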
subscription-manager refresh did the trick on a RHEL 7.9 server box.
Created a VPC endpoint and allowed access to the packages, repos, and amazonlinux resources:
{"Version": "03-19-2021",
"Statement": [
{"Sid": "Amazon Linux AMI Repository Access",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [ "arn:aws:s3:::packages.*.amazonaws.com/*", "arn:aws:s3:::repo.*.amazonaws.com/*", "arn:aws:s3:::amazonlinux.*.amazonaws.com/*" ]
}]}
Refer to https://blog.saieva.com/category/aws/
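To attach such a policy to an existing endpoint, something like the following should work (a sketch; the endpoint ID and file name are placeholders):

aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-0123456789abcdef0 \
    --policy-document file://amazon-linux-repo-policy.json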