I'm using Artifactory OSS 4.1.0 and Java 1.8.0_51.
When I try to download one of my local artifacts from the Artifactory web interface, I get this:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'bab1c4e18f6c5edfb65b2503a388dea2fed0deb8'"
} ]
}
But I found this file in my Artifactory data area: ./files/ba/bab1c4e18f6c5edfb65b2503a388dea2fed0deb8, and upon further inspection it is the WAR file I tried to download.
I've come across other people on the web with the same error message, but their issue was with caching external artifacts, and their workaround was to delete the cache.
Does anyone have an idea what's going on and how I can fix the problem? BTW, I did stop and restart our Artifactory server, but with no noticeable difference.
Artifactory doesn't store the binaries under the ./files directory, but under $ARTIFACTORY_HOME/data/filestore.
It looks like you had a symbolic link from the files directory to the filestore directory and this link was deleted.
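If that is the case, recreating the link (with Artifactory stopped) should make the binary resolvable again. A minimal sketch, assuming a default layout under $ARTIFACTORY_HOME/data; check first which directory actually holds the data, and adjust the link direction and service commands to your installation:

# stop Artifactory before touching the data area
sudo service artifactory stop
# recreate the missing symlink (here: a "files" link pointing at the real filestore)
cd $ARTIFACTORY_HOME/data
ln -s filestore files
sudo service artifactory start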
I am trying to download Grafana packages from GitHub releases using our JFrog Artifactory instance.
The GitHub URL is: https://github.com/grafana/k6/releases/download/v0.42.0/k6-v0.42.0-linux-amd64.tar.gz
To achieve this, I created a remote repository of type generic (named grafana-generic) in JFrog Artifactory, pointing the remote URL to https://github.com.
I referred to this solution from Stack Overflow, but it didn't help.
The URL I tried to download the package from is:
https://myrepo/artifactory/grafana-generic/grafana/k6/releases/download/v0.42.0/k6-v0.42.0-linux-amd64.tar.gz
The error I get is:
{
"errors": [
{
"status": 404,
"message": "Item grafana-generic-cache:grafana/k6/releases/download/v0.42.0/k6-v0.42.0-linux-amd64.tar.gz does not exist"
}
]
}
If you want to fetch a VCS Release you can use the Download a VCS Release REST API command
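A rough sketch of what that call could look like, assuming a VCS-type remote repository (here called grafana-vcs, an illustrative name) whose URL points at https://github.com; check the Download a VCS Release documentation for the exact endpoint in your Artifactory version:

curl -u myuser:mypassword -O \
  "https://myrepo/artifactory/api/vcs/downloadRelease/grafana-vcs/grafana/k6?version=v0.42.0"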
I am setting up a Nexus 3 repository as a remote repository in Artifactory.
But when I enter the Nexus 3 repo URL (https://domainname/repository/reponame/) and the necessary credentials for authentication in the admin section, the test fails with:
Connection failed: Error 404.
I have also tried providing the REST URL (http://domainname/service/rest/repository/browse/reponame).
In this case, the connection to the Nexus server is established successfully and I can see the directory structure for the remote repo in the Artifacts section, but I cannot find the artifacts inside and get the following output/error:
{ "errors" : [ {
"status" : 404,
"message" : "Couldn't find item: XXXX:XXXXXXX" } ]
You have to ignore the error while saving; it is caused by a header mismatch.
Repo path should be like:
https://<host>:<port>/repository/reponame
Once you save the repository and try to download, it will work.
The test fails when creating an Artifactory remote repository that points to a hosted Nexus repository, because Artifactory uses a HEAD request to check the remote repository and, for some reason, Nexus returns a 404 for it (while returning a 200 when the same request is sent using the GET method). This behavior does not happen with Nexus group repositories.
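You can see the difference from the command line; a quick sketch using the same placeholders as above:

# HEAD request, which is what Artifactory's test sends - a hosted Nexus repo may answer 404
curl -I https://<host>:<port>/repository/reponame/
# the same URL fetched with GET typically answers 200
curl -s -o /dev/null -w "%{http_code}\n" https://<host>:<port>/repository/reponame/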
Scenario:
Code is in NuGet packages, which get pushed to a local Artifactory repository in Location 1 (DEV/QA). Each code change results in a new version number (i.e. a different package, not overwriting an existing package).
Artifactory in Location 2 (main colo) has a remote repository that replicates these packages from Location 1.
Artifactory instances in Locations 3, 4 and 5 (production colos around the world) have remote repositories that replicate these packages from Location 2.
When code is deployed to production SaaS servers, the packages are pulled down from the Artifactory instances in Locations 3, 4 and 5.
Problem:
Intermittent problem where packages in remote repositories become expired and give a 404 error when you attempt to download them.
{
"errors" : [ {
"status" : 404,
"message" : "Resource has expired"
} ]
}
Request:
Is there some setting I can change that will cause resources to NEVER expire? (We have a cap of 100 versions in the original location to control growth, and the downstream remote repositories are set to delete packages when the origin deletes them.)
I am unable to find anything in the documentation that even hints that this is possible.
As part of our Chef infrastructure I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, bootstrapped it, and it appears as a node in my Chef server.
I have followed the instructions at github - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have been run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears as if the berks-api service isn't able to run. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied @ rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need to explain the setup any more let me know.
It turns out I misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use the new key, so my config for the berks-api now looks like this:
{
"home_path":"/etc/berkshelf/api-server",
"endpoints":[
{
"type":"chef_server",
"options":
{
"url":"https://api.opscode.com/organizations/my-organization",
"client_key":"/etc/berkshelf/api-server/berkshelf.pem",
"client_name":"berkshelf"
}
}
],
"build_interval":5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory first, then copy it into place from within Linux.
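The copy step looked roughly like this (the node hostname and login user are placeholders for my Azure VM):

# upload the key to my home directory on the node
scp berkshelf.pem azureuser@my-berks-node:~/
# then, on the node, copy it into place
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem
# make sure the account the berks-api service runs as can read the file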
Having done this, the service starts and works perfectly.
I followed the upgrade notes to upgrade from Artifactory 2.6.6 to 3.4.2 and reimported the artifacts, but after that, artifact resolution often fails with a 404.
Even previously deployed artifacts cannot be resolved. The deploying Jenkins job runs fine, but when I open the URL from the log, I get this message:
{
"errors" : [ {
"status" : 404,
"message" : "Artifact not found: my/group/myapp/0.9.13-SNAPSHOT/myapp-0.9.13-SNAPSHOT.pom"
} ]
}
I also tried to export/import the whole system after upgrade, but nothing changed.
Does anyone know how to fix this?
I found out that the issue is related to the unique snapshot behaviour of Maven 3. In fact, there is no myapp-0.9.13-SNAPSHOT.pom, only one with a timestamped version in its name. The REST API does not cover that, and not all Jenkins plugins support it.
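For reference, with unique snapshots the repository only contains timestamped files, and the mapping from -SNAPSHOT to the latest timestamp lives in the maven-metadata.xml next to them. Something like the following shows what actually exists (the repository name libs-snapshot-local and the timestamped file name are just illustrations):

# list the snapshot metadata for the version directory
curl -u user:password "https://myserver/artifactory/libs-snapshot-local/my/group/myapp/0.9.13-SNAPSHOT/maven-metadata.xml"
# the real artifact then has a name such as
#   myapp-0.9.13-20150101.120000-1.pom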