Binary provider has no content - Artifactory

I suddenly had to take over responsibility for Artifactory (the responsible employee left). I've never worked with it before, and I've spent the day trying to learn the product and figure things out.
Problem Context:
Artifactory is deployed on a VM in Azure (Ubuntu); the installation (OSS 6.0.2 rev 60002900) lives on a mounted disk.
The disk got full, so the application crashed.
I increased the disk size, repartitioned and re-mounted it, and Artifactory came up again - but now I'm getting the following error message in the browser:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'b8135c33f045ab2cf45ae4d256148d66373a0c89'"
} ]
}
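For anyone hitting the same disk-full situation, the resize itself went roughly along these lines (device and partition names below are placeholders, not the real ones - check with lsblk first):
# grow the partition that holds the Artifactory data after enlarging the disk in Azure
sudo growpart /dev/sdc 1        # extend partition 1 to use the new space
sudo resize2fs /dev/sdc1        # grow the ext4 filesystem to fill the partition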
I have searched a bit and found various possible solutions.
This one: Artifactory has lost track of local artifacts
which seems the most promising, since the context of our issue is similar, but I don't see those paths - i.e. I do see the filestore and everything in it, but not the other paths/files mentioned in that conversation.
I also found this: https://www.jfrog.com/jira/browse/RTFACT-6324 but again I'm not finding those paths in our deployment.
To the best of my understanding, it seems that if I somehow "reinstall" the filestore and/or database, things should work?
Is there a clear manual, or something basic I'm missing? I'd like to avoid having to install everything from scratch.
Any help would be most appreciated, as our entire dev org is now more or less blocked and trying to work around this locally until it is resolved.

I am a JFrog Support Engineer and we saw your issue; we will contact you through other channels to help you resolve it.
Edit:
After reaching out, we found that this issue was caused by a specific file that was corrupted/missing from your filestore; after deleting this file and re-pulling it, the issue was solved.
To further elaborate on this issue and what can cause it:
Artifactory implements checksum-based storage. Every file deployed to or cached in Artifactory is renamed to its checksum value and saved in the filestore, and Artifactory creates a pointer in the database containing the name, checksum and some other properties of the file. This makes storage more efficient, since each file is saved only once in the filestore but can have multiple pointers in the database (in various locations inside Artifactory - even in different repositories or archives).
When a file gets corrupted in the filestore or deleted (without being deleted from Artifactory), this issue can manifest: there is still a pointer to the file in Artifactory's database, but the binary itself no longer exists in the filestore.
It can have various causes, such as connection issues with NFS/S3/other types of storage, files being corrupted or deleted in the filestore, etc.
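A quick way to confirm whether a specific binary really is missing is to look it up in the filestore directly: in a default install the filestore lives under $ARTIFACTORY_HOME/data/filestore, and each binary is stored in a folder named after the first two characters of its SHA-1. The snippet below is only a sketch of that layout; the ARTIFACTORY_HOME path is an assumption, so point it at wherever your mounted disk holds the installation.
# SHA-1 taken from the error message in the browser
SHA1=b8135c33f045ab2cf45ae4d256148d66373a0c89
ARTIFACTORY_HOME=/var/opt/jfrog/artifactory   # assumption - adjust to your install
FILE="$ARTIFACTORY_HOME/data/filestore/${SHA1:0:2}/$SHA1"
if [ -f "$FILE" ]; then
  # the binary exists - recompute its checksum to rule out corruption
  sha1sum "$FILE"
else
  echo "binary $SHA1 is missing from the filestore"
fi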
Another edit:
You can also use a user plugin called "filestoreIntegrity" that goes through all the file pointers in your database and checks whether they exist in the filestore. This way you can find out whether there are corrupted or missing files and fix the issue.

Related

How to check all artifacts in Artifactory if their files exist on disk?

We are running a local installation of Artifactory Pro which contains around 1M artifacts. Recently, we tried to migrate from the embedded Derby DB to Postgres and switched back to Derby because of errors occurring during the migration.
After that, users reported missing files, mostly maven-metadata.xml but also at least one pom.xml. The files are missing on the filesystem.
The only way I can think of is to query the Artifactory API for all files, try to download each one, and check whether the download succeeds. Is there a better way to check whether all artifacts in Artifactory exist on the filesystem?
Welcome, Thomas! 👋🏻
Although that kind of error doesn't happen in normal operation, migrating a large number of artifacts back and forth can sometimes lead to such problems.
We have a user plugin to find them, so check it out - it looks like exactly what you need.
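If running the plugin isn't an option, a rough alternative along the lines you describe is to walk a repository with the REST file-list API and issue a HEAD request per file. The server URL, repository name and credentials below are placeholders, and this is only a sketch:
BASE=https://artifactory.example.com/artifactory   # placeholder - your server
REPO=libs-release-local                             # placeholder - repository to scan
# list every file in the repository, then HEAD each one to see if its binary can be served
curl -s -u admin:password "$BASE/api/storage/$REPO?list&deep=1&listFolders=0" \
  | jq -r '.files[].uri' \
  | while read -r path; do
      code=$(curl -s -o /dev/null -w '%{http_code}' -I -u admin:password "$BASE/$REPO$path")
      [ "$code" = "200" ] || echo "PROBLEM ($code): $REPO$path"
    done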

Unable to copy to Amazon S3 using Full Administrator access and Full S3 access

I had a perfectly working instance of a WP-CLI WordPress plugin that uploads files to S3 using the AmazonS3FullAccess policy. I migrated servers, and the copy started failing with "Failed to copy or write".
I even added full administrator access to the IAM policy just to see what happens when there are no restrictions, and the copy still fails. Any idea what might be wrong?
Things I have tried: ensuring the time on the new server is correct (via NTPD synchronization); cross-checking the environment (PHP version, etc.) - the application files are exactly the same. I also used the hosts-file method to check the previous server, and it is still working well.
Solved the problem by creating new access keys. For some reason, it seems that migrating a server will make the old access keys stop working? Ah, well.
P.S. I also downgraded the policies right back, to only what the application needs.
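If someone hits the same thing, a quick way to rule the keys in or out before touching policies is to exercise them directly with the AWS CLI; the bucket name below is a placeholder, so use the one the plugin actually writes to:
aws sts get-caller-identity                         # which identity do the configured keys resolve to?
echo test > /tmp/s3-probe.txt
aws s3 cp /tmp/s3-probe.txt s3://your-bucket-name/  # placeholder bucket
aws s3 rm s3://your-bucket-name/s3-probe.txt        # clean up the probe object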

Alfresco Community v5.1 does not start and loads index.html forever

For some unexplained reason, I haven't been able to use Alfresco since yesterday.
Let me tell you how it happens.
First of all, I didn't change any conf file or anything like that.
I started the Tomcat and PostgreSQL services, and after that I tried to load "localhost:8080/share", but it kept loading forever.
I checked the log files, but that was no use either. There are no error messages, nothing unusual.
After that, I deleted the alfresco and share folders inside "webapps", just in case, but that failed too.
Finally, I can't stop these services from the service manager, because I am at work and have no access privileges.
My main concern is that I don't even know the cause of this issue, so I don't even know how to ask for help.
You don't have permission to delete the folders (share + alfresco) or to stop the services, and without stopping the services you can't completely delete the files from the alfresco and share folders.
You need to find out whether the problem is in Alfresco Share, the Alfresco Repo, the database, or Tomcat.
Check Tomcat
Open http://localhost:8080 and check whether Tomcat is running.
Check Database
Start the database service from the service manager and connect with the PgAdmin tool to check whether the database service is running.
Check Repo
http://localhost:8080/alfresco - it should display some basic information about the Alfresco Repo; otherwise, the Alfresco Repo itself has clearly failed.
Check Share
http://localhost:8080/share - it should display the login page if everything is working.
Logs
Check and share the alfresco.log, share.log, solr.log, catalina, tomcatstdout and tomcatstderror log files. Some error information will definitely have been recorded in one of these log files.
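If you have a shell on the box (the curl checks also work from PowerShell with curl.exe), the same round of checks can be run in one go; the log paths below assume a standard Tomcat bundle layout and are assumptions to adjust:
# is Tomcat answering at all?
curl -s -o /dev/null -w 'tomcat:   %{http_code}\n' http://localhost:8080/
# repo and share status codes
curl -s -o /dev/null -w 'alfresco: %{http_code}\n' http://localhost:8080/alfresco
curl -s -o /dev/null -w 'share:    %{http_code}\n' http://localhost:8080/share
# look for the first stack trace in the usual logs (paths are assumptions - adjust to your install)
tail -n 200 tomcat/logs/catalina.out alfresco.log share.log solr.log 2>/dev/null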

What is the storage_full directory? Can I delete it or clean it up?

My company has a Nexus installation that has grown to a rather huge size on disk, so I am in the process of cleaning it up. While doing so, I found a directory called storage_full in the sonatype-work/nexus directory, adjacent to the storage folder, which is not listed in the Nexus directories documentation. Google finds nothing either. The folder is rather large and seems similar to storage in structure.
Questions:
What is the use of this directory?
Can I delete it, or clean it up? I'd rather not use trial and error to find out if it's important as any downtime of the service is unacceptable.
The Nexus version is 2.11.2-03, but AFAIK it has been updated repeatedly and has been running in the same work directory since about 2011 - could this folder be left over from an older version?
Nexus has never had a directory called "storage_full" as part of its setup, so I'm not sure what this is. Check to see if the local storage location of any of your repositories has been overridden to point to that location.
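One way to check that without trial and error is to grep the repository configuration; in Nexus 2 the repository definitions live, I believe, in conf/nexus.xml under the work directory, so something along these lines should show whether any repository points at that folder (paths assume the default work directory):
# does any repository configuration reference the mystery directory?
grep -n "storage_full" sonatype-work/nexus/conf/nexus.xml
# which repositories override their local storage location at all?
grep -n -A1 "<localStorage>" sonatype-work/nexus/conf/nexus.xml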

Error: ENOTEMPTY, directory not empty in Meteorjs

Error: ENOTEMPTY, directory not empty '/path/disk/folder/.meteor/local/build-garbage-qb4wp0/programs/ctl/packages'
I have already looked around this site for this problem, learned what the possible causes might be, and tried the suggested solutions; all I can ever manage is to reset the project.
The problem is, whenever the project is reset, the first run goes smoothly and no errors occur, but after a while or after changes to my project (error checking, adding packages, or changing some stuff...) that error appears again.
I have no idea how to fix this problem, and my temporary workaround is to create another Meteor project, copy all my project files into it, and reinstall all the packages I used.
Badly need help.
I had this error when running Meteor.js on a Vagrant machine. For additional background: I had created a symbolic link for MongoDB's db folder, since I had faced a locking issue (the solution I used for that was described elsewhere).
Following that, my setup was as follows:
/vagrant/.meteor/local/db -> /home/vagrant/my_project_db (symbolic link)
That solved the problem I had with MongoDB's lock, but every time any source file changed, Meteor would crash with the same exception you faced. Deleting files didn't help, and neither did meteor reset.
Fortunately enough it was remedied by changing the folder structure to this:
/vagrant/.meteor/local -> /home/vagrant/my_project_local (symbolic link)
What I did was as simple as moving Meteor.js's local folder out of the shared folder and only referencing it with a symbolic link:
cd /vagrant/.meteor
mv local /home/vagrant/my_project_local
ln -s /home/vagrant/my_project_local local
In the end all is good. The error is long gone and the feedback cycle is much shorter.
Try deleting the folder it says has issues. I think it's trying to clean them up but hits an unhandled situation (the folder has files in it and it's using a plain rm instead of a recursive one).
Remove
/media/Meteor/hash/.meteor/local/build-garbage-**
(anything with build-garbage in the name). Also, you might want to check whether your permissions are right - this might have been caused initially by incorrectly set permissions; maybe you ran as sudo once? If you're on a Mac, you could use Repair Disk Permissions.
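Concretely, something along these lines should do it (the project path is the one from the error message above, so substitute your own):
# remove the leftover build-garbage directories Meteor failed to clean up
rm -rf /path/disk/folder/.meteor/local/build-garbage-*
# if an earlier sudo/root run left files owned by root, take ownership back (assumption about the cause)
sudo chown -R "$USER" /path/disk/folder/.meteor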
