Error: ENOTEMPTY, directory not empty '/path/disk/folder/.meteor/local/build-garbage-qb4wp0/programs/ctl/packages'
I have already looked over this site for this problem, so I know some of its likely causes, and I have tried the suggested solutions. I can always reset the project.
The problem is that whenever the project is reset, the first run goes smoothly with no errors, but after a while, or after changes to my project such as fixing errors, adding packages, or changing some files, the error occurs again.
I have no idea how to fix this problem, and my temporary workaround is to create another Meteor project, copy over all my project files, and reinstall all the packages I used.
I badly need help.
I had this error when running Meteor.js on a Vagrant machine. For additional background: I had created a symbolic link for MongoDB's db folder, since I had faced a locking issue (the solution I used for that is described elsewhere).
Following that, my setup was as follows:
/vagrant/.meteor/local/db -> /home/vagrant/my_project_db (symbolic link)
That solved the problem I had with MongoDB's lock, but every time any source file changed, Meteor would crash with the same exception that you faced. Deleting files didn't help, and neither did meteor reset.
Fortunately enough it was remedied by changing the folder structure to this:
/vagrant/.meteor/local -> /home/vagrant/my_project_local (symbolic link)
What I did was as simple as moving Meteor's local folder out of the shared folder and referencing it with a symbolic link:
cd /vagrant/.meteor
mv local /home/vagrant/my_project_local
ln -s /home/vagrant/my_project_local local
In the end all is good. The error is long gone and the feedback cycle is much shorter.
Try deleting the folders it reports as problematic. I think it is trying to clean them up, but there is an unhandled situation (the directory still has files in it, and it is using a plain rm instead of a recursive one).
Remove
/media/Meteor/hash/.meteor/local/build-garbage-**
(Anything with build-garbage in the name.) Also, you might want to check whether your permissions are right; this might have been caused initially by incorrectly set permissions. Maybe you ran as sudo once? If you're on a Mac, you could use Repair Disk Permissions.
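If there are several of these leftover directories, a short script can clear them all at once. This is just a sketch, not part of Meteor itself; the .meteor/local path you pass in is an assumption you should adjust to your project:

```python
import glob
import os
import shutil

def remove_build_garbage(local_dir: str) -> list:
    """Recursively delete every build-garbage-* directory under .meteor/local."""
    removed = []
    for path in glob.glob(os.path.join(local_dir, "build-garbage-*")):
        shutil.rmtree(path)  # recursive delete, unlike a plain rm on a non-empty dir
        removed.append(path)
    return removed
```

Calling `remove_build_garbage("/path/disk/folder/.meteor/local")` would mirror the manual cleanup described above.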
I suddenly had to take over responsibility for Artifactory (the responsible employee left). I have never worked with it before, and I have spent the day trying to learn the product and figure things out.
Problem Context:
Artifactory is deployed on a VM in Azure (Ubuntu); the mounted disk has Artifactory deployed on it (OSS 6.0.2 rev 60002900).
The disk got full, so the application crashed.
I increased the disk size, repartitioned and re-mounted, and Artifactory came up again, but now I am getting the following error message in the browser:
{
"errors" : [ {
"status" : 500,
"message" : "Could not process download request: Binary provider has no content for 'b8135c33f045ab2cf45ae4d256148d66373a0c89'"
} ]
}
I have searched a bit and found various possible solutions.
This one: Artifactory has lost track of local artifacts
seems the most promising, since the context of our issue is similar, but I don't see those paths; i.e. I do see the filestore and everything in it, but not the other paths/files mentioned in the conversation.
I also found this: https://www.jfrog.com/jira/browse/RTFACT-6324 but again not finding the paths in our deployment.
To the best of my understanding, it seems that if I somehow "reinstall" the filestore and/or the database, things should work?
Is there a clear manual, or something basic I'm missing? I'd like to avoid having to install everything from scratch.
Any help would be most appreciated, as our entire dev org is now more or less blocked and trying to work around this locally until it is resolved.
I am a JFrog Support Engineer. We saw your issue, and we will contact you on other channels to help you resolve it.
Edit:
After reaching out, we found that this issue was caused by a specific file that was corrupted/missing from your filestore, and after deleting this file and re-pulling it the issue was solved.
To further elaborate on this issue and what can cause it:
Artifactory implements checksum-based storage. All files deployed to or cached in Artifactory are renamed to their checksum value and saved in the filestore, and Artifactory creates a pointer in the database containing the name, checksum, and some other properties of the file. This allows for more efficient storage, since each file is saved only once in the filestore but can have multiple pointers in the database (in various locations inside Artifactory, even different repositories or archives).
When a file in the filestore gets corrupted or is deleted (without deleting it from Artifactory), this issue can manifest: there is still a pointer to the file in Artifactory's database, but the binary itself no longer exists in the filestore.
This can have various causes, such as connection issues with NFS/S3/other storage types, or files being corrupted or deleted from the filestore.
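As an illustration only (this is not Artifactory's actual code), the checksum-based storage idea can be sketched in a few lines of Python; the function name and the in-memory dict standing in for the database are hypothetical:

```python
import hashlib
import os

def store_blob(filestore: str, data: bytes, db: dict, logical_name: str) -> str:
    """Save a blob under its SHA-1 checksum; record a pointer in the 'database'."""
    checksum = hashlib.sha1(data).hexdigest()
    path = os.path.join(filestore, checksum)
    if not os.path.exists(path):  # each unique binary is stored only once
        with open(path, "wb") as f:
            f.write(data)
    db[logical_name] = checksum   # many logical names may point at one binary
    return checksum
```

Two artifacts with identical content end up as a single file on disk, with two database pointers, which is why deleting the binary behind one pointer can break downloads for every name that references it.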
Another edit:
You can also use a user plugin called "filestoreIntegrity" that goes through all the pointers to files in your database and checks whether they exist in the filestore. This way you can find corrupted or missing files and fix the issue.
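The idea behind such an integrity check can be sketched like this (a hypothetical stand-in for the plugin, again using an in-memory dict as the "database" of pointers):

```python
import hashlib
import os

def find_broken_pointers(filestore: str, db: dict) -> list:
    """Return logical names whose binary is missing or corrupted in the filestore."""
    broken = []
    for logical_name, checksum in db.items():
        path = os.path.join(filestore, checksum)
        if not os.path.exists(path):
            broken.append(logical_name)        # pointer exists, binary is gone
            continue
        with open(path, "rb") as f:
            if hashlib.sha1(f.read()).hexdigest() != checksum:
                broken.append(logical_name)    # binary exists but is corrupted
    return broken
```

Any name this returns corresponds to a "Binary provider has no content" style failure: the database still promises a binary that the filestore cannot deliver.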
I have a question. I accidentally removed my .vimrc file, but MacVim is still open and all the settings are still in effect. Is there a way to recover it?
I've never used it myself, but you could give the :mkexrc command a try.
:mkexrc

:mk[exrc] [file]    Write current key mappings and changed options to
                    [file] (default ".exrc" in the current directory),
                    unless it already exists.
If you use plugins, you might want to look at your plugin manager's documentation to see if you can get a list of loaded plugins, which would get you halfway to recreating your plugin list.
If your question is how to restore a perfect copy of .vimrc using only the running vim, I do not know the answer. If your question is how to restore most of your lost .vimrc, please consider the following options.
But first, look in the trash :-) (quite obvious, so it is not the case, is it?)
There may also be a .gvimrc in your home directory. Usually people share the same settings between the terminal and graphical versions of MacVim, with maybe some minor exceptions, like the size of the window. So if you have a .gvimrc, you can most probably restore your .vimrc from it.
The other option is examining your command history using the q: command. If you tried commands before you put them in .vimrc, and the history is long enough, you can copy and paste them and thus restore your .vimrc.
I am watching a directory recursively using QFileSystemWatcher, and I am not able to rename/delete the parent directory, either programmatically or manually, while its subdirectories are being watched.
When trying to rename it manually through the system, I get a message box saying "The action cannot be completed because the folder / a file in it is opened in another program", and renaming it programmatically fails.
I found these similar bug reports, but no resolution:
http://qt-project.org/forums/viewthread/10530
https://bugreports.qt-project.org/browse/QTBUG-7905
I am not watching . and .. as mentioned in the above link, but the directory is still locked.
For the programmatic renaming, I tried a workaround:
1. Remove all the subdirectory paths from watcher before renaming the parent.
2. Rename parent.
3. Add subdirectory paths again.
But my program fails at the first step: QFileSystemWatcher::removePath() returns false when trying to remove the subdirectory path, and QFileSystemWatcher::directories() still shows that directory among the watched paths. Same as posted here: https://bugreports.qt-project.org/browse/QTBUG-10846
Since step 1 fails, step 2 also fails, and I cannot rename the parent directory.
I am using Qt5.2.1 and Windows 7.
Kindly help me with a resolution.
This is a bug in QFileSystemWatcher as discussed here
After days of trying, I was finally able to solve my problem by using the Win32 API to watch directories on the Windows platform. I wrote a blog post on how to use the Win32 API to monitor directory changes, and I would like to share the link, as it may help others who land here looking for a solution to the same problem.
Win32 API to monitor Directory Changes
I'm trying Fossil for the first time, and messed it up within minutes. I created a repository, then apparently ran commands in the wrong folders, etc., and eventually deleted the test repository in order to restart. (Somewhere I had read that Fossil was "self-contained", so I thought deleting a repository file would be OK. What is the correct way to delete a Fossil repository?)
Now, with almost every command I try (including "all rebuild"), I get the error "not a valid repository" with the deleted repository's name.
What now?
According to this post:
The "not a valid repository" error only arises
when Fossil tries to measure the size of the repository file and sees that
either the file does not exist or else that the size of the file is less
than 1024 bytes. It does this by calling stat() on the file and looking at
the stat.st_size field.
it seems likely that you have a missing or truncated Fossil file. Make sure you've actually deleted the repository file, and that your filesystem has actually released the file handles. Fossil stores some repository information in ~/.fossil, so you may need to remove that too:
rm ~/.fossil
In egregious cases, you may want to reboot after deleting this file, just to be sure you're working with a clean slate.
If you're still having problems, try creating a new repository file in a different directory. For example:
cd /tmp
fossil init foo.fsl
fossil open foo.fsl
fossil close
If all that goes smoothly, you'll have to hunt down whatever remnants of the repository are lurking. As long as the file handles are closed, there's no reason you shouldn't be able to delete foo.fsl (or whatever) and call it good.
I have just experienced the exact same problem on Windows, and I too seem to have found a solution. Here is what I did; I cannot guarantee that it is a universal solution or even a good one. In:
C:\Users\mywindowsusername\AppData\Local
There was a file named _fossil and a directory/folder named VirtualStore. I deleted both. This seems to have removed all traces of the repository. Note that the repository was still in the "open" state, as with your case.
Edit: After experimenting further, it appears that VirtualStore is a temporary directory that disappears after committing (a .fossil file will then appear inside the targeted directory).
My mistake was to create a repository at the root and clone it: Fossil proceeded to clone the entire C: drive. Probably a common newbie mistake.
I am working on a view created from the main code repository on a Solaris server. I have modified a part of the code on my view and now I wish to update the code in my view to have the latest code from the repository. However when I do
cleartool update .
from the current directory to update all the files in it, some (not all) of the files do not get updated, and the message I get is
Keeping hijacked object <filePath> - base no longer known.
I am very sure that I have not modified the directory structure in my view, nor has it been modified in the server repository. One hack I discovered is to move the files that could not be updated to a different filename (so that files with the original filename no longer exist in my view) and then run the update command. But I do not want to work through this one by one for all the files, and it also means I would have to perform the merge myself.
Has someone encountered this problem before? Any advice will be highly appreciated.
Thanks in advance.
You should try a "cleartool update -overwrite" (see cleartool update), as it should force the update of all files, hijacked or not.
But this message, according to the IBM technote swg1PK94061, is the result of:
When you rename a directory in a snapshot view, updating the view will cause files in the renamed directory to become hijacked.
Problem conclusion
Closing this APAR as No Plans To Fix (NPTF) because:
(a) due to the simple workaround of deleting the local copy of renamed directories, which mitigates the snapshot view update problem, and
(b) because of this issue's low relative priority compared to higher-impact defects.
So simply delete (or move) the directory you have renamed, relaunch your update, and said directory (and its updated content) will be restored.
Thanks for your comment, VonC. I did check the link you mentioned, but I did not find it very useful, as I had not renamed any directory. After spending the whole day yesterday, I figured out that I had previously modified some of the files without checking them out first. Since they were not checked out, they were read-only, so I had forced the modifications through. This caused those files to become hijacked. Hence, when I tried to update my view to pick up all the modifications from the repository, it was unable to merge my modified files with those on the server: because the files were changed without being checked out, cleartool update believed they were unmodified when in fact they were. That was the fuss!! :)