pyinotify for a single file and related error

I know pyinotify can be used to watch for events for all files within a specific directory (recursively). How can I watch for events (say a create event) for a single file only? Basically, I need my Python code to perform some action as soon as it detects that a file with a specific extension (say *.txt) has been created.
I have tried looking this up online but couldn't find any useful documentation on how to use pyinotify to monitor events for a single file, as opposed to all files/sub-dirs within a directory.
For example, I am trying to watch for an IN_CREATE event on the file /tmp/test.txt, but when I run my pyinotify script I get the following error:
[Pyinotify ERROR] add_watch: cannot watch /tmp/test.txt (WD=-1)
One of the articles online indicated this could be due to the limit on max_user_watches, so I tried to increase that limit (fs.inotify.max_user_watches), but no luck.
Any thoughts on why I would be getting this error, or does anyone already know the details behind it?
Thanks.

Does /tmp/test.txt already exist?
If not, that explains the error: inotify cannot add a watch on a path that does not exist yet. In that case you need to monitor the /tmp directory itself (recursively, if you need subdirectories) and filter the output, e.g. with an ExcludeFilter, reacting only when the file you care about is created. A sketch follows below.
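A minimal sketch of that approach (WATCH_DIR and TARGET_EXT are placeholders; adapt to your own path and extension):

import pyinotify

WATCH_DIR = '/tmp'      # directory to watch; the path must already exist
TARGET_EXT = '.txt'     # extension we care about

class Handler(pyinotify.ProcessEvent):
    def process_IN_CREATE(self, event):
        # event.pathname is the full path of the newly created file
        if event.pathname.endswith(TARGET_EXT):
            print('created:', event.pathname)
            # ... perform your action here ...

wm = pyinotify.WatchManager()
wm.add_watch(WATCH_DIR, pyinotify.IN_CREATE)
notifier = pyinotify.Notifier(wm, Handler())
notifier.loop()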

Related

How can you get a second copy of a running log file without deleting it?

I am trying to troubleshoot an issue in an application running on a flavor of UNIX.
The default logging level puts out a reasonable amount of messages and does not affect performance.
But when there is an issue I need to change the logging level to verbose, which produces thousands of lines a second and affects performance.
Deleting the trace log file would crash the application.
The code is running in production, so a performance hit is not acceptable; being able to change the logging level back as quickly as possible helps avoid one.
How can one create a second instance of the log for just the second or two that the problem is reproduced?
This would save having to copy the whole large file and then edit out the log entries that are not relevant to the problem at hand.
I have answered my own question because I have found this tip to be very useful at times and hope it helps others.
The steps below show how to quickly get a small section of the log to a separate file.
1) Navigate to the directory with the log file that you want to clear.
cd /logs
2) At the command prompt, enter the following line (include the ">"):
> trace.log
This truncates trace.log to zero length without replacing the file itself, so the application's open file handle keeps writing to the same file.
3) Now quickly reproduce the error.
4) Quickly go back to the command line and copy the trace.log file to a new file.
cp trace.log snippet_of_trace.log
5) Now you have a much smaller log to analyze.
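If you need to do this repeatedly, the same steps can be scripted. A minimal sketch in Python (the paths here are assumptions; adjust to your environment):

import shutil

LOG = '/logs/trace.log'                 # log being written by the application
SNIPPET = '/logs/snippet_of_trace.log'  # where to save the capture

# Truncate the log in place (equivalent to "> trace.log" in the shell);
# the writing application keeps its open file handle.
open(LOG, 'w').close()

input('Reproduce the error now, then press Enter... ')

# Copy only what was logged since the truncation.
shutil.copyfile(LOG, SNIPPET)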

Execute Command within another directory

I am trying to execute a command from a different directory but I keep getting a "No such file or directory" response. I have been stuck for about 6 hours now and cannot figure it out. I am very new, so please take it easy.
I created a directory (Learning) with a subdirectory (fileAsst), and two subdirectories (Earth, Galaxy) within "fileAsst". I am trying to execute a separate file to check whether I have built the desired directory structure correctly.
I type in ~unix/bin/fileAsst-1 to try and execute to my directory. But it just is not working. Please help.
Learning/fileAsst/Earth/zaphod.txt
Learning/fileAsst/Galaxy/trillian.txt
~unix/bin/fileAsst-1 (what I'm trying to execute to check my Learning directory)
My guess, based upon your use of the phrase "execute to my directory", is that you need to read something like this.
I cannot be sure, but it appears as though you want to change your current directory to some other directory using the cd command, and then, having set your current working directory appropriately, execute some command.
But a guess is only a guess, and it's somewhat dangerous to give an answer when unfamiliar nouns and verbs are used in the question.
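If that guess is right, the sequence would look something like this (assuming the checker should be run from the directory that contains Learning, e.g. your home directory):
cd ~
~unix/bin/fileAsst-1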

Acting on changes to a log file in R as they happen

Suppose a log file is being written to disk with one extra line appended to it every so often (by a process I have no control over).
I would like to know a clean way to have an R program "watch" the log file, and process a new line when it is written to the log file.
Any advice would be much appreciated.
You can use file.info to get the modification time of a file; just check every so often and take action if the modification time changes. Keeping track of how many lines have already been read will enable you to use scan or read.table to read only the new lines.
You could also delete or move the log file after it is read by your program. The external program will then create a new log file, I assume. Using file.exists you can check if the file has been recreated, and read it when needed. You then add the new data to the already existing data.
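A minimal sketch of the first (polling) approach, written here in Python for illustration; the same logic maps directly onto file.info and scan in R (the path and poll interval are assumptions):

import os, time

LOG = '/var/log/app.log'   # hypothetical path of the log being appended to
lines_read = 0
last_mtime = 0.0

while True:
    time.sleep(1)                       # poll interval
    mtime = os.stat(LOG).st_mtime
    if mtime == last_mtime:
        continue                        # file unchanged since last check
    last_mtime = mtime
    with open(LOG) as f:
        lines = f.readlines()
    for line in lines[lines_read:]:     # process only the newly appended lines
        print('new line:', line.rstrip())
    lines_read = len(lines)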
I would move the log file to an archive subfolder and read the logfiles as they are created.

.goutputstream-XXXXX - possible to relocate?

I've been trying to create a union file system for a college project. One of the features that differentiates it from unionfs is that there are no copy-ups. This means that if a file is located in a certain branch, it will remain there even if it is written to.
But my current problem is that .goutputstream-XXXXX files are created, renamed, and deleted whenever a write operation occurs. This is actually OK if the file being written to is in the highest-priority branch (i.e. the default branch where files can be created), but it makes my kernel crash if I try to write to a file in a lower branch.
How do I deal with this? How can I rig it so that all .goutputstream-XXXXX files are written to only one location? These .goutputstream-XXXXX files seem to be intricately connected to the files they correspond to, and seem to work only in the same directory as the file being written to.
I also noticed that .goutputstream-XXXXX files appear when a directory is read. What are they for, anyway?
There is a bug report on the Ubuntu Launchpad in which the creation of .goutputstream-xxxxx files is discussed:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/984785
From what I see, these files are created when shutting down without a preceding logout, but several other programs may produce them as well, like evince or maybe gedit.
Maybe lightdm has something to do with the creation of these files.
Which distribution do you use? Maybe changing the distribution would help.
.goutputstream-XXXXX files are created by gedit, and there is no simple way (via menu or settings) to relocate them.

cleartool update error in Solaris Unix

I am working on a view created from the main code repository on a Solaris server. I have modified part of the code in my view, and now I wish to update my view to pick up the latest code from the repository. However, when I do
cleartool update .
from the current directory to update all the files under it, some (not all) of the files do not get updated, and the message I get is
Keeping hijacked object <filePath> - base no longer known.
I am very sure that I have not modified the directory structure in my view, nor has it been modified in the server repository. One hack I discovered is to move the files that could not be updated to a different filename (essentially meaning that files with the original filename no longer exist in my view) and then run the update command. But I do not want to work this out one by one for all the files, and it also means I will have to perform the merge myself.
Has someone encountered this problem before? Any advice will be highly appreciated.
Thanks in advance.
You should try a "cleartool update -overwrite" (see cleartool update), as it should force the update of all files, hijacked or not.
But this message, according to the IBM technote swg1PK94061, is the result of:
When you rename a directory in a snapshot view, updating the view will cause files in the renamed directory to become hijacked.
Problem conclusion
Closing this APAR as No Plans To Fix (NPTF) because of:
(a) the simple workaround of deleting the local copy of renamed directories, which mitigates the snapshot view update problem, and
(b) this issue's low priority relative to higher-impact defects.
So simply delete (or move) the directory you have renamed, relaunch your update, and said directory (and its updated content) will be restored.
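For example, assuming the renamed directory is myDir (a hypothetical name), from the root of the snapshot view:
mv myDir myDir.bak
cleartool update .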
Thanks for your comment VonC. I did check out the link you mentioned, but I did not find it very useful, as I had not renamed any directory. After spending the whole day yesterday, I figured out that I had previously modified some of the files without checking them out first: since they were not checked out they were in read-only mode, so I had modified them forcefully, which caused them to become hijacked. When I then tried to update my view to pick up all the modifications in the repository, it was unable to merge my modified files with those on the server, because the files had been modified without being checked out; cleartool update therefore believed the files were unmodified when in fact they were. That was the fuss!! :)
