A super basic udev rule that worked in the past doesn't run anymore - udev

I have got this rule at /usr/lib/udev/rules.d/99-test.rules:
ACTION=="add", SUBSYSTEM=="input", RUN+="touch /tmp/test"
It used to trigger the RUN command, but now it does nothing: it doesn't even create the file when I plug in a USB mouse. A forum suggested creating a file this way to test whether the rule runs at all.
cat /tmp/test
cat: /tmp/test: No such file or directory
What can I do? I have tried
sudo udevadm control --log-priority=debug
journalctl -f
but it doesn't print anything at all about my rule

Yeah, it just randomly works now. Good luck with your own rules.
Maybe try sudo udevadm control --reload-rules
Touching files didn't work, by the way; journalctl -f says touch exits with code 1, even when touching a file in a directory with permissive rights.
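A debugging sequence that may narrow this down (the device node is an example; note that udevadm test only simulates the event and prints the RUN list, it does not execute it):
sudo udevadm control --reload-rules
sudo udevadm test $(udevadm info -q path /dev/input/mice) 2>&1 | grep 99-test
To check whether RUN actually executes, a rule that logs to the journal sidesteps any sandboxing questions around /tmp, since systemd-udevd runs in its own mount namespace on modern systems (%k expands to the kernel device name):
ACTION=="add", SUBSYSTEM=="input", RUN+="/usr/bin/logger 99-test fired for %k"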

Related

How to recover deleted iPython Notebooks

I have iPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in trash (I don't think iPy Notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks!
This is a bit of additional info on the answer by Thuener.
I did the following to recover my deleted .ipynb file.
The cache is in ~/.cache/chromium/Default/Cache/ (I use Chromium).
Used grep in binary-as-text mode: grep -a 'import math' (replace the search string with a keyword specific to your code).
Edit the matching binary file in vim (it doesn't open in gedit).
The .ipynb file should start with '{ "cells":' and end with '"nbformat": 4, "nbformat_minor": 2}'.
Remove everything outside these start and end points.
Rename the file with an .ipynb extension and open it in your jupyter-notebook; it works.
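A sketch of those steps as shell commands (the cache path, search string, and entry name are examples; Chromium cache entries have opaque names):
cd ~/.cache/chromium/Default/Cache/
grep -la 'import math' *      # -a searches binary files as text, -l lists matching entries
vim f_0004a2                  # hypothetical matching entry; delete everything outside the JSON
mv f_0004a2 recovered.ipynb   # then open it in Jupyter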
The "delete" functionality now sends the file to OS trash rather than permanently deleting it, see this PR: https://github.com/jupyter/notebook/pull/1968. So you can just open your Trash (wherever that is on your system) and restore it.
I think the easiest way (until the developers handle this issue) to retrieve your lost code is to write your entire IPython history to a file.
Then look for the code by the date you created your last script; obviously, it is going to be in the last part of your IPython history.
To write your Ipython history into a file:
%history -g -f anyfilename
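For example (the filenames are arbitrary; the first form dumps the global history of all sessions, the second only the most recent lines):
%history -g -f full_history.py
%history -l 200 -f recent_history.py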
On Linux:
I made the same error, and I finally found the deleted file in the trash:
/home/$USER/.local/share/Trash/files
If you deleted it through the OS (rm file.ipynb), then you can probably get it from ~/.ipython_checkpoints/. However, if you deleted it from the browser menu option, it is gone (by design!).
See discussion here: https://github.com/jupyter/notebook/issues/405
If you use PyCharm, you can do the following.
Open the Local History view.
Select the version you want to roll back to.
On the context menu of the selection, choose Revert.
Worked for me!
Source: here
For the unlucky ones like me who deleted some files on JuliaBox (Jupyter for Julia), there is a solution. I successfully recovered all my deleted files.
Browsers store cached information about the pages you visit. You have to find your browser's cache folder (on Ubuntu with Chrome it was ~/.cache/google-chrome/Default/Cache) and grep for some text from your notebook in the binary files. Then cut out the part of the file that corresponds to your .ipynb.
https://groups.google.com/forum/#!searchin/julia-box/delete%7Csort:relevance/julia-box/Rt9LG9RldrU/3s_vVSrivJEJ
If you're using Windows, it sends the file to the Recycle Bin, thankfully. Clearly, it's a good idea to make checkpoints.
As long as your kernel is active, the code of each executed cell is stored in the input history list. This comes in handy when you accidentally delete a cell and want to retrieve its content.
_ih[-10:]  # code of the 10 most recently run cells (even if those cells are deleted now)
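For instance, to pull one deleted cell's source back out (the index is an example):
print(_ih[-3])  # prints that cell's code so it can be pasted into a new cell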
If you are running JupyterLab on Linux like me: what I did was open a terminal and go to my trash folder.
Trash directories on Linux are typically
/home/$USER/.local/share/Trash
or
If you deleted something as root (e.g. deleted a file using Nautilus invoked via gksu), it is at
/root/.local/share/Trash
I ended up changing directories to /home/$USER/.local/share/Trash/files, and my deleted notebook was there! Depending on how you access your backend, you could also try /home/jupyter/.local/share/Trash/.
PS:
If you are having issues changing directories from Trash to files due to permissions, don't forget to become root:
sudo -i
then after sudo -i, go up with:
cd ..
and then
cd home/jupyter/.local/share/Trash
cd files
Best of luck,
Sadly my file was in neither the checkpoints directory nor Chromium's cache. Fortunately, I had an ext4-formatted file system and was able to recover my file using extundelete:
Figure out which drive your deleted file was stored on:
df /your/deleted/file/directory/
Switch to a folder located on another partition that you have write access to:
cd /your/alternate/location/
It is preferred to run extundelete on an unmounted partition. Thus, if your deleted file wasn't stored on the same drive as your operating system, it's recommended you unmount the partition of the deleted file (though you may want to ensure extundelete is already installed before proceeding):
sudo umount /dev/sdax
where sdax is the partition returned by your df command earlier.
Use extundelete to restore your file:
sudo extundelete --restore-file /your/deleted/file/directory/deleted.file /dev/sdax
If successful, your recovered file will be located at:
/your/alternate/location/your/deleted/file/directory/deleted.file
I had this very problem and ended up solving it this way. It might be the case for some of you folks as well.

Zsh tries to correct a command to a file

Zsh has a feature that lets it prompt for corrections to files in the current directory. E.g., if I type cd bar when I mean to type cd baz, then zsh will say: zsh: correct 'bar' to 'baz' [nyae]?
Normally, this works fine. However, sudo seems to mess things up. Specifically, suppose I want to version control my apache2 directory with git. I would type something like sudo git add . This is the correct command to run. However, zsh would prompt me with zsh: correct 'git' to '.git' [nyae]? as if it didn't know that git was a command, so it thought I was trying to refer to the .git folder.
Why is this happening? How can I get it to stop prompting me in those situations?
Thanks!
EDIT: It seems that zsh, by default, considers all arguments to a command to be files or directories. However, I know that there is some functionality to extend this. For instance, if I type git st and press Tab, then zsh will complete it to stash, status, or stripspace (with documentation on each of those). I would, ideally, like zsh to keep providing tips like these even with something like sudo (so I would rather not use nocorrect). How do I customize that functionality in zsh?
Either use nocorrect before the command itself, or define an alias
alias sudo="nocorrect sudo"
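If the real annoyance is zsh trying to correct arguments rather than command names, that behaviour comes from the CORRECT_ALL option; a sketch for ~/.zshrc (the trailing space in the alias lets the word after sudo be alias-expanded too):
unsetopt correct_all   # stop "correcting" arguments to filenames
setopt correct         # keep correcting misspelled command names
alias sudo='nocorrect sudo '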

Exec with PHP-FPM on nginx (under chroot) returns nothing

I've created an nginx server in a chroot at /srv/http with php-fpm. Both services use the http user, and it works fine. The problem comes when I try to run an exec command such as
echo shell_exec('/usr/bin/ls');
There is no output at all on the web page or in the error logs. I've also tried
error_log(shell_exec('/usr/bin/ls'));
and still nothing.
Things I've Tried or Know:
safe mode off
exec enabled
user is http (using phpinfo())
display_errors = on
error_reporting = E_ALL
sudo /usr/bin/chroot --userspec=http:http /srv/http ls works fine
Can create a file and read from it using file_put_contents and fopen/fread
tried shell_exec, exec, system, and passthru - nothing worked
tried appending 2>&1 to the end of the command and nothing
I've copied all the executables and libraries necessary over
all libraries, binaries, and everything under /srv/http/www (where the webpages are) have executable and read permissions
doc_root is www
As far as I know, everything works in the chroot, except shell commands through php-fpm. Anyone have any idea where I went wrong and how to fix it?
This may sound stupid, but you must just copy /bin/sh (not /bin/bash!) into your chroot.
For example see this question: How do I change the shell for php's exec()
If you chroot to some directory, then this directory becomes the root for all your PHP scripts. That means that if you execute /usr/bin/ls from within PHP, it will try to execute /srv/http/usr/bin/ls instead.
You can copy the executable to that directory - but be aware of the security implications. If you copy critical system executables into the chrooted directory you basically bypass the positive effects of chroot.
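Concretely, shell_exec() runs its command through /bin/sh -c, so the chroot needs sh plus the libraries it links against; a minimal sketch (the library paths are examples, check the real ones with ldd):
sudo mkdir -p /srv/http/bin /srv/http/lib64 /srv/http/usr/lib
sudo cp /bin/sh /srv/http/bin/
ldd /bin/sh                                           # lists the libraries sh needs
sudo cp /lib64/ld-linux-x86-64.so.2 /srv/http/lib64/  # example paths taken from ldd output
sudo cp /usr/lib/libc.so.6 /srv/http/usr/lib/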
I get no output for
echo shell_exec('/usr/bin/ls');
either. Presumably because ls isn't a file but a built-in command. Running:
echo shell_exec('ls');
outputs:
css demos favicon.ico images js path.php robots.txt routing.php test
which is the list of files in my root directory for the site.

Unix file permissions, WARNING: can't access

I'm trying to change the permissions of a few files that are used with a webpage I'm uploading to my site. I'm using the Unix command line to do it.
I've tried two commands:
chmod 755 index.html
chmod 644 index.html
But I get the message
chmod: WARNING: can't access index.html
after using these commands, for some reason, and I have no idea why... Initially I thought it might be because I had the file open in a couple of programs (text editor and web browser), but I've closed those down and I'm still getting the same problem. Any idea why, and how I can set the permissions correctly so that the file will be viewable by anyone on the web, but only editable by me?
Cheers!
Here's a link that looks similar to your problem but it's on Solaris:
http://www.unix.com/solaris/45229-unable-chmod-file-directory.html
The solution is on page 2 of that thread, but the Cliff's Notes version is that the person found something else was mounted at that directory. It showed up when they ran
df -k /their_dir_location
Hope this helps.
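For example (the path is illustrative):
df -k /var/www/html      # shows which filesystem the directory actually lives on
mount | grep /var/www    # shows anything mounted over it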
Another possible issue: if you are using Solaris zones, the directory may be visible in more than one zone, but only one zone has write access.

rsync error: failed to set times on "/foo/bar": Operation not permitted

I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will stop it from trying to set modification times on directories.
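For example (host and paths are placeholders):
rsync -av --omit-dir-times /local/src/ user@nfs-server:/export/dest/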
The issue is probably due to /foo/bar not being owned by the writing process on a remote darwin (OS X) system.
A solution to the issue is to set adequate owner on the remote site.
Since this answer has been upvoted, and has therefore hopefully been useful to someone, I'm extending it to make it clearer.
The reason why this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
In order to do this, Darwin's utime() system function requires that the writing process's effective UID either matches the file's UID or is the superuser's; see the Open Group's utime page.
Check this discussion on rsync mailing list as reference.
As @racl101 commented on an answer, this problem might be related to the folder's owner. The rsync command should be run by the same user that owns the folder. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
I had the same problem. For me the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the receiving mount point was incorrectly mounted. It was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options back to defaults, remounted the file system, and executed rsync again. All fine then.
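If you hit the same thing, the check and fix look roughly like this (the mount point is an example):
mount | grep /mnt/backup               # look for the ro flag
sudo mount -o remount,rw /mnt/backup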
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps it was SELinux-related?
I came across this problem as well, and the issue I was having was a permissions issue with the root folder that contained the files I was trying to send over. I don't care about that root folder being included in the rsync; I just care about what's in it. The error was coming from my command, where I needed to specify an additional / at the end. If you do not have that trailing slash, rsync will attempt to set times on the folder itself.
Example:
This will attempt to set times on html
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up if you run the rsync process on files that were not recently modified in the source or destination... because it can't set the time for the recently modified files.
I ran into this error trying to fix timestamps on a new macOS Monterey install, after the Migration Assistant decided to set all of them to the time the copy operation occurred instead of the original files'.
anddam's answer did not help me, as the remote user used in the rsync command did match the owner of the directories and files.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av ". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes will help, but I threw it in too, just for good measure.
