I have IPython Notebook installed through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in the trash (I don't think IPython notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks!
This is a bit of additional info on the answer by Thuener.
I did the following to recover my deleted .ipynb file (a rough sketch of the commands follows these steps).
The cache is in ~/.cache/chromium/Default/Cache/ (I use Chromium).
Used grep with the -a flag to search the binary cache files as text: grep -a 'import math' (replace the search string with a keyword specific to your code).
Edit the matching binary file in vim (it doesn't open in gedit).
The notebook JSON should start with '{ "cells":' and
end with '"nbformat": 4, "nbformat_minor": 2}'.
Remove everything outside these start and end points.
Rename the file with an .ipynb extension and open it in Jupyter Notebook; it works.
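For reference, a minimal sketch of those commands, assuming a Chromium cache at ~/.cache/chromium/Default/Cache/ and a hypothetical matching cache file named f_00123 (your keyword and file name will differ):
# Find which cache files contain a keyword from your notebook
grep -a -r -l 'import math' ~/.cache/chromium/Default/Cache/
# Copy the matching cache entry somewhere safe before editing it
cp ~/.cache/chromium/Default/Cache/f_00123 ~/recovered_raw.txt
# In vim, delete everything before '{ "cells":' and after '"nbformat_minor": 2}'
vim ~/recovered_raw.txt
# Give it an .ipynb extension and open it in Jupyter
mv ~/recovered_raw.txt ~/recovered.ipynb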
The "delete" functionality now sends the file to OS trash rather than permanently deleting it, see this PR: https://github.com/jupyter/notebook/pull/1968. So you can just open your Trash (wherever that is on your system) and restore it.
I think the easiest way (until the developers address this issue) to retrieve your IPython history is to write it all into an empty file.
You need to check by the date you created your last script; obviously, it is going to be in the last part of your IPython history.
To write your Ipython history into a file:
%history -g -f anyfilename
On linux:
I made the same mistake, and I finally found the deleted file in the trash:
/home/$USER/.local/share/Trash/files
If you deleted it through the OS (rm file.ipynb), then you can probably get it back from the .ipynb_checkpoints/ directory next to the notebook. However, if you deleted it from the browser menu option, it is gone (by design!).
See discussion here: https://github.com/jupyter/notebook/issues/405
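If you're unsure whether a checkpoint copy survived, a quick sketch for finding any checkpoint files under your home directory:
# List checkpoint copies Jupyter has kept alongside notebooks
find ~ -path '*/.ipynb_checkpoints/*.ipynb' 2>/dev/null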
If you use PyCharm, you can do the following.
Open the Local History view.
Select the version you want to roll back to.
On the context menu of the selection, choose Revert.
Worked for me!
Source: here
For the unlucky ones like me that deleted some files on JuliaBox (Jupyter for Julia), there is a solution. I successfully recovered all my deleted files.
Browsers store cache information about the pages you visit. You have to find your browser's cache folder (on Ubuntu with Chrome it was ~/.cache/google-chrome/Default/Cache) and grep for some text from your notebook in the binary files. Then cut out the part of the file that corresponds to your ipynb.
https://groups.google.com/forum/#!searchin/julia-box/delete%7Csort:relevance/julia-box/Rt9LG9RldrU/3s_vVSrivJEJ
If you're using Windows, it sends the file to the Recycle Bin, thankfully. Clearly, it's a good idea to make checkpoints.
As long as your Kernel is active, the code of each executed cell is stored in input history list. This will come in handy when you accidentally deleted a cell and want to retrieve its content.
_ih[-10:]  # code of the 10 most recently run cells (even if those cells have since been deleted)
If you are running JupyterLab on Linux like me: what I did was go to a terminal and change into my trash folder.
Trash directories on linux are typically
/home/$USER/.local/share/Trash
or
If you deleted something as root (e.g. deleted a file using Nautilus invoked via gksu), it is at
/root/.local/share/Trash
I ended up changing directories to /home/$USER/.local/share/Trash/files and my deleted notebook was there! Depending on how you access your backend, you could also try /home/jupyter/.local/share/Trash/
P.S.
If you are having issues changing directories from Trash to files due to permissions, don't forget to become root:
sudo -i
then after sudo -i, go up with:
cd ..
and then
cd home/jupyter/.local/share/Trash
cd files
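Putting those steps together, a minimal sketch (my_notebook.ipynb and the jupyter home directory are placeholders; adjust for your setup):
sudo -i
cd /home/jupyter/.local/share/Trash/files
ls                                     # locate your deleted notebook
cp my_notebook.ipynb /home/jupyter/    # copy it back out of the trash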
Best of luck,
Sadly my file was neither in the checkpoints directory nor in Chromium's cache. Fortunately, I had an ext4-formatted file system and was able to recover my file using extundelete:
Figure out the drive your missing deleted file was stored on:
df /your/deleted/file/directory/
Switch to a folder located on another drive that you have write access to:
cd /your/alternate/location/
It is preferred to run extundelete on an unmounted partition. Thus, if your deleted file wasn't stored on the same drive as your operating system, it's recommended you unmount the partition of the deleted file (though you may want to ensure extundelete is already installed before proceeding):
sudo umount /dev/sdax
where sdax is the partition returned by your df command earlier
Use extundelete to restore your file:
sudo extundelete --restore-file /your/deleted/file/directory/deleted.file /dev/sdax
If successful your recovered file will be located at:
/your/alternate/location/your/deleted/file/directory/deleted.file
I had the very same problem and I ended up solving it this way. It might be the case for some of the folks.
So I set up nginx and uwsgi using this tutorial: http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html
I finished the tutorial completely, but for some reason only my images are not being loaded on the page when I run the command...
uwsgi --ini exchange_uwsgi.ini
where exchange_uwsgi.ini is my initialization file for specifying which socket I run on, where my project is, where my virtualenv is, etc.
Just to reiterate, the only things not showing up are my images, and my images and CSS are all stored in one folder.
Any reason why this might happen?
Thanks
I fixed the problem.
Make sure to check the permissions on all of your static files. Only 2 images of mine were not loading and they were the only ones with incorrect permissions.
On Linux, first go to the folder with all your static files in a terminal and type "ls -l" (list items with the long parameter) so you can view permissions.
I set my permission on each file to -rw-rw-r--
Edit: In order to change permissions look into the command "chmod"
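For example, a quick sketch of how the fix might look (the path is a placeholder; point it at your own static folder):
# Give every file in the static folder the -rw-rw-r-- permissions mentioned above
cd /path/to/your/static
chmod 664 *
ls -l    # verify the permissions afterwards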
I'm new to Autosys and I'm using the WCC front end in order to run Autosys jobs. I also have access to the terminal if the answer proves to only work in the terminal.
I was wondering how to change the directory before running a job. At the moment, Autosys searches its root directory for a batch file, but I want it to point to C:/scripts. Is there any way to define this as the starting directory, or am I stuck leaving script files in the root directory of Autosys?
Thank you for your help.
The full path to the script should go in the "command" attribute, e.g.:
insert_job: my_job job_type: c
command: C\:\Scripts\script1.bat
machine: appserver.domain.com
An alternative solution is to add C:\Scripts to the PATH variable on your app server.
When attempting to run R, I get this error:
Fatal error: cannot mkdir R_TempDir
I found two possible fixes for this problem by googling around. The first was to ensure my tmp directory didn't contain a load of subdirectories - it doesn't, and it's virtually empty. The second fix was to ensure that TMP, TMPDIR, and R_USER in my environment weren't set to non-existent paths - I didn't even have these set. Therefore, I created a tmp directory in my home directory and added its path to TMP in my environment. I was able to run R once and then I got the fatal error again. Nothing was in the TMP directory that I set in my environment. Does anyone know what else I can try? Thanks.
Dirk is right, but misses a point: If /tmp is full, you can't create subdirectories there. Try
df /tmp
I just hit this on a shared server, where /tmp is mounted on its own partition and is shared by many users. In this particular case, you can't really see whose fault it is, because permissions restrict you from seeing who is filling up the tmp partition. You basically have to ask the sysadmins to figure it out.
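If your permissions (and sudo rights) do allow it, a quick sketch for spotting the biggest offenders under /tmp:
# Show disk usage per entry in /tmp, largest last
sudo du -sh /tmp/* 2>/dev/null | sort -h | tail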
Your default temporary directory appears to have the wrong permissions. Here I have
$ ls -ld /tmp
drwxrwxrwt 22 root root 4096 2011-06-10 09:17 /tmp
The key part is 'everybody' can read or write. You need that too. It certainly can contain subdirectories.
Are you running something like AppArmor or SE Linux?
Edit 2011-07-21: As someone just deemed it necessary to downvote this answer -- help(tempfile) is very clear on what values tmpdir (the default directory for temporary files or directories) tries:
By default, 'tmpdir' will be the directory given by 'tempdir()'. This
will be a subdirectory of the temporary directory found by the
following rule. The environment variables 'TMPDIR', 'TMP' and 'TEMP'
are checked in turn and the first found which points to a writable
directory is used: if none succeeds '/tmp' is used.
So my money is on checking those three environment variables. But AppArmor and SELinux have been shown to be an issue too on some distributions.
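A quick sketch for checking both things from a shell on the affected machine:
# See which temp-dir variables are set, if any
env | grep -E '^(TMPDIR|TMP|TEMP)='
# /tmp should be world-writable with the sticky bit (drwxrwxrwt)
ls -ld /tmp
# If the mode is wrong, restore the usual one (needs root)
sudo chmod 1777 /tmp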
Go to your user directory, create a file called .Renviron, add the following line, save it, and reopen RStudio, Rgui, or Rterm:
TMP = '<path to folder where Everyone has full control>'
This worked for me on Windows 7.
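To confirm R is actually picking up the new location, a small sketch you can run from a shell (the same calls also work inside an R console):
# Print the temp-dir variables R sees and the directory it resolves to
Rscript -e 'print(Sys.getenv(c("TMPDIR", "TMP", "TEMP"))); print(tempdir())'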
If you are running one of the rocker docker images (e.g., rocker/verse), you need to map a local directory to the /tmp directory in the container. For example,
docker run --rm -v ${PWD}/tmp:/tmp -p 8787:8787 -e PASSWORD=password rocker/verse:4.0.4
where ${PWD} for me is ~/devProjs/r, and I created a tmp directory inside it, so that the container's /tmp is mapped to my ~/devProjs/r/tmp directory.
Just had this issue and finally solved it. It was simply a Windows permission issue. Go to your environment variables and find the location of the temp folders. Then right-click on the folder > Properties > Security > Advanced > change Everyone to Full control > tick "Replace all child object permission entries with inheritable permission entries from this object" > OK > OK.
This will also happen when your computer is completely, utterly out of space. Currently, my Mac has 0 kb free and it's causing this error. Freeing up some space solved the problem.
Check the user account with which you are launching RStudio. Then check the TMP (system environment variable) for its location. If the user launching RStudio has write access to those directories, you will not face this issue. Given that you are facing this issue, all you have to do is change the permissions so that the user has write access to those directories.
Running R on a CentOS system, I had the same issue. I had to remove all R folders from the /tmp directory. Usually all R folders will be of the form /tmp/Rtmp*****,
so I tried to delete those folders from /tmp by running the below:
cd into the /tmp directory and run rm -rf Rtmp*
The R shell worked for me afterwards.
I had this issue; the solution was slightly different. I run R on a Linux server - it turned out that R had made a whole load of tempdirs when running cron jobs that had hung and not been cleaned up, clogging up the root /tmp directory with ~300 RtmpXXXXXX folders.
Using terminal access, I navigated to the /tmp folder did a recursive find/rm - deleting all of them using this command:
find . -type d -name 'Rtmp*' -exec rm -r -v {} \;
After this, Rstudio took a while to load up, but was once again happy and my scripts began to run again.
You will need the appropriate admin rights for this solution. And always be careful when running rm -r, especially with a find command, as it's easy to remove things unexpectedly.
When it comes to deleting tmp files, make sure you know whether the tmp files are on the server or local.
If they are on the remote server, first run df /tmp there to see what uses the most storage.
Then use rm <file_name> to remove the files which cause the blocking; if they are on the remote, use rm /tmp/<file_name>.
Moreover, you can also refer to https://support.rstudio.com/hc/en-us/articles/218730228-Resetting-a-user-s-state-on-RStudio-Server
I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will avoid it trying to set modification times on directories.
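For example (a sketch; swap in your own source path, user, and host):
# Sync as usual, but skip setting modification times on directories
rsync -av --omit-dir-times /foo/ user@remotehost:/foo/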
The issue is probably due to /foo/bar not being owned by the writing process on a remote darwin (OS X) system.
A solution to the issue is to set adequate owner on the remote site.
Since this answer has been voted, and therefore has been hopefully useful to someone, I'm extending it to make it clearer.
The reason why this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
In order to do this, Darwin's utime() system function requires that the writing process's effective uid be either the same as the file's uid or the superuser's; see the Open Group's utime page.
Check this discussion on rsync mailing list as reference.
As @racl101 commented on an answer, this problem might be related to the folder owner. The rsync command should be run by the same user as the folder's owner. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
I had the same problem. For me the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the "receiver mountpoint" was incorrectly mounted. It was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options to defaults, re-mounted the file system, and executed rsync again. All fine then.
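For reference, a small sketch of how to check and fix that without editing fstab first (the mount point /mnt/backup is just a placeholder):
# Check how the destination is currently mounted (look for "ro")
mount | grep /mnt/backup
# Remount it read-write (or fix /etc/fstab and remount)
sudo mount -o remount,rw /mnt/backup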
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writeable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps SELinux-related?
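If SELinux is indeed the suspect (the seclabel mount option suggests it is enabled), a small sketch for checking and resetting the security contexts:
# Show the SELinux context on the directory
ls -Zd directory
# Reset contexts under it to the system defaults
sudo restorecon -Rv directory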
I came across this problem as well, and the issue I was having was a permissions issue with the root folder that contained the files I was trying to send over. I don't care about that root folder being included with rsync; I just care what's in it. The error was coming from my command, where I needed to specify an additional / at the end. If you do not have that trailing slash, rsync will attempt to set times on the folder itself.
Example:
This will attempt to set times on html
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up if you run the rsync process on files that are not recently modified in the source or destination... because it can't set the time for the recently modified files.
I ran into this error trying to fix timestamps on a new MacOS Monterey, after the Migration Assistant decided to set all of them to the time the copy operation occurred, instead of the original file's.
anddam's answer did not help me, as the remote user used in the rsync command did match the directories and files owner.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, then selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes will help, but I threw it in too, just for good measure.