Is it possible to reset a Jupyter notebook's run information and keep only its content?
Every time I run a notebook, git thinks the file has changed. I don't remember whether I actually changed the content (sometimes I keep a notebook open for days), so I can't just check the file out from git history. And if I commit the notebook to the git server whether or not I made a "real" change, my git log gets very messy.
Some of the execution information is not kept in the .ipynb_checkpoints directory but in the notebook file itself, for example the execution count of each cell.
Another really messy kind of content is the output of the cells.
You could add the following line to your .gitignore to simply make Git ignore the Jupyter Notebook checkpoint files:
.ipynb_checkpoints
If you don't already have a .gitignore file, this is a file that tells Git which files it can safely ignore. You can create it by simply making a file named ".gitignore", then adding the line above to the file.
If you're using Windows, this is a little harder than it should be, since Windows Explorer treats ".gitignore" as a file with no name (only a file extension) and refuses to create it. You can still do it from the command line.
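For example, from a Command Prompt in the repository root (a minimal sketch; note there is no space before the >, so the file doesn't get a trailing space):
echo .ipynb_checkpoints> .gitignore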
Also note that if you have already added any .ipynb_checkpoints files to your Git repo, you need to manually remove them before this will work. The .gitignore file does not work on files that are already tracked.
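For example, run from the repository root (the commit message is just illustrative):
git rm -r --cached .ipynb_checkpoints
git commit -m "Stop tracking Jupyter checkpoint files"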
So I've recently learned about sudoedit and how I can edit a file more safely than with the standard "sudo vim".
The problem is that when I'm in Vim and use :vsplit or :tabnew, the new file is opened as my user account (no root privileges).
sudoedit launches a separate instance of Vim because it has to manage the lifecycle of the editing session, i.e. write the edited temporary file back with root privileges. It cannot achieve that from within a running Vim session.
However, there are plugins that achieve sudoedit-like functionality, for example the aptly named SudoEdit.
Maybe you just want an option to save a file as sudo.
You can find mappings for writing a file as sudo (one is sketched below), or use tpope's eunuch plugin.
You will get :SudoWrite and :SudoEdit commands, and a couple more.
vim-eunuch
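If you'd rather not install a plugin, the classic mapping approach looks something like this (a sketch for your ~/.vimrc; it pipes the buffer through sudo tee, so Vim prompts for your password):
" type :w!! to write the current buffer with root privileges
cmap w!! w !sudo tee % > /dev/null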
I'm writing a script to create virtual hosts in MAMP PRO. I want them to be created and appear in the GUI next to the ones I've created manually through the GUI. I've found the following questions on SO:
Automatic Virtual Hosts with MAMP Pro?
Add MAMP Pro Vhosts with script
Here are my findings, so far:
I've found out that the hosts appearing in the MAMP PRO GUI are stored in ~/Library/Application\ Support/appsolute/MAMP\ PRO/settings3.plist. I've tried editing it, but I can't seem to get the entries right: the command PlistBuddy -c 'print ":virtualHosts"' settings3.plist says Print: Entry, ":virtualHosts", Does Not Exist.
From the second question I've listed above, I found out that I can edit the httpd.conf files (one in the user library and one in the root library) through the GUI.
The hosts file including all of the IP addressing is in /private/etc/hosts
Both questions are dead, even though I commented on the more recent one asking how the author eventually solved his scripting problem.
In the end, I can easily add the values to the hosts file and the vhosts.conf files to make the website work. My only problem is getting it to show up in the list with the other virtual hosts in the MAMP PRO GUI.
Update: After further investigation and experimentation, I worked out the process by which the virtual hosts are created: when I first create a host through the GUI, the settings3.plist file gets updated; when I hit "Save", the hosts and httpd.conf files are updated accordingly. I understand that settings3.plist can be converted to XML with
plutil -convert xml1 -o - settings3.plist > test.txt
and, after editing, converted back to binary with
plutil -convert binary1 -o - test.txt > settings3.plist
My problem with that is that, even though I got the gist of how the CF$UID references work in the XML format, I cannot write a script that understands the concept, checks the position of the values in the list, and then puts the values in accordingly. I even asked a question about it here: https://stackoverflow.com/q/33775025/1934402
The solution provided in "Automatic Virtual Hosts with MAMP Pro?" refers to MAMP PRO version 2.x, where host configuration is saved in settings.plist, an XML-format property list file. In MAMP PRO version 3.x, host settings are stored in settings3.plist, which is a binary-format property list file.
Even in this format you should be able to do:
/usr/libexec/PlistBuddy -c Print settings3.plist
and still get the contents of the file. Your problem arises from the fact that virtualHosts is no longer there, as you will see by running the above command. Its output is not very helpful for understanding the structure of your plist file, so you could run:
plutil -convert xml1 -o - settings3.plist > ~/settings3.plist.xml
and then work out the structure of ~/settings3.plist.xml in order to find out which keys to use in PlistBuddy commands. It is a good idea to check the manual pages for plist and PlistBuddy. Do note that the key names have changed, and the structure is not that clear even in the XML file.
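As a rough sketch of that exploration (the grep pattern is only a guess at how the host entries are named; adjust it to whatever the XML actually contains):
/usr/libexec/PlistBuddy -c Print settings3.plist | head -40
plutil -convert xml1 -o - settings3.plist > ~/settings3.plist.xml
grep -n -i host ~/settings3.plist.xml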
I hope this helped. I will investigate further and update this answer if I come up with a full recipe for editing host details.
I'm trying to change the permissions of a few files that are used with a webpage I'm uploading to my site. I'm using the Unix command line to do it.
I've tried two commands:
chmod 755 index.html
chmod 644 index.html
But I get the message
chmod: WARNING: can't access index.html
after using these commands, and I have no idea why. Initially I thought it might be because I had the file open in a couple of programs (a text editor and a web browser), but I've closed those down and I'm still getting the same problem. Any idea why, and how I can set the permissions correctly so that the file is viewable by anyone on the web but only editable by me?
Cheers!
Here's a link to a problem that looks similar to yours, but on Solaris:
http://www.unix.com/solaris/45229-unable-chmod-file-directory.html
The solution is on page 2 of that thread, but the Cliff's Notes version is that the person found something else was mounted at that directory. It showed up when they ran
df -k /their_dir_location
Hope this helps.
Another possible issue: if you are using Solaris zones, the directory may be visible in more than one zone while only one zone has write access.
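If that's a suspect, a quick check from the global zone might look like this (Solaris-specific; the path is illustrative):
zoneadm list -v
df -k /their_dir_location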
I'm getting a confusing error from rsync and the initial things I'm finding from web searches (as well as all the usual chmod'ing) are not solving it:
rsync: failed to set times on "/foo/bar": Operation not permitted (1)
rsync error: some files could not be transferred (code 23)
at /SourceCache/rsync/rsync-35.2/rsync/main.c(992) [sender=2.6.9]
It seems to be working despite that error, but it would be nice to get rid of that.
If /foo/bar is on NFS (or possibly some FUSE filesystem), that might be the problem.
Either way, adding -O / --omit-dir-times to your command line will avoid it trying to set modification times on directories.
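For example (source, user, and host are illustrative):
rsync -av --omit-dir-times /foo/bar/ user@remotehost:/foo/bar/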
The issue is probably due to /foo/bar not being owned by the user running the writing process on the remote Darwin (OS X) system.
A solution is to set an adequate owner on the remote side.
Since this answer has been upvoted, and has therefore hopefully been useful to someone, I'm extending it to make it clearer.
The reason why this happens is that rsync is probably trying to set an arbitrary modification time (mtime) when copying files.
In order to do this, Darwin's utime() system function requires that the writing process's effective uid be either the same as the file's uid or the superuser's; see the Open Group's utime page.
Check this discussion on the rsync mailing list as a reference.
As @racl101 commented on an answer, this problem might be related to the folder's owner. The rsync command should be run by the same user that owns the folder. If it's not the same, you can change it:
chown -R userCorrect /remote/path/to/foo/bar
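To check the current owner first, something like this should work (user and host are illustrative):
ssh user@remotehost 'ls -ld /remote/path/to/foo/bar'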
I had the same problem. For me the solution was to delete the remote file and let rsync create it again.
The problem in my case was that the "receiver mountpoint" was incorrectly mounted: it was in read-only mode (for some strange reason).
It looked like rsync was copying the files, but it was not.
I checked my fstab file, changed the mount options back to defaults, re-mounted the file system, and ran rsync again. All fine then.
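The checks look something like this (the mount point is illustrative):
mount | grep /mnt/backup          # "ro" among the options means read-only
sudo mount -o remount,rw /mnt/backup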
I've seen that problem when I'm writing to a filesystem which doesn't (properly) handle times -- I think SMB shares or FAT or something.
What is your target filesystem?
This happened to me on a partition of type xfs (rw,relatime,seclabel,attr2,inode64,noquota), where the directories were owned by another user in a group we were both members of. The group membership was already established before login, and the whole directory structure was group-writable. I had manually run sudo chown -R otheruser.group directory and sudo chmod -R g+rw directory to confirm this.
I still have no idea why it didn't work originally, but taking ownership with sudo chown -R myuser.group directory fixed it. Perhaps SELinux-related?
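If SELinux is a suspect, comparing security contexts between a directory that works and one that doesn't might help (a sketch; directory names are illustrative):
ls -ldZ directory                 # shows the SELinux context of the directory
ls -ldZ some_working_directory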
I came across this problem as well, and in my case it was a permissions issue with the root folder that contained the files I was trying to send over. I didn't care about that root folder being included in the rsync, only about what's in it. The error was coming from my command, where I needed to specify an additional / at the end: if you do not have that trailing slash, rsync will attempt to set times on the folder itself.
Example:
This will attempt to set times on html:
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html
This will not:
rsync /var/www/html/ ubuntu@xxx.xxx.xxx.xxx:html/
This error might also pop up if you run the rsync process on files that were modified very recently in the source or destination, because rsync can't set the time on recently modified files.
I ran into this error trying to fix timestamps on a new macOS Monterey machine, after the Migration Assistant decided to set all of them to the time the copy operation occurred instead of keeping the original files' times.
anddam's answer did not help me, as the remote user used in the rsync command did match the owner of the directories and files.
After further research, I realised that I had no access to the Mac's Documents directory over SSH (error ls: Documents: Operation not permitted).
I managed to fix the problem by opening System Preferences on the Mac, selecting Security & Privacy, going to the Privacy tab, selecting Full Disk Access, and checking the box next to sshd-keygen-wrapper.
It could be that you don't have privileges to some of the files. From an administrator account, try "sudo rsync -av". Alternately, enable the root account and sign in as root. That should allow you to completely hose your system and brute-force your rsync! ;-) I'm not sure if the above-mentioned --extended-attributes will help, but I threw it in too, just for good measure.