tmux.conf file and path
Here is my tmux.conf file, and despite several attempts it does not seem to be remapping anything. I initially had this in ~/.tmux.conf, where it also did not work (/usr/local/etc/tmux.conf did not exist at that time either, so the home-directory file should have been read), and then I moved the contents into the system-wide tmux.conf to make it available to all users. Any help is greatly appreciated!
After setting tmux to read a conf file:
bind r source-file ~/config/tmux/tmux.sh
I'm trying to change it to a different file, but I keep receiving the same error:
-bash: ~/.config/tmux/tmux.sh: No such file or directory
My first attempt was running bind r source-file again, but with a different target. I read all the manuals, tried to find the persistent confs for tmux, and even tried uninstall --purge. But it keeps pointing to the same file.
I must be doing something utterly stupid...
I want to bind it to something like:
~/.config/tmux/tmux.conf
Side question: where does tmux store the current confs? If I was able to reinstall it and still see the same stuff, some env/files must remain somewhere...
I want to run Zsh without loading any of my .zshrc, Oh-my-zsh, and so on, just as if I had a fresh install with nothing customized. (Like emacs -q.)
Are there any flags for this? Otherwise, can I set up some kind of "profile" for it?
Quoting from the zsh manpages:
Commands are first read from /etc/zshenv; this cannot be overridden. Subsequent behaviour is modified by the RCS and GLOBAL_RCS options; the former affects all startup files, while the second only affects global startup files (those shown here with a path starting with a /). If one of the options is unset at any point, any subsequent startup file(s) of the corresponding type will not be read. It is also possible for a file in $ZDOTDIR to re-enable GLOBAL_RCS. Both RCS and GLOBAL_RCS are set by default.
[1] http://zsh.sourceforge.net/Doc/Release/Files.html
I guess you just want to disable your own config files, so you should unset the RCS option. This can be done by running zsh -o NO_RCS, or equivalently zsh -f / zsh --no-rcs.
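If you want a reusable "clean profile" rather than no config at all, one possible approach is to point ZDOTDIR at an alternate directory, since zsh reads its per-user startup files from $ZDOTDIR instead of $HOME when that variable is set. A sketch (~/zsh-clean is a hypothetical directory holding whatever minimal .zshrc you want):

ZDOTDIR=~/zsh-clean zsh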
I have to copy a big directory to my NAS using rsync. I would like to tell rsync to copy a file only when source and destination differ, to avoid re-copying files that were already copied.
Skipping identical files is the main reason people use rsync; it is rsync's default behavior. Most of the time the only option you want to use is -a:
rsync -a -P <source> <dest>
The -P just means show progress, and -a means "archive": "when copying files, try to make the copy as identical as possible" (keep permissions, ownership, timestamps, etc.), but it also means "only update files if you have to". It's like saying "make sure <dest> is an up-to-date backup of <source>".
However, by default rsync will consider two files identical if they have the same file size and the same last modification date. Of course, two files may have the same size and the same last modification date and still not be identical. So when running that command for the very first time, if you are not sure which files may need updating and which don't, try this:
rsync -a -c -P <source> <dest>
-c means: don't rely just upon size and date; checksum every file and compare the checksums. Only if the checksums are identical are the files considered identical. Note that rsync will not necessarily checksum the whole file in one go: big files are broken into smaller chunks, and every chunk is checksummed separately, so only the chunks that have changed are transferred.
So even with checksumming, rsync can save you a lot of time when copying over a network connection. It won't save you any time when copying locally, because just copying everything is probably faster than checksumming everything. So a plain copy will always beat a checksumming rsync in speed when both source and destination are local drives. In that case use
cp -a -v <source> <dest>
or if your system doesn't know -a, use
cp -pPR -v <source> <dest>
that's identical to -a. Again, the -v is just to see some progress.
And I'd only use -c for the very first sync; after that, relying on file size and last modification date usually works very well for updating, and it is a whole lot faster. It works because if a file has been altered since the last sync, it will have a different last modification date, so by just comparing the dates rsync knows that the file must be updated at the destination. Of course, that only works if your systems all have the correct date/time set, and if you don't manipulate the last modification dates of files or forbid your system to update them.
If you want to skip files solely on presence, use this:
rsync -a -P --ignore-existing <source> <dest>
That's like telling rsync "If you see a file with the same name at the destination, always consider it to be identical and never update it".
Please note that if -a detects that a file in <source> is different from a file in <dest>, whether this is determined by size and modification date or by checksumming, it will always update the file at <dest> to match the file at <source>. If multiple sources are syncing to the same destination, you might also want to add -u, which means "in case two files are different, only update if the file at <source> has a newer last modification date than the file at <dest>".
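For example, combining the options discussed above (same <source>/<dest> placeholders):

rsync -a -u -P <source> <dest>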
Just as a general tip, if you type
man <command>
in a terminal, you will get a nice help page on most systems (Linux, macOS, and other UNIX systems) explaining all the options in detail. You can scroll up/down using the arrow keys or Page Up/Down, and you can leave that view by hitting "q" for quit. E.g.
man rsync
I have IPython Notebook through Anaconda. I accidentally deleted an important notebook, and I can't seem to find it in the trash (I don't think IPython notebooks go to the trash).
Does anyone know how I can recover the notebook? I am using Mac OS X.
Thanks!
This is a bit of additional info on the answer by Thuener.
I did the following to recover my deleted .ipynb file.
The cache is in ~/.cache/chromium/Default/Cache/ (I use Chromium).
Use grep in binary mode: grep -a 'import math' (replace the search string with a keyword specific to your code; see the sketch after these steps).
Edit the binary file in vim (it doesn't open in gedit).
The ipynb file should start with '{ "cells":' and end with '"nbformat": 4, "nbformat_minor": 2}'.
Remove everything outside these start and end points.
Rename the file with an .ipynb extension and open it in Jupyter Notebook; it works.
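A sketch of the search step (GNU grep: -a treats binary files as text, -r recurses, -l lists only the names of matching cache files):

grep -ral 'import math' ~/.cache/chromium/Default/Cache/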
The "delete" functionality now sends the file to OS trash rather than permanently deleting it, see this PR: https://github.com/jupyter/notebook/pull/1968. So you can just open your Trash (wherever that is on your system) and restore it.
I think the easiest way (until the developers handle this issue) to retrieve your IPython history is to write it all into a file.
You need to search by the date you created your last script; obviously, it is going to be in the last part of your IPython history.
To write your IPython history into a file:
%history -g -f anyfilename
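You can then search that dump from a shell. For example (anyfilename as above; 'def my_func' is a stand-in for any string you remember from the lost notebook):

grep -n 'def my_func' anyfilename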
On Linux:
I made the same mistake, and I finally found the deleted file in the trash:
/home/$USER/.local/share/Trash/files
If you deleted it through the OS (rm file.ipynb), then you can probably get it from the .ipynb_checkpoints/ directory next to the notebook. However, if you deleted it from the browser menu option, it is gone (by design!).
See discussion here: https://github.com/jupyter/notebook/issues/405
If you use PyCharm, you can do the following.
Open the Local History view.
Select the version you want to roll back to.
On the context menu of the selection, choose Revert.
Worked for me!
For the unlucky ones like me who deleted some files on JuliaBox (Jupyter for Julia), there is a solution. I successfully recovered all my deleted files.
Browsers store cache information about the pages you visit. You have to find your browser's cache folder (on Ubuntu with Chrome it was ~/.cache/google-chrome/Default/Cache) and grep for some text from your notebook in the binary files. Then cut out the part of the file that corresponds to your ipynb.
https://groups.google.com/forum/#!searchin/julia-box/delete%7Csort:relevance/julia-box/Rt9LG9RldrU/3s_vVSrivJEJ
If you're using Windows, it sends it to the Recycle Bin, thankfully. Clearly, it's a good idea to make checkpoints.
As long as your kernel is active, the code of each executed cell is stored in the input history list. This comes in handy when you have accidentally deleted a cell and want to retrieve its content.
_ih[-10:]  # code of the 10 most recently run cells (even if those cells have since been deleted)
If you are running JupyterLab on Linux like me: what I did was open a command prompt and go to my trash folder.
Trash directories on Linux are typically
/home/$USER/.local/share/Trash
or
If you deleted something as root (e.g. deleted a file using Nautilus invoked via gksu), it is at
/root/.local/share/Trash
I ended up changing directories to /home/$USER/.local/share/Trash/files, and my deleted notebook was there! Depending on how you access your backend, you could also try /home/jupyter/.local/share/Trash/.
PS: if you are having issues changing directories from Trash to files due to permissions, don't forget to become root:
sudo -i
and then, since sudo -i leaves you in /root, change into the trash directory:
cd /home/jupyter/.local/share/Trash/files
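Once you have found your notebook there, copy it back out. A sketch (mynotebook.ipynb is a hypothetical name; adjust the destination to wherever your notebooks live):

cp mynotebook.ipynb /home/jupyter/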
Best of luck,
Sadly my file was neither in the checkpoints directory nor in Chromium's cache. Fortunately, I had an ext4-formatted file system and was able to recover my file using extundelete:
Figure out the drive your deleted file was stored on:
df /your/deleted/file/directory/
Switch to a folder located on another drive, one you have write access to:
cd /your/alternate/location/
It is preferable to run extundelete on an unmounted partition. Thus, if your deleted file wasn't stored on the same drive as your operating system, it's recommended you unmount the partition of the deleted file (though you may want to ensure extundelete is already installed before proceeding):
sudo umount /dev/sdax
where sdax is the partition returned by your df command earlier.
Use extundelete to restore your file:
sudo extundelete --restore-file /your/deleted/file/directory/deleted.file /dev/sdax
If successful, your recovered file will be located at:
/your/alternate/location/your/deleted/file/directory/deleted.file
I used this command on my folder in Unix:
chmod -R go-rwx *
in order to change permissions for group and others.
Doing this, many files turned green (in ls output), even simple data files.
Why did this happen? What does it mean?
Is it going to affect my files in a bad way?
They seem to work right now, but I'm concerned about their general functionality.
Thanks!
It is very unlikely that the command you mentioned would cause ls to print your files in green. When ls colors are enabled, executable files are printed in light green by default. Since chmod -R go-rwx only removes permissions, it cannot have caused any files to be marked as executable, and hence won't have made ls print them in green.
Instead, I believe the cause is a different command you must have entered, which accidentally marked all those files as executable. This is actually pretty common. Here is the typical scenario: you want to make a directory and all subdirectories readable and enterable for all users, so you do chmod -R a+rx top_directory. This works, but as a side effect you have also set the executable flag on all the regular files in those directories. This will make ls print them in green if colors are enabled, and it has happened to me several times. You can avoid this by doing chmod -R a+rX top_directory instead; the capital X sets the executable bit only on directories (and on files that already had an executable bit set).
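A quick way to see the difference on a throwaway tree (demo is a hypothetical directory created just for this test):

mkdir -p demo/sub && touch demo/file.txt
chmod -R a+rX demo   # demo and demo/sub get execute; file.txt does not
ls -l demo           # file.txt has no x bit and is not shown in green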
To make your files stop being green, you must clear those executable bits. If none of the files in these directories are actually supposed to be executable, this is simple:
$ chmod -R a-x top_directory
$ chmod -R u+X top_directory
This will remove the executable flag from all files and directories, and then add it back for directories only (for the current user). But if some of the files are actually supposed to be executable, you will have to go through them and fix things manually, which can be tedious.
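If the legitimately executable files can be identified by name, find can do most of that manual work. A sketch (assuming, hypothetically, that they all end in .sh):

find top_directory -type f -name '*.sh' -exec chmod u+x {} +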
Having some files incorrectly marked as being executable is not a big problem. They will still function normally. It's just a bit messy, and they may show up in command tab completion if the current directory (.) is in your $PATH. So you can safely ignore this issue.
That is an ls feature:
--color[=WHEN]
colorize the output. WHEN defaults to 'always' or can be 'never' or 'auto'. More info below
Using color to distinguish file types is disabled both by default and with --color=never. With --color=auto, ls emits color codes only when standard output is connected to a terminal. The LS_COLORS environment variable can change the settings. Use the dircolors command to set it.
You can try with ls --color=never and you won't see the colors anymore.
You can see your color configuration with dircolors -p.
This is the line where the configuration for executable files resides:
# This is for files with execute permission:
EXEC 01;32
That's just to help you identify file types, so it's not affecting your files in any bad way.
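If you'd rather not have executables highlighted at all, you can override just that entry. A sketch for bash/zsh (ex is the LS_COLORS key that corresponds to EXEC; 0 means "no color", and this assumes LS_COLORS is already set in your environment):

export LS_COLORS="$LS_COLORS:ex=0"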