I have a setup I'm pretty happy with for sharing my configuration files between machines, but I find that I often want to search back in zsh (Ctrl + R) and I can't remember which machine I typed the command on. Ideally I'd like this to search a canonical de-duped list of previous commands from any of my machines. Given that I sometimes work on these machines concurrently and without a network connection, what's a sensible way of merging history files and keeping everything in sync?
I wrote this plugin for oh-my-zsh:
An Oh My Zsh plugin for GPG encrypted, Internet synchronized Zsh history using Git
https://github.com/wulfgarpro/history-sync
Hmm, I guess you could identify your accounts somehow... then you could do the following:
modify your .zshrc to concatenate the history of the OTHER accounts (from Dropbox or SCM) with the current history IF this is the first zsh launched on the current computer
then sort the entries with sort -n (numerically, i.e. by the timestamp at the start of each extended-history entry)
I guess zsh would remove the duplicates if you have setopt HIST_SAVE_NO_DUPS
Then you need to trap shell exit, and copy the existing history to the Dropbox/SCM/whatever shared place. A sketch of all this follows.
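Putting those steps together, a rough and untested sketch for .zshrc (the shared-file location, the first-launch marker file, and the exact sort invocation are all assumptions):

setopt EXTENDED_HISTORY HIST_SAVE_NO_DUPS
SHARED_HIST=~/Dropbox/zsh_history_shared

# on the first zsh launched on this machine, merge the shared history in
if [[ -f $SHARED_HIST && ! -f /tmp/.zsh_hist_merged ]]; then
  # EXTENDED_HISTORY entries look like ": <epoch>:<elapsed>;<command>",
  # so a numeric sort on the second :-separated field orders them by timestamp
  cat $SHARED_HIST $HISTFILE | sort -t: -k2 -n | uniq > $HISTFILE.merged
  mv $HISTFILE.merged $HISTFILE
  touch /tmp/.zsh_hist_merged
fi

# zsh calls zshexit() when the shell exits: publish our history to the shared copy
zshexit() {
  cp $HISTFILE $SHARED_HIST
}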
You could have an NFS mount point for the involved machines with the .zsh_history in it. Each machine would need the HISTFILE env var set to that file path in the NFS.
Setting the HISTFILE env var is compulsory, since zsh does not accept a symlink: the symlink would be replaced by a regular file containing the latest version of the symlink target.
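Concretely, each machine's .zshrc would contain something like this (the mount point is an assumption):

export HISTFILE=/mnt/nfs/shared/.zsh_history  # the same NFS path on every machine
setopt SHARE_HISTORY                          # write and re-read entries as they happen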
I have not tested what is said above, since my setup is between VMs using macOS and Parallels shared folders. With NFS, the integrity (ordering?) of the file could be something to consider, as compared with the Dropbox solution.
I'm not sure, but something like https://gist.github.com/elim/f77e3e9f06b6f8a5e788 could be used to fix up the versions to be merged...
I use the command below:
ssh username@example.com cat ~/.zsh/.zhistory | cat ~/.zsh/.zhistory - | sort | uniq | tee ~/.zsh/.zhistory | ssh username@example.com cat \> ~/.zsh/.zhistory
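The same pipeline, split across lines with numbered comments (purely a reformatting of the one-liner above; note that an interactive zsh needs setopt interactive_comments to accept the # comments):

ssh username@example.com cat ~/.zsh/.zhistory |     # 1. fetch the remote history
  cat ~/.zsh/.zhistory - |                          # 2. combine it with the local history (- is the remote stream)
  sort | uniq |                                     # 3. merge and de-duplicate
  tee ~/.zsh/.zhistory |                            # 4. write the result to the local history file...
  ssh username@example.com cat \> ~/.zsh/.zhistory  # 5. ...and stream it back into the remote one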
Working through debugging backup/restore for a backup script on:
Mac Studio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've downloaded Homebrew rsync 3.2.4.
On the Synology, I'm running what it shipped with - rsync 3.1.2.
For debug, I used /Volumes/Recovery which has files with
owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID#$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
user directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
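(For context: rsync's -M forwards the option that follows it to the remote side, so -M--fake-super makes the Synology end store owner/group/mode in extended attributes instead of applying them for real. The same two commands, quoted and commented, as a sketch:)

# backup: local files (real attributes) -> NAS (attributes stashed in xattrs)
rsync -ahX --delete -M--fake-super "$src" "$dest"
# restore: NAS (xattr-stashed attributes) -> local scratch directory
rsync -ahX --delete -M--fake-super "$dest" "$restore"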
It all seems to work without error. Files are restored as expected, except that I'm seeing the restored files have their owner set to my ID.
for example, ls -laR shows (abridged) :
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man (more than once) and I see words like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should have set the options to?
So it looks like I'm not getting any responses. I guess I'll wrap this up with my observations.
In testing I've done on a user directory (with test data files), the rsync is working to save and restore files with extended attributes (I verified they got set and that they matched on restore). So I think the overall switches on the rsync commands are correct.
Backing up and restoring the "Recovery" volume has the following issues:
(1) All regular files have the wrong "owner". The groups look correct.
(2) The one linked directory has the wrong "owner" and the wrong "group".
I believe problem (1) is caused by my needing to use sudo rsync on the restore. I'm guessing that the files that are backed up have the correct owner/group in metadata, but the restore doesn't have the authority to set the owner to 'root'. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to set up the /etc/sudoers file with some information. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
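For what it's worth, a minimal sketch of the sudoers change I have in mind would be something like the following (untested; the username, the Homebrew rsync path, and the use of a sudoers.d drop-in file are all assumptions, and the file should be edited with visudo):

# /etc/sudoers.d/rsync-restore -- let my account run rsync as root without a password
myID ALL=(root) NOPASSWD: /opt/homebrew/bin/rsync

With that in place, the restore would be run as sudo rsync -ahX --delete -M--fake-super $dest $restore, so the receiving side has the authority to set owners back to root.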
Overall, my backup script is working, but I'm now starting to question whether I know enough to know what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and it seems some of this may change over time as new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though there are at least some things that need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed that there is a /System tree that doesn't show up with ls /* that the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to be dependent on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a backup of just my user directories as a secondary backup - which would require reinstalling any 3rd party software in the event that I can't recover with Time Machine.
I received a network traffic capture that is partitioned into several hundred small .pcap files with the following format:
name.pcap#
where # is a number from 1 to 630.
Something like this:
name.pcap1,name.pcap2,name.pcap3,...,name.pcap630
I know that all of them are from one continuous capture that seems to have been partitioned.
I don't have a lot of experience working with Wireshark, and this type of file is new to me. I don't know how to read them as one file.
I was wondering what I can do to reassemble all of them into just one file?
Many thanks in advance,
I was wondering what I can do to reassemble all of them into just one file?
At least with the current version of Wireshark, if you:
start Wireshark without opening a file - just directly start the application;
select all 630 of the files in Windows Explorer/File Explorer (Windows), the Finder (macOS), or whatever file manager you are using in the GUI (other UN*Xes - Linux, *BSD, Solaris, AIX, etc.);
drag them into Wireshark;
Wireshark should read all the files and combine them into a single file, showing you all the packets.
I tested this on macOS; I have not tested it on Windows or, for example, Ubuntu, but I suspect it would work.
Note that you must select all the files and drag them all in one operation; if you try to drag them one at a time, they won't be combined - Wireshark will just close the currently open file and open the one that you're dragging and dropping.
Alternatively, Wireshark includes mergecap, which is a tool that "merges two or more capture files into one".
It is a command-line tool, so you will have to use it on the command line (UN*X - Linux, macOS, *BSD, Solaris, AIX, etc. - or Windows).
The command would be something such as
mergecap -w merged.pcap name.pcap*
(on UN*Xes) or
mergecap.exe -w merged.pcap name.pcap*
if you were to run it while your command-line shell is in the directory (folder) in which the files are stored. This command will put a new file, named merged.pcap, in that directory.
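One wrinkle worth noting: the shell expands name.pcap* in lexical order (name.pcap10 sorts before name.pcap2), but by default mergecap merges packets by their timestamps, so the order of the input files on the command line shouldn't matter for a capture like this. If you ever need the files concatenated in strict file order instead, a sketch like the following should work (-a tells mergecap to append rather than merge; sort -V, a GNU extension, gives the numeric ordering):

mergecap -a -w merged.pcap $(ls name.pcap* | sort -V)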
You will have to make sure that the directory containing mergecap is in your command-line shell's search path, or will have to type the full pathname rather than just mergecap. (The .exe may not be necessary on Windows with some command-line shells, but it doesn't hurt, and may be necessary with other command-line shells.)
On most UN*Xes, mergecap will probably be in /usr/local/bin or /usr/bin, both of which are in the command-line shell search path by default.
On macOS, mergecap will probably be in /Applications/Wireshark.app/Contents/MacOS/; however, if, when you installed Wireshark, you chose to install the Wireshark command-line tools, it will also be in /usr/local/bin, which is, again, in the command-line shell search path by default.
On Windows, mergecap will probably be in C:\Program Files\Wireshark. That is probably not in the command-line shell search path by default. If you don't put it in your command-line shell search path, you will have to run a command such as
"C:\Program Files\Wireshark\mergecap.exe" -w merged.pcap name.pcap*
Adding it to the command-line shell search path is a painful process, so it's probably easier just to use the full path.
You must include the quote characters (because there's a space in "Program Files").
Consider following:
$ cd /home/mydir
$ jupyter notebook --port=8888
In plain English, I am running a Jupyter server from the /home/mydir directory.
Is there a simple way to get this directory from within a notebook, regardless of whether it's an R notebook or a Python notebook or whatever? Maybe there is some magic command or variable?
NOTE: getwd() is not an answer as it returns directory of a current notebook but not the jupyter server root.
I have a similar problem and found your post, although I didn't see a viable solution there at first. Eventually I did find a solution, although it works only because I only care about Linux and I only care about Python. The solution is this bit of magic:
import os
J_ROOT = os.readlink('/proc/%s/cwd' % os.environ['JPY_PARENT_PID'])
(I put it in a module in my PYTHONPATH so that I can easily use it in any Python notebook.) See if it is good for you.
Remember that IPython is just running Python, so you can execute any valid Python code in a cell. So, if you started your notebook and haven't executed any directory changes in your code, you should be able to retrieve your cwd with the following in a cell:
import os
os.getcwd()
But furthermore, you can execute shell commands in cells, so you can retrieve other information in the cell. For example:
!which jupyter
should give you the path to your jupyter executable.
Which then leads you to running something like:
!jupyter --paths
which should give you something similar to:
config:
/Users/yourdir/.jupyter
/usr/local/etc/jupyter
/etc/jupyter
data:
/Users/yourdir/Library/Jupyter
/usr/local/share/jupyter
/usr/share/jupyter
runtime:
/Users/yourdir/Library/Jupyter/runtime
Frankly, I'm surprised that all this time later there is still no built-in way to do this. I have used Isaac To's solution on Linux but recently had to make a Jupyter notebook portable to Windows as well. Simply using os.getcwd() is fragile because a cell that uses it to set your JUPYTER_ROOT_DIRECTORY can potentially be called again after you have changed your working directory.
Here is what I came up with:
try:
    JUPYTER_ROOT_DIRECTORY
except NameError:
    JUPYTER_ROOT_DIRECTORY = os.getcwd()
I put it in one of the first couple cells with the initial import statements. If the cell gets called again, it will not re-set the variable value because the variable exists and does not throw an exception.
It should be noted that, unlike Isaac To's solution, this sets the value to the directory the current .ipynb was run from, which is not necessarily the same directory as the top-level one the user can access in the left-hand file pane.
My suggestion is to use an intuitive approach.
Create a new folder within the Jupyter environment with a very unique name, for example, T246813579.
You can now locate the Jupyter working path by searching in your file explorer. For example, you can use the Windows Explorer in order to locate your new folder.
The expected result should look something like this:
C:\Users\my_user_name\JupyterHome\T246813579
The answer from @Isaac works well for Linux, but not all systems have /proc. For a solution that works on macOS and Linux, we can use shell commands, taking advantage of the ! shell assignment syntax in Jupyter:
import os
JPY_ROOT = ! lsof -a -p {os.environ['JPY_PARENT_PID']} -d cwd -F n | tail -1 | cut -c 2-
JPY_ROOT = JPY_ROOT[0]
print(JPY_ROOT) # prints Jupyter's dir
Explanation:
Get the process ID (pid) of the current jupyter instance with os.environ['JPY_PARENT_PID']
Call lsof to list the process's open files, keeping only the current working dir (cwd)
Parse the output of lsof using tail and cut to keep just the directory name we want
The ! command returns a list, here having only one element
Alternate Version
To save the os import, we could also use shell commands to get the PID, and do the subsequent string wrangling in Python rather than calling tail and cut from the shell:
JPY_ROOT = ! lsof -a -p $(printenv | grep JPY_PARENT_PID | cut -d '=' -f 2) -d cwd -F n
JPY_ROOT = JPY_ROOT[2][1:]
I'm trying to wrap my head around this: on Windows, I use cmder (a wrapper around ConEmu) which improves on the bare cmd.exe experience (a lot) but can also host other shells like PowerShell or Git Bash. I'd like to go more "unix-y" but still well integrated with my Windows tools. Git Bash strikes the right balance for me: I can do things like rm -rf node_modules but still run my Windows commands fine.
It's easy to get Git Bash going inside cmder; however, I'd like to replace the shell with zsh, mainly to get the super-useful "up arrow respects the current prefix" feature (I write git, press the up arrow, and only get suggestions from the recent Git commands).
The question is, who will handle the up arrow? Will it be ConEmu and do Windows-y stuff (cycling through all the commands) or will it fall down to zsh and the cycling will be implemented by it? How does this work?
Related: ConEmu: possible to change the up arrow behavior?
ConEmu's disclaimer states
ConEmu is not a shell, so it does not provide "shell features" like remote access, tab-completion, command history and others.
Only the shell itself knows when user types a command and only the shell may store executed commands history. Of course, only the shell may process Up/Down/Tab keys to "browse" stored history of commands.
cmder is a bundle of tools including clink, which integrates into cmd.exe and processes cmd's prompt internally. So, in cmder, the Up/Down/Tab keys are processed by clink by default.
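To illustrate the zsh side of this: the prefix-respecting up arrow the asker wants is implemented entirely by the shell, via key bindings like the following ~/.zshrc sketch (the escape sequences are common terminal defaults and may differ in your terminal):

# make Up/Down search history for the prefix already typed on the line
autoload -Uz up-line-or-beginning-search down-line-or-beginning-search
zle -N up-line-or-beginning-search
zle -N down-line-or-beginning-search
bindkey '^[[A' up-line-or-beginning-search    # Up arrow
bindkey '^[[B' down-line-or-beginning-search  # Down arrow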
More info is here: http://conemu.github.io/en/TabCompletion.html
I am trying to convert Varying Vagrant Vagrants' wordpress-trunk (or development) site to be provisioned via git instead of svn.
There seems to be a script (I presume it is a script even though it has no file extension) as part of the VVV project that will switch after the machine has been provisioned:
https://github.com/Varying-Vagrant-Vagrants/VVV/blob/master/config/homebin/develop_git
And the author told me that running the following from command line should do it:
vagrant ssh -c "develop_git"
but when I run that I get the following error:
Unknown cipher type 'develop_git'
There appears to be some code in the provision script that mentions git, but I have no idea what I am looking at.
So, does anyone know how to run/implement that script? Or otherwise convert the www/wordpress-trunk folder to git? Are there options somewhere to direct VVV to provision the trunk folder from git in the first place?
Contrary to the Vagrant documentation of vagrant ssh, the -c option is delivered to the ssh command and is therefore interpreted as the cipher specification.
I would suggest trying vagrant ssh -- "develop_git", since everything after "-- (two hyphens) [is] passed directly into the ssh executable".
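In other words (an untested sketch; develop_git comes from the question above):

# -c is consumed by ssh as its cipher option, hence the error:
vagrant ssh -c "develop_git"    # -> Unknown cipher type 'develop_git'

# everything after -- is handed to the ssh executable, which treats the
# trailing word as the command to run on the guest:
vagrant ssh -- "develop_git"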