Moving a buffer in vim across instances of vim - unix

Is this possible to do?
Conceptually, a solution should apply across a lot of possible configurations, ranging from two vim instances running in separate virtual terminals in panes in a tmux window, to being in separate terminals on separate machines in separate geographical regions, one or both connected over network (in other words, the vims are hosted by two separate shell processes, which they would already be under tmux anyhow).
The case that prompted me to ponder this:
I have two tmux panes, both with Vim open, and I want to use Vim's yank/paste to copy content across the files.
But it only works if I've got them both running in the same instance of Vim, so I am forced to either:
use tmux's copy/paste feature to get the content over (which is somewhat tedious and finicky), or
use the terminal's (PuTTY, iTerm2) copy/paste feature to get the content over (similarly tedious, and while it isn't subject to network latency, it only works up to a certain size of text payload, at which point it fails entirely because the terminal doesn't know the contents of the parts of the file that aren't currently visible), or
lose Vim buffer history/context (and possibly shell history/context) by reopening the file manually in one of the Vim instances, in either a split buffer or a tab, and then closing the other terminal context (much less tedious than option 1 for large payloads, but more so for small ones).
This is a bit of a PITA and could all be avoided if I had the foresight to switch to an appropriate terminal already running Vim to open my files, but the destiny of workflow and habit rarely matches up with what would have been convenient.
So the question is: does there exist a command, or the possibility of a straightforwardly-constructed (shell) script, that allows me to join buffers across independently running Vim instances? I'm having a hard time getting Google to answer that adequately.
In the absence of an adequate answer (or if it is determined with reasonable certainty that Vim does not possess the features to accomplish the transfer of buffers across its instances), a good implementation (bindable to keys) for approach 3 above is acceptable.
Meanwhile I'll go back to customizing my vim config further and forcing myself to use as few instances of vim as possible.

No, Vim can't share a session between multiple instances. This is how it's designed and it doesn't provide any session-sharing facility. Registers, on-the-fly mappings/settings, command history, etc. are local to a Vim session and you can't realistically do anything about that.
But your title is a bit misleading: you wrote "buffer" but it looks like you are only after copying/pasting (which involves "registers", not "buffers") from one Vim instance to another. Is that right? If so, why don't you simply get yourself a proper build of Vim with clipboard support?
Copying/yanking across instances is as easy as "+y in one and "+p in another.
Obviously, this won't work if your Vim instances are on different systems. In such a situation, "+y in the source Vim and system-provided paste in the destination Vim (possibly with :set paste) is the most common solution.
If you are on a Mac, install MacVim and move the accompanying mvim shell script somewhere in your path. You can use the MacVim executable in your terminal with mvim -v.
If you are on Linux, install the vim-gnome package from your package manager.
If you are on Windows, install the latest "Vim without Cream".
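When no shared clipboard is available at all (e.g. two Vims on different machines over SSH), the yank can be shuttled through a scratch file instead. A minimal sketch of the plumbing, with the SSH hop stubbed out by a local copy so it stays self-contained; the Vim-side commands appear in comments, and the file paths and remote host are illustrative assumptions:

```shell
# In the source Vim you would write the visual selection to a scratch file:
#   :'<,'>w! /tmp/vim_xfer_src
# and push it to the other machine, e.g.:
#   :'<,'>w !ssh user@remote 'cat > /tmp/vim_xfer_dst'
printf 'line one\nline two\n' > /tmp/vim_xfer_src   # stands in for the yank
cat /tmp/vim_xfer_src > /tmp/vim_xfer_dst           # the ssh hop, stubbed locally

# In the destination Vim you would read it below the cursor:
#   :r /tmp/vim_xfer_dst
cat /tmp/vim_xfer_dst
```

Both halves are bindable to keys, which gets reasonably close to the key-bound implementation of approach 3 that the question asks for as a fallback.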
But the whole thing looks like an XY problem to me. Using Vim's built-in :e[dit] command efficiently is probably the best solution to what appears to be your underlying problem: editing many files from many different shells.

Related

New to Coq: How to compile .vo files and run command line?

I'm new to working with Coq, and I'm progressing through the first volume of the Software Foundations book, but I can't for the life of me figure out how to compile the Basics.v file for the second chapter on induction.
I've seen things floating around about using the coqc command on the command line, but I don't know how to access the terminal, or at least the Windows terminal doesn't recognize the command. If anyone could walk me through this it'd be much appreciated!
In case you are using CoqIDE, there is a menu item "Compile/Compile buffer", which creates a .vo file for the currently loaded .v file. For a few files and early stages of learning, this might be the easiest way.
Then, SF likely comes with a makefile which you can just run with make. You didn't say what OS you are using: on Linux and Mac this should be trivial; on Windows it depends on how you installed Coq. The Windows installer doesn't come with make, but if you used the Coq Platform scripts to set up Coq on Windows, everything is there.
Otherwise it might get a bit complicated - you need to pass the right options to coqc (which CoqIDE and make do automatically for you).
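For the command-line route, a sketch of the usual invocation. Assumptions: you are in the directory containing the Software Foundations volume 1 sources, which expect the current directory mapped to the logical prefix LF (check the _CoqProject file that ships with your copy), and coqc is on your PATH; the guards just skip cleanly where it isn't.

```shell
# Compile Basics.v into Basics.vo from the command line.
# -Q . LF maps the current directory to the logical name LF,
# which is what the SF files expect (an assumption; see _CoqProject).
if command -v coqc >/dev/null 2>&1 && [ -f Basics.v ]; then
  if coqc -Q . LF Basics.v; then status=compiled; else status=failed; fi
else
  status=skipped   # coqc or Basics.v not present in this environment
fi
echo "$status"
```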

What is merit of terminal multiplexer compared to standard terminal app and job control?

I don't know what the merit of a terminal multiplexer like screen or tmux is compared to the combination of a standard terminal application and the job control features of a shell.
Typically good features of a terminal multiplexer are cited as follows:
persistence
multiple windows
session sharing
The first two features, however, can also be achieved with a terminal application like iTerm2 and the job control features of a shell like bash.
Session sharing is a novel feature, but it seems to be required only in quite rare situations.
What is the merit of a terminal multiplexer? Why do you use it?
I'm especially interested in its merits for daily tasks.
I can tell you from my perspective as a devops/developer.
Almost every day I have to deploy a bunch of apps (a particular version) on multiple servers. Handling that without something like Terminator or Tmux would be a real pain.
On a single window I can put something like 4 panes (four windows in one) and monitor stuff on 4 different servers, which by itself is a huge deal, without tabs or other terminal instances and whatnot.
On the first pane I can shut down nginx, on the second I can shut down all the processes with supervisord (a process manager), and on the third pane I can run the deploy process. If I quickly need to jump to some other server I just use the fourth pane.
Colleagues who only use a bunch of terminal instances can get really confused when they have to do a bunch of things quickly, constantly ssh-ing in and out, and if they are not careful they can end up on the wrong server because they switched to the wrong terminal instance and entered a command that wasn't meant for that particular server :)
A terminal multiplexer like tmux really does help me to be quick and accurate.
There is a package manager for tmux, which lets you install plugins and really supercharge your terminal even more!
On a different note, a lot of people are using Tmux in combination with Vim...which lets you create some awesome things together...
All in all, those were my two cents on the benefit of using a terminal multiplexer...
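The pane workflow described above can also be scripted, which is itself part of the appeal. A sketch using real tmux commands (the session name is an arbitrary choice, and the guard skips cleanly on machines without tmux):

```shell
# Build a detached 4-pane session like the deploy layout described above,
# count the panes, then tear the session down again.
if command -v tmux >/dev/null 2>&1; then
  tmux new-session -d -s deploy_demo        # pane 1, detached session
  tmux split-window -t deploy_demo          # pane 2 (vertical split)
  tmux split-window -t deploy_demo -h       # pane 3 (horizontal split)
  tmux split-window -t deploy_demo -h       # pane 4
  tmux select-layout -t deploy_demo tiled   # arrange as a 2x2 grid
  panes=$(tmux list-panes -t deploy_demo | wc -l | tr -d ' ')
  tmux kill-session -t deploy_demo
else
  panes=0   # tmux not available in this environment
fi
echo "$panes"
```

Put something like this in a script per project and the whole "4 servers on one screen" setup is one command away.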

Looking for a pv or dialog equivalent

I am working with Network Shell (nsh; BMC Software), which I believe is based on zsh 4.3.4. I have written a script that connects to a variable list of Solaris machines, runs numerous commands, and then creates some local directories and files based on the output of those commands.
I am looking for a way to display the script's progress, as it can take some time depending on the number of servers. I have been told by others I need to utilize pv or dialog. However, in nsh, when attempting to run these commands I get "command not found". It could be a limitation of nsh as well.
As a simple example, I want to see the progress of the following:
for i in $(cat serverlist.txt)
do
nexec -i $i hostname >> hosts.txt
done
Of course my script is a lot more complex than this, but I cannot seem to get it working correctly, as pv and dialog are not available. Also, I know I should be using read -r to iterate over the file, but that appears not to work correctly either.
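Lacking pv and dialog, a plain printf progress counter works in any POSIX-ish shell, so it has a good chance of surviving nsh. A self-contained sketch: the server list is created inline for the demo, and nexec is stubbed with echo since it only exists inside nsh.

```shell
# Demo input; in real use serverlist.txt already exists.
printf 'web01\nweb02\ndb01\n' > serverlist.txt
: > hosts.txt

total=$(wc -l < serverlist.txt)
total=$((total))   # strip any whitespace wc may emit
n=0
while IFS= read -r host; do
  n=$((n + 1))
  # \r rewrites the same line, giving a simple [done/total] indicator on stderr
  printf '\r[%d/%d] %s   ' "$n" "$total" "$host" >&2
  echo "$host" >> hosts.txt   # stands in for: nexec -i "$host" hostname >> hosts.txt
done < serverlist.txt
printf '\n' >&2
```

The progress goes to stderr so redirecting the script's stdout (e.g. into hosts.txt) doesn't swallow the counter.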

How switch R architectures dynamically in RStudio

In RStudio there's a Tools menu which allows you to select an installed version/architecture of R under Global Options.
That's great, but my issue with that is that, as the name implies, it is a Global option, so once you select a different architecture (or version number) you then have to restart RStudio and it applies to all of your RStudio instances and projects.
This is a problem for me because:
I have some scripts within a given project that strictly require 32-bit R due to the fact that they're interfacing with 32-bit databases, such as Hortonworks' Hadoop
I have other scripts within the same project which strictly require 64-bit R, due to (a) availability of certain packages and (b) memory limits being prohibitively small in 32-bit R on my OS
which we can call "Issue #1". It's also a problem because I have certain projects which require a specific architecture, though all the scripts within the project use the same architecture (which should theoretically be an easier problem to solve, and which we can call "Issue #2").
If we can solve Issue #1 then Issue #2 is solved as well. If we can solve Issue #2 I'll still be better off, even if Issue #1 is unsolved.
I'm basically asking if anyone has a hack, work-around, or better workflow to address this need for frequently switching architectures and/or needing to run different architectures in different R/RStudio sessions simultaneously for different projects on a regular basis.
I know that this functionality would probably represent a feature request for RStudio and if this question is not appropriate for StackOverflow for that reason then let me know and I'll delete it. I just figured that a lot of other people probably have this issue, so maybe someone has found a work-around/hack?
There's no simple way to do this, but there are some workarounds. One you might consider is launching the correct bit-flavor of R from the current bit-flavor of R via system2 invoking Rscript.exe, e.g. (untested code):
source32 <- function(file) {
  system2("C:\\Program Files\\R\\R-3.1.0\\bin\\i386\\Rscript.exe", normalizePath(file))
}
...
# Run a 64 bit script
source("my64.R")
# Run a 32 bit script
source32("my32.R")
Of course that doesn't really give you a 32 bit interactive session so much as the ability to run code as 32 bit.
One other tip: If you hold down CTRL while launching RStudio, you can pick the R flavor and bitness to launch on startup. This will save you some time if you're switching a lot.

set environment variables for system() in R?

I've been using R in Ubuntu to make system calls using system() for things like spinning up Amazon EC2 instances, managing files on S3, etc. If I start R from the command line everything works fine. But if I start R from a script using Rscript, or from ESS, I have issues with environment variables not being set.
I think this is an issue with me not properly grokking where to set environment variables in Ubuntu. I thought the "right place" (for some definition of "right") was to set user environment variables in ~/.bashrc. This is where I set things like export EC2_HOME=/home/jd/ec2, but when I execute R from ESS and make system calls, the .bashrc script is not being run. I've tried Googling about, and I see many an exegesis on environment variables in Ubuntu, such as this one. My knee-jerk reaction is to try each recommendation in the aforementioned thread and stop giving a shit as soon as one of the options works. But then I end up with non-standard settings which bite me in the ass later.
So how should I set environment variables so that they are properly set when I run a system() call in R?
You can try to set them in R itself using Sys.setenv.
I think you are confusing the issue. I fear this may be about login shells versus non-login shells. See the bash manual page for the fine print ... which has driven me bonkers in the past.
That said, if you can set environment variables system-wide, you have a few options:
/etc/environment is a very good place as it is shell-agnostic should you ever use a different shell
for login versus non-login shells, the one way to get complete control that I found suitable was to put my changes into something like ~/.local_bashrc
then add . ~/.local_bashrc from any and all of
~/.bashrc
~/.bash_profile
~/.profile
etc.
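A minimal sketch of that pattern, using a temp path so it doesn't touch real dotfiles (~/.local_bashrc and the EC2_HOME value are just the examples from the question):

```shell
# Stand-in for ~/.local_bashrc: one shared file holding the exports.
cat > /tmp/demo_local_bashrc <<'EOF'
export EC2_HOME=/home/jd/ec2
EOF

# Each of ~/.bashrc, ~/.bash_profile, ~/.profile would contain just:
#   . ~/.local_bashrc
# so every kind of shell (login or not) picks up the same settings.
. /tmp/demo_local_bashrc
echo "$EC2_HOME"
```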
You can precede the sourcing with an echo Hello from FILE, where you replace FILE with the name of the file. That shows you the difference between shells started from login (e.g. via gdm et al), via an ssh connection, via new xterm etc. terminals, and so on.
You can force the shell to re-read your .bashrc file by using the source command:
source ~/.bashrc
There are lots of inelegant and ugly ways to apply this.
