I am working with Network Shell (NSH, from BMC Software), which I believe is based on zsh 4.3.4. I have written a script that connects to a variable list of Solaris machines, runs numerous commands, and then creates some local directories and files based on the output of those commands.
I am looking for a way to display the script's progress, since it can take some time depending on the number of servers. I have been told I should use pv or dialog, but when I try to run either of them in NSH I get "command not found." This may simply be a limitation of NSH.
As a simple example, I want to see the progress of the following:
for i in $(cat serverlist.txt)
do
nexec -i $i hostname >> hosts.txt
done
Of course my actual script is a lot more complex than this, but I cannot get a progress display working because pv and dialog are not available. I also know I should be reading the file with a while read -r loop instead of cat, but that does not appear to work correctly either.
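In the absence of pv and dialog, a plain-shell counter may be enough. Below is a minimal sketch, assuming serverlist.txt and nexec behave as in the example above; whether printf and the carriage return render the same way under NSH is something you would have to verify.
total=$(wc -l < serverlist.txt)          # how many servers we expect to process
count=0
for i in $(cat serverlist.txt)
do
    count=$((count + 1))
    printf '\r[%d/%d] %s' "$count" "$total" "$i"   # overwrite the same line with progress
    nexec -i $i hostname >> hosts.txt
done
printf '\rProcessed %d of %d servers.\n' "$count" "$total"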
I have been using the default bash shell on Linux for over a year, and a colleague of mine recommended that I switch to iTerm2 along with zsh and oh-my-zsh. He also recommended this post to install and configure them:
https://www.freecodecamp.org/news/how-to-configure-your-macos-terminal-with-zsh-like-a-pro-c0ab3f3c1156/
When I asked what they are, the answer he gave me was kind of confusing, so I ask you kind overlords to tell me what these actually are. And if you have some insight into what bash actually is, I would be happy to learn that too :)
Thank you all
Pawan
When you are at a command line, typing in commands and reading output, you are working in a program called a terminal (or console on Windows). The terminal takes your commands and forwards them to a program called a shell, whose job is to actually execute the commands you type into the terminal and possibly print some output. The output from the shell is then displayed in your terminal window.
The terminal is like a web browser and the shell is like the JavaScript engine: the browser takes your input (clicks, keypresses, mouse moves) and sends it along to JavaScript, which processes those actions, and the browser then displays the results.
iTerm2 is a terminal emulator meant as a replacement for the macOS Terminal, and it is far more feature-rich. It is the terminal program providing you with a command-line interface.
ZSH is a specific shell, as is bash, the same way Linux is a specific operating system. Different shells provide different syntax, features and functionality: there's bash, csh, fish, PowerShell, zsh and others.
By installing ZSH, you are essentially downloading a new program and telling your terminal to use that program (say, instead of bash) to process the commands you type and to run your scripts.
oh-my-zsh provides a way of managing your zsh configurations, themes and plugins to extend the look and functionality of your shell.
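As a rough illustration of what that installation amounts to, here is a sketch assuming macOS with Homebrew, as in the linked article; the exact package names and the oh-my-zsh install one-liner may differ from its current README, so treat these as examples rather than canonical commands.
brew install zsh                  # a newer zsh than the one macOS already ships
chsh -s /bin/zsh                  # make zsh your login shell instead of bash (use the Homebrew path if you prefer, after adding it to /etc/shells)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"   # install oh-my-zsh; check its README for the current command
After that, your prompt, themes and plugins are configured in ~/.zshrc, which is the file oh-my-zsh manages for you.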
I can't recommend this setup enough; it's the Cadillac of command lines. You have a good friend there.
Is there any way to play Inform 7 from the command line? I'd like to write an automated test script that plays the game with certain commands, and I don't want to do it manually. Is there any way to do that?
This is easiest to do with the CLI Linux package of Inform 7. It contains a perl CLI script you can run, but you may also like to consider this alternative script I wrote: https://github.com/i7/kerkerkruip/blob/master/tools/build-i7-project
You can invoke this with
build-i7-project -p "Project Folder"
(Leave off the .inform.)
You can also run the binaries which are installed with the IDE packages by themselves instead of installing the CLI Linux package. The command line options are probably mostly the same in other operating systems, but you may need to change them slightly. If you can't get it to work, compare with what the Inform 7 IDE says when you build with it.
If what you really want to do is periodically run some test scripts that verify that your work is still performing as expected, then Inform 7 has the capability to do that from within the IDE. Take a look at chapter 24.2 of Writing with Inform for details. In combination with good use of the Skein, this should handle the more common unit-testing requirements.
Of course, if you're doing something more outré, running bash scripts from the command line may wind up being the way to go. Still, don't do any more work than you have to. :)
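If you do go the script route, here is one hedged sketch of an automated run. It assumes the compiled story ends up in Z-machine format and that a console interpreter such as dfrotz is installed; commands.txt is a hypothetical file holding the commands to play, the story path is a placeholder for wherever your build writes the file, and a Glulx story would need a Glulx interpreter instead.
build-i7-project -p "Project Folder"                            # compile the project (leave off the .inform)
dfrotz path/to/compiled/story.z8 < commands.txt > transcript.txt   # feed scripted commands, capture the transcript
grep -q 'You have won' transcript.txt && echo PASS || echo FAIL    # crude check; replace the pattern with whatever your game prints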
Using Automator on OSX
I am passing selected files/folders to rsync using a service. It is working and the copies are successful. However, I want two more things to happen, and I have one question about how to reference the "current" user's desktop in the destination.
I want it to open Terminal and visibly show the copy's progress using the --progress option that rsync offers. A GUI window would be nice, but I want to keep it simple for now.
I would like a simple "completed" message to be displayed somehow, either in the terminal window or as a GUI message.
Where First.Last is used in the destination path, how can this be defined to simply use the current user's desktop?
Below is what is working now, but without the three things mentioned above.
for f in "$#"; do
/usr/bin/rsync --verbose --progress --times "$f" /Users/First.Last/Desktop/copy
echo "$f COMPLETED"
done
Thanks for any and all suggestions.
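One hedged sketch of how the last two points might be handled, assuming the Automator action passes the selections as arguments as in the snippet above: $HOME resolves to the current user's home directory (so no hard-coded First.Last), and osascript's display notification is available on recent macOS versions. Getting the copy to run visibly in a Terminal window is a separate step this sketch does not attempt.
dest="$HOME/Desktop/copy"                       # current user's desktop, whoever is logged in
mkdir -p "$dest"                                # assumption: the target folder may not exist yet
for f in "$@"; do
    /usr/bin/rsync --verbose --progress --times "$f" "$dest"
    echo "$f COMPLETED"
done
osascript -e 'display notification "rsync copy completed" with title "Automator"'   # simple GUI "completed" message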
Is this possible to do?
Conceptually, a solution should apply across a lot of possible configurations, ranging from two Vim instances running in separate virtual terminals in panes of a tmux window, to instances in separate terminals on separate machines in separate geographical regions, one or both connected over a network (in other words, the Vims are hosted by two separate shell processes, which they would already be under tmux anyhow).
The case that prompted me to ponder this:
I have two tmux panes, both with Vim open, and I want to use Vim yank/paste to copy text across the files.
But it only works if I've got them both running in the same instance of Vim, so I am forced to either:
use tmux's copy/paste feature to get the content over (which is somewhat tedious and finicky), or
use the terminal's (PuTTY, iTerm2) copy/paste feature to get the content over (similarly tedious, though not subject to network latency; however, it only works up to a certain size of text payload, beyond which it fails entirely because the terminal does not know the contents of the parts of the file that are not currently visible), or
lose Vim buffer history/context, and possibly shell history/context, by reopening the file manually in one of the Vim instances (in either a split or a tab) and then closing the other terminal context (much less tedious than option 1 for large payloads, but more so for small ones).
This is a bit of a PITA and could all be avoided if I had the foresight to switch to an appropriate terminal already running Vim before opening my files, but workflow and habit rarely line up with what would have been convenient.
So the question is: does there exist a command, or the possibility of a straightforwardly constructed (shell) script, that allows me to join buffers across independently running Vim instances? I am having a hard time getting Google to answer that adequately.
In the absence of an adequate answer (or if it is determined with reasonable certainty that Vim does not possess the features to accomplish the transfer of buffers across its instances), a good implementation (bindable to keys) for approach 3 above is acceptable.
Meanwhile I'll go back to customizing my vim config further and forcing myself to use as few instances of vim as possible.
No, Vim can't share a session between multiple instances. This is how it's designed and it doesn't provide any session-sharing facility. Registers, on-the-fly mappings/settings, command history, etc. are local to a Vim session and you can't realistically do anything about that.
But your title is a bit misleading: you wrote "buffer", but it looks like you are only after copying/pasting (which involves registers, not buffers) from one Vim instance to another. Is that right? If so, why don't you simply get yourself a proper build with clipboard support?
Copying/yanking across instances is as easy as "+y in one and "+p in another.
Obviously, this won't work if your Vim instances are on different systems. In such a situation, "+y in the source Vim and system-provided paste in the destination Vim (possibly with :set paste) is the most common solution.
If you are on a Mac, install MacVim and move the accompanying mvim shell script somewhere in your path. You can use the MacVim executable in your terminal with mvim -v.
If you are on Linux, install the vim-gnome package from your package manager.
If you are on Windows, install the latest "Vim without Cream".
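Whichever route you take, you can confirm that the resulting build actually has clipboard support before relying on "+y; from the shell:
vim --version | grep -o '[+-]clipboard'    # +clipboard means the "+ register reaches the system clipboard
Inside Vim, :echo has('clipboard') gives the same answer (1 if supported).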
But the whole thing looks like an XY problem to me. Using Vim's built-in :e[dit] command efficiently is probably the best solution to what appears to be your underlying problem: editing many files from many different shells.
I've been using R in Ubuntu to make system calls using system() for things like spinning up Amazon EC2 instances, managing files on S3, etc. If I start R from the command line everything works fine. But if I start R from a script using Rscript, or from ESS, I have issues with environment variables not being set.
I think this is an issue with me not properly grokking where to set environment variables in Ubuntu. I thought the "right place" (for some definition of "right") was to set user environment variables in ~/.bashrc. This is where I set things like export EC2_HOME=/home/jd/ec2, but when I execute R from ESS and make system calls, the .bashrc script is not being run. I've tried Googling around and I see many an exegesis on environment variables in Ubuntu, such as this one. My knee-jerk reaction is to try each recommendation in the aforementioned thread and stop giving a shit as soon as one of the options works. But then I end up with non-standard settings which bite me in the ass later.
So how should I set environment variables so that they are properly set when I run a system() call in R?
You can try to set them in R itself using Sys.setenv.
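For instance, a throwaway check run from the shell (the EC2_HOME value is the one from the question): Sys.setenv() changes the environment of the running R process, which is then inherited by anything launched via system().
Rscript -e 'Sys.setenv(EC2_HOME = "/home/jd/ec2"); system("echo EC2_HOME is $EC2_HOME")'
The drawback is that the value now lives in your R code rather than in one central shell configuration file, which is what the options below avoid.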
I think you are confusing the issue. I fear this may be about login shells versus non-login shells. See the bash manual page for the fine print ... which has driven me bonkers in the past.
That said, if you can set environment variables system-wide, you have a few options:
/etc/environment is a very good place as it is shell-agnostic should you ever use a different shell
for login versus non-login shells, the one way to get complete control that I found suitable was to put my changes into something like ~/.local_bashrc
then add . ~/.local_bashrc from any and all of
~/.bashrc
~/.bash_profile
~/.profile
etc pp.
You can precede the sourcing with an echo Hello from FILE, where you replace FILE with the name of the file. That shows you the difference between shells started from a login (e.g. via gdm et al.), via an ssh connection, via a new xterm or other terminal, and so on.
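A minimal sketch of that arrangement, using the EC2_HOME value from the question (any other exports would go in the same place):
# ~/.local_bashrc -- the single place the exports live
export EC2_HOME=/home/jd/ec2
# near the top of ~/.bashrc, ~/.bash_profile and ~/.profile:
echo "Hello from ~/.bashrc"        # optional marker; use the matching file name in each file
. ~/.local_bashrc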
You can force the system to read your .bashrc file by using the source command
source ~/.bashrc
There are lots of inelegant and ugly ways to apply this, though.