Search and replace in multiple files using vim - unix

Is it possible to apply the same search and replace in multiple files in vim? I'll give an example below.
I have multiple .txt files, sad1.txt through sad5.txt, which I open with vim sad*. Inside all 5 txt files there is the same word, happy999, which I would like to change to happy111. I am currently using this command:
argdo %s/happy999/happy111/gc | wq!
In the end only sad1.txt is changed. What should I do to run the same substitution across all 5 txt files?

Use:
:set aw
:argdo %s/happy999/happy111/g
The first line sets auto-write mode, so when you switch between files, vim will write the file if it has changed.
The second line does your global search and replace.
Note that it doesn't use wq!, since that exits vim (which is why your version stopped after the first file). If you don't want to use auto-write, then you could use:
:argdo %s/happy999/happy111/g | w
This avoids terminating vim at the end of editing the first file.
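Put together, a full session might look like this (autowrite is the long form of aw, and :qa quits all the buffers once you are done):
vim sad*.txt
:set autowrite
:argdo %s/happy999/happy111/g
:qa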
Also consider looking on the Vi and Vim Stack Exchange for answers to questions about vi and vim.

That is a task for sed -i (-i for "in place"; note that -i without a backup suffix is a GNU sed extension; a sketch follows at the end of this answer). Yet, if you really want to use vim, or you need the /c flag to confirm each replacement, you can do it in two ways:
With some help from the shell:
for i in sad*.txt; do
vim -c ':%s/happy999/happy111/gc' -c ':wq' "$i"
done
(the /c will still work, and vim will ask for each confirmation)
Or with pure VIM
vim -c ':%s/happy999/happy111/gc' -c ':w' -c ':n' \
-c ':%s/happy999/happy111/gc' -c ':w' -c ':n' \
-c ':%s/happy999/happy111/gc' -c ':w' -c ':n' \
-c ':%s/happy999/happy111/gc' -c ':w' -c ':n' \
-c ':%s/happy999/happy111/gc' -c ':wq' sad*.txt
(In my humble opinion this last one looks horrible and repetitive and has no real advantages over the shell for loop, but it shows that pure vim can do it)
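For completeness, the sed -i route mentioned at the top would look roughly like this (GNU sed assumed; no confirmation prompt is possible this way):
sed -i 's/happy999/happy111/g' sad*.txt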

No doubt, argdo is great, but typing that much boilerplate becomes quite annoying over time.
Give far.vim a try. It provides the kind of interactive multi-file search-and-replace that many IDEs have.

If you don't need/want to be prompted for confirmation on each search and replace, use the following command, after opening your files with vim sad*:
:argdo %s/happy999/happy111/g | update
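If some of the files might not contain the pattern, adding the e flag keeps the command above from stopping on "pattern not found" errors:
:argdo %s/happy999/happy111/ge | update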
You can find more info by looking at the documentation for argdo in vim (:h argdo) or here:
http://vim.wikia.com/wiki/Search_and_replace_in_multiple_buffers

Related

What's the difference between -a and -e in a zsh conditional expression?

I was looking up the meaning of flags like -a in zsh if statements, e.g.
if [[ -a file.txt ]]; then
# do something
fi
and I found this
-a file
true if file exists.
-e file
true if file exists.
What is the difference between -a and -e? And if there is none, why do they both exist?
POSIX sheds some light on this.
tl;dr: Ksh traditionally used -a and several other shells followed suit. POSIX instead borrowed -e from Csh to avoid confusion. Now many shells support both.
The -e primary, possessing similar functionality to that provided by the C shell, was added because it provides the only way for a shell script to find out if a file exists without trying to open the file. Since implementations are allowed to add additional file types, a portable script cannot use:
test -b foo -o -c foo -o -d foo -o -f foo -o -p foo
to find out if foo is an existing file. On historical BSD systems, the existence of a file could be determined by:
test -f foo -o -d foo
but there was no easy way to determine that an existing file was a regular file. An early proposal used the KornShell -a primary (with the same meaning), but this was changed to -e because there were concerns about the high probability of humans confusing the -a primary with the -a binary operator.
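To illustrate the confusion described in that last sentence, -a is also the (now obsolescent) binary AND operator in test expressions, so the same letter can mean two different things depending on position. A small sketch (file.txt is just a placeholder name):
[ -a file.txt ] && echo "exists"                                 # unary -a: file exists
[ -e file.txt ] && echo "exists"                                 # -e: same test, unambiguous
[ -f file.txt -a -r file.txt ] && echo "regular and readable"   # binary -a: logical AND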

tmux run fish shell functions returns error 127

In my .tmux.conf I have those lines:
set -g default-terminal "xterm-256color"
set-option -g default-shell /usr/bin/fish
bind -n M-I run "fish_prompt"
But when pressing M-I I get error 127 as the response. Ordinary commands like echo 123 work fine; only fish functions are not found.
default-shell sets the shell to use in a new pane; it doesn't affect the shell used by the run-shell command, which remains /bin/sh.
As explained above, run-shell always uses /bin/sh (as defined by _PATH_BSHELL in tmux's source).
To run a fish shell, you can use run "fish -c fish_prompt", but that mucks up the escape characters and produces the output in a not-terribly-helpful way.
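In the context of the .tmux.conf binding above, that would look roughly like this (with the caveat about mangled output still applying):
bind -n M-I run-shell 'fish -c fish_prompt'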
What output do you want to see - are you using fish_prompt or some other function?

Getting command running in a tmux pane

Is there a way to show the command which is currently running at a tmux pane?
I tried 'history', but it does not seem to show the commands which I had executed in tmux.
I also tried 'ps -ef', but it does not show the full command in the case like "./a.sh ; ./b.sh"
I found several answers online that include ps ... | tail -1. Unfortunately, these don't always work because sometimes the order of the commands is swapped, e.g. for two separate panes I get:
$ ps -t /dev/pts/12 -o args=
-bash
mpv some_movie.mp4
$ ps -t /dev/pts/10 -o args=
micro some_file.txt
-bash
I really wanted a single line of output so that I could show it in the status bar, but what I ultimately ended up going with is ps --forest via run-shell. It seems to always reliably show the correct order and with more information should there be nested commands running (e.g. via a bash script). Its output looks like:
$ ps --forest -o args -g $$
COMMAND
-bash
\_ ps --forest -o args -g 1695
Solution
So in my .tmux.conf, I've got:
bind '`' run-shell 'ps --forest -o pid,args -g #{pane_pid}'
It will replace the contents of your pane with the output from the ps --forest command. Once you type esc or ^C, the ps output disappears, and your pane goes back to whatever it was doing :) Ends up looking like:
(Screenshot: running script.sh, which calls other-script.sh, which sleeps for 30s)
(Screenshot: viewing the pane process tree via the keybinding)
(Old question but for future reference)
Try: tmux list-panes -t <your_pane_name> -F '#{pane_current_command}'
https://man7.org/linux/man-pages/man1/tmux.1.html#FORMATS
Try setting pane-border-status to bottom or top in your configuration file, with the tmux command prompt or just running tmux set pane-border-status bottom. Borders should appear around the panes and info about the current process appears much like in a regular terminal window's title bar.
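As a sketch of that in .tmux.conf, together with an explicit border format that shows the current command (the pane-border-format line is just an illustration; the default format already includes the command):
set -g pane-border-status bottom
set -g pane-border-format '#{pane_index} #{pane_current_command}'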
I suspect the command wasn't written to the history file as the shell with the stuck/long-running job wasn't done yet.
You might try pstree -U to see processes in their parent/child tree.

Clozure CL compiled executable losing certain command line arguments

I'm writing a utility program in Common Lisp and building it with Clozure CL; I would like to be able to use the command-line option -d with the program, but for some reason this particular option won't make it through to (ccl::command-line-arguments). Here is a minimal example:
(defun main ()
(format t "~s~%" (ccl::command-line-arguments))
(quit))
I compiled with
(save-application "opts"
:toplevel-function 'main
:prepend-kernel t)
and here's some sample output:
~/dev/scratch$ ./opts -c -a -e
("./opts" "-c" "-a" "-e")
~/dev/scratch$ ./opts -c -d -e
("./opts" "-c" "-e")
~/dev/scratch$ ./opts -b --frogs -c -d -e -f -g -h --eye --jay -k -l
("./opts" "--frogs" "-c" "-e" "-f" "-g" "-h" "--eye" "--jay" "-k" "-l")
The -b and -d options appear to be getting lost. The documentation on command line arguments for ccl isn't very helpful. I thought maybe because ccl itself takes -b as an argument, that option might have gotten eaten for some reason, but it doesn't take -d (which is eaten), and it does take -e and -l which aren't. Nothing on saving applications seemed helpful.
I'm pretty sure it's Clozure-specific (and not, say, the shell eating them), because other stuff seems to be getting all the arguments:
#!/usr/bin/python
import sys
print sys.argv
yields
~/dev/scratch$ ./opts.py -a -b -c -d -e
['./opts.py', '-a', '-b', '-c', '-d', '-e']
and
#!/bin/bash
echo "$#"
gives
~/dev/scratch$ ./opts.sh -a -b -c -d -e
-a -b -c -d -e
This is all taking place on lubuntu 15.10 with bash as the shell.
If anyone could shed some light on why this is happening or how I can end up with all my command-line switches, I'd be appreciative.
Thanks.
According to the source code of the 1.11 release, -b and -d are options used by the lisp kernel.
Since I'm unsure about licence issues, I just provide the link to the relevant file: http://svn.clozure.com/publicsvn/openmcl/release/1.11/source/lisp-kernel/pmcl-kernel.c
Command line arguments are processed in the function process_options, where for options -b (--batch) and -d (--debug) - among others - a variable num_elide is set to 1. A bit further down, this leads to overwriting the option with the following argument (argv[k] = argv[j];).
The code also shows a possible fix: Supply -- (two dashes) once as argument before -b or -d. When above function encounters a -- it stops processing the rest of the arguments, thus leaving them unchanged to be possibly taken up into "lisp world" shortly after.
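As a sketch of that workaround with the example binary from the question (the expectation being that the kernel stops eating options once it sees the --):
./opts -- -b -d -e
The -b and -d switches should then survive into the list returned by (ccl::command-line-arguments).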
Turns out this has already been solved at SO before:
https://stackoverflow.com/a/5522169/1116364

Pipe output of cat to cURL to download a list of files

I have a list of URLs in a file called urls.txt. Each line contains one URL. I want to download all of the files at once using cURL. I can't seem to get the right one-liner down.
I tried:
$ cat urls.txt | xargs -0 curl -O
But that only gives me the last file in the list.
This works for me:
$ xargs -n 1 curl -O < urls.txt
I'm in FreeBSD. Your xargs may work differently.
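If you want to keep the -0 from your original attempt, the input has to actually be NUL-delimited first; one way to sketch that (assuming your tr understands the \0 escape):
tr '\n' '\0' < urls.txt | xargs -0 -n 1 curl -O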
Note that this runs sequential curls, which you may view as unnecessarily heavy. If you'd like to save some of that overhead, the following may work in bash:
$ mapfile -t urls < urls.txt
$ curl ${urls[@]/#/-O }
This saves your URL list to an array, then expands the array with options to curl to cause the targets to be downloaded. The curl command can take multiple URLs and fetch all of them, recycling the existing connection (HTTP/1.1), but it needs the -O option before each one in order to download and save each target. Note that some characters within the URLs may need to be escaped to avoid interacting with your shell.
Or if you are using a POSIX shell rather than bash:
$ curl $(printf ' -O %s' $(cat urls.txt))
This relies on printf's behaviour of repeating the format pattern to exhaust the list of data arguments; not all stand-alone printfs will do this.
Note that this non-xargs method also may bump up against system limits for very large lists of URLs. Research ARG_MAX and MAX_ARG_STRLEN if this is a concern.
A very simple solution would be the following:
If you have a file 'file.txt' like
url="http://www.google.de"
url="http://www.yahoo.de"
url="http://www.bing.de"
Then you can use curl and simply do
curl -K file.txt
And curl will fetch all URLs contained in your file.txt!
So if you have control over your input-file-format, maybe this is the simplest solution for you!
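If you also want each download saved under its remote file name (the effect of -O), the config file can carry that too; a sketch, assuming a curl new enough to have --remote-name-all (7.19.0 or later):
remote-name-all
url = "http://www.google.de"
url = "http://www.yahoo.de"
url = "http://www.bing.de"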
Or you could just do this:
cat urls.txt | xargs curl -O
You only need to use the -I parameter when you want to insert the cat output in the middle of a command.
xargs -P 10 | curl
GNU xargs -P can run multiple curl processes in parallel. E.g. to run 10 processes:
xargs -P 10 -n 1 curl -O < urls.txt
This will speed up the download up to 10x if your maximum download speed is not already reached and if the server does not throttle IPs, which is the most common scenario.
Just don't set -P too high or your RAM may be overwhelmed.
GNU parallel can achieve similar results.
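For example, a roughly equivalent GNU parallel invocation would be (a sketch; -j sets the number of parallel jobs):
parallel -j 10 curl -O {} < urls.txt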
The downside of those methods is that they don't use a single connection for all files, which is what curl does if you pass multiple URLs to it at once, as in:
curl -o out1.txt http://example.com/1 -o out2.txt http://example.com/2
as mentioned at https://serverfault.com/questions/199434/how-do-i-make-curl-use-keepalive-from-the-command-line
Maybe combining both methods would give the best results? But I imagine that parallelization is more important than keeping the connection alive.
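One way to sketch such a combination with xargs alone is to hand each curl a batch of URLs (so it can reuse one connection within the batch) while still running several processes in parallel; --remote-name-all applies the -O behaviour to every URL in the batch, and the batch size of 20 is arbitrary:
xargs -P 5 -n 20 curl --remote-name-all < urls.txt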
See also: Parallel download using Curl command line utility
Here is how I do it on a Mac (OSX), but it should work equally well on other systems:
What you need is a text file that contains your links for curl
like so:
http://www.site1.com/subdirectory/file1-[01-15].jpg
http://www.site1.com/subdirectory/file2-[01-15].jpg
.
.
http://www.site1.com/subdirectory/file3287-[01-15].jpg
In this hypothetical case, the text file has 3287 lines and each line encodes 15 pictures.
Let's say we save these links in a text file called testcurl.txt on the top level (/) of our hard drive.
Now we have to go into the terminal and enter the following command in the bash shell:
for i in `cat /testcurl.txt` ; do curl -O "$i" ; done
Make sure you are using back ticks (`)
Also make sure the flag (-O) is a capital O and NOT a zero
With the -O flag, each file is saved under its original (remote) filename.
Happy downloading!
As others have rightly mentioned:
cat urls.txt | xargs -n1 curl -O
However, this paradigm is a very bad idea, especially if all of your URLs come from the same server: you're not only going to be spawning a new curl process for each request, but will also be establishing a new TCP connection for each one, which is highly inefficient, and even more so with the now-ubiquitous HTTPS.
Please use this instead:
cat urls.txt | wget -i/dev/fd/0
Or, even simpler:
wget -i/dev/fd/0 < urls.txt
Simplest yet:
wget -i urls.txt
