Is there a way to put comments in a unix command line?

I'm writing a program (in Python) that calls a separate program (via subprocess). I'm finding that in some cases the sub-program gets stuck while running. I can see the sub-program by running top, and if I press "c", I can see the full command line.
What I want is to be able to stick debugging data (like the current thread ID, etc.) in the command line when I'm calling the sub-program, so I can further debug my problem.
Is there a way to put comments in command-line arguments such that they show up in top?

I can't think of a direct way, but you could write a little shell script to which you pass the actual command to run plus its arguments and the debugging information. The extra information would then show up in the top/ps output.
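As a rough sketch of that idea (the wrapper name, tag format, and usage line are just examples, not an existing tool), the wrapper keeps the debugging text in its own command line while the real command runs as its child:
#!/bin/sh
# debugwrap (example name): $1 is a free-form debug tag whose only purpose is
# to be visible in this wrapper's ps/top command line; the remaining
# arguments are the real command to run.
# Usage: debugwrap "thread=7 job=42" some-cmd --flag value
shift                    # drop the tag from the argument list...
"$@"                     # ...and run the real command as a child process, so
exit $?                  # the wrapper (and its tag) stays visible in ps/top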

Instead of making them comments, put them in the environment. For example, if you have a /proc file system, you could do:
FOO=value cmd
When top shows the pid of the command, do:
tr '\000' '\012' < /proc/pid/environ | grep FOO
to see the value of FOO in the environment of the cmd. If the values contain newlines, you will need to be more careful about the display, something like:
perl -n0E 'say if /FOO/' /proc/pid/environ

Related

ZSH vi normal mode to move around printed text

Can I use zsh vi normal mode to move around previous commands' output or the printed text in the shell to copy/yank it?
For example, I want to move to the output of ls to copy something. When I press j/k, zsh cycles through my command history but doesn't move up to the printed text. j/k only move one line down/up when I have a multi-line command that I'm currently writing but haven't executed yet.
To the best of my knowledge, the ability to access the output of commands interactively is the domain of your terminal (emulator), not the shell. You would use commands like sed, awk, or grep, possibly in a pipe, to access, manipulate, and use output you know in advance is the part you are interested in.
To access the output with keyboard shortcuts/command keys, I suggest using the likes of tmux: it allows you to copy/yank from the whole terminal display as if it were a text file in an editor.
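For example (a sketch; the key bindings listed are tmux's defaults in vi mode and the config line is optional):
# ~/.tmux.conf: use vi-style keys in copy mode
setw -g mode-keys vi
# Inside tmux:  prefix [   enters copy mode (the prefix is Ctrl-b by default)
#               h/j/k/l    move around the scrollback, including old output
#               Space      starts a selection, Enter copies it
#               prefix ]   pastes the copied text at the shell prompt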

Can't edit command in Zsh

I have a custom prompt in Zsh. At the end of it, I colour the last character red or green depending on whether the last command succeeded or failed. However, when I do this, I can't go back and edit previous commands.
This is the prompt code:
%{%F%(?.$fg[green].$fg[red])>%f%}
An example workflow:
I enter a command that wraps onto a new line:
> printf "%s\n" "This is a very long printf. How long is it? It's so very very long that it wraps onto the next line."
After this runs, I hit up arrow and modify the command by deleting and retyping "is it". Now, the command line shows:
> printf "%s\n" "This is a very long printf. How long is it It's so very very long that it wraps onto the next line."
This prints out:
This is a very long printf. How long iis it It's so very very long that it wraps onto the next line.
I assume that I'm somehow not terminating the color codes, so that the prompt is spilling over into the actual commands I'm trying to enter. It only misbehaves when the prompt wraps around to a new line. Can anyone see what's wrong with my prompt?
I've verified that, without this snippet of code, the rest of the prompt is fine and behaves as you'd expect.
zsh is confused about how long the prompt actually is. The shell already knows that its own %F escape doesn't contribute to the on-screen length of the prompt; you don't need to wrap it in %{...%} (zsh's equivalent of bash's \[...\]):
PS1="%F%(?.$fg[green].$fg[red])>%f"
If fg contains actual terminal-specific escape sequences, then you would need %{...%}, but you wouldn't use %F at all, since that's not how %F is meant to be used. So you might actually need something like
PS1="%(?.%{$fg[green]%}.%{$fg[red]%})>%f"
But you don't need a separate array of colors; zsh has the colors built in as well:
PS1='%(?.%F{green}.%F{red})>%f '
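As a quick sanity check (a sketch assuming an interactive zsh session), set the prompt and force each branch of the %(?..) ternary:
PS1='%(?.%F{green}.%F{red})>%f '
true     # the next prompt's ">" should be green
false    # the next prompt's ">" should be red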

R command history: how to configure up-arrow to treat "multiline, brace-enclosed input" as one line?

This question is about configuring the R console to behave like a bash shell when it comes to navigating the command history. It is somewhat related to R's ?history help page. For brace-enclosed multi-line input, I'd like to configure R's command history navigation to be similar to bash's.
Presently when running R in an xterm under Linux, using the up-arrow to navigate the command history causes each previous line to be recalled one by one, even if a set of lines had been enclosed in braces. This occurs, for example, when copy/pasting a multi-line function from a text editor into the R console. Not so with bash.
Here is an example of how bash functions in this regard:
In a bash shell within an xterm under Linux, after typing the following five lines...
a=1
{
x=1
y=1
}
... the first press of the up-arrow will recall a single-line reformulation of the brace-enclosed commands, like this ...
{ x=1; y=1; }
... and the second press will recall this ...
a=1
It seems that in R, the up-arrow navigates backwards one line at a time, regardless of encapsulation. Is there a way to configure R so that its command history navigation functions like bash's?
You could use rlwrap. I use it for other console programs and it works very well. You will need to prepend the R command with the rlwrap binary and then your history lines can be restored in a number of ways, including multi-line matching.
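For example (a sketch; check your rlwrap man page for the exact options supported by your version):
rlwrap R        # run the R console with readline editing and history
rlwrap -m R     # -m additionally enables rlwrap's multi-line editing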
Workaround for Linux/Unix
Similarly to RStudio (thanks to Ari B. Friedman's comment), where the user presses Shift-Enter in the R console to bypass RETURN, you can start a new line in the R terminal without submitting the command by pressing Ctrl-V Ctrl-J. This way the multi-line command will be accepted into the history as a one-liner with line feeds instead of carriage returns, and you will even have the chance to edit it. You can even set up a custom key combination for this action in your .inputrc file.
I do not think direct reconfiguration of R is possible.
The readline man page may help more.

typeset functions location

When I use the command typeset -f in ksh, a list of functions with their definitions is displayed on stdout.
I tried to search for where those functions are defined, but I couldn't find any hint about them. Can anyone help me find them?
EDIT
I'm just learning the use of the typeset command; typing man typeset gave me nothing (no manual entry for typeset).
In order to define functions that will be displayed by typeset -f, we need to define a function and export it using typeset -xf.
Functions can be declared in the .profile, in files called from .profile, or put in a directory that is referenced by the FPATH variable (and probably other places too). Read your man ksh carefully for the order of files that are processed on startup. Search for the 'Invocation', 'Files', and 'Functions' sections.
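Here is a minimal sketch of the FPATH route (the directory and function names are just examples; in ksh93, autoload is an alias for typeset -fu):
mkdir -p ~/funcs
cat > ~/funcs/hello <<'EOF'
function hello { print "hello from an autoloaded function"; }
EOF
FPATH=~/funcs; export FPATH
autoload hello       # mark the name for autoloading from $FPATH
hello                # the first call loads the definition from ~/funcs/hello
typeset -f           # the function body now appears in this listing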
Also, there is a group of default functions that ksh sets up. So please edit your question to show the function names that you're concerned with.
IHTH
Shells don't keep a record of where functions (or aliases, or variables, etc...) are defined. Conceptually, and notwithstanding interactive usage features like shell history, shells read commands from input one at a time, execute them, and then forget them. Sometimes those commands come from interactive input, sometimes they come from scripts. Sometimes they have side effects like defining a function in the shell's environment, but the shell still doesn't remember the command or its position in the shell's input stream after it's finished executing it.

simple shell script in cygwin

#!/bin/bash
echo 'first line' >foo.xml
echo 'second line' >>foo.xml
I am a total newbie to shell scripting.
I am trying to run the above script in Cygwin. I want to be able to write one line after the other to a new file.
However, when I execute the above script, I see the following contents in foo.xml:
second line
The second time I run the script, I see in foo.xml:
second line
second line
and so on.
Also, I see the following error displayed at the command prompt after running the script:
: No such file or directory.xml
I will eventually be running this script on a Unix box; I am just trying to develop it using Cygwin. So I would appreciate it if you could point out whether this is a Cygwin oddity and, if so, whether I should avoid using Cygwin to develop such scripts.
Thanks in advance.
Run dos2unix on your shell script. That will fix the problem.
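For example (the filename is just a placeholder), you can confirm the DOS line endings first and then convert the file in place:
file myscript.sh         # typically reports "... with CRLF line terminators"
dos2unix myscript.sh     # rewrites the file with Unix (LF) line endings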
I had the same kind of problem as the original poster: A very simple script file was not working in Cygwin.
Thanks to Don Branson for the clue.
The fix for me was built into the text editor I'm using. (Most programmers' editors have a feature like this.) For example, in my case I'm using Notepad++, which has a menu item to convert the file's line endings to Unix style. From the menu: [Edit]->[EOL Conversion]->[Unix (LF)]
Then the script behaved as expected.
But there must be something else that is wrong here. When I try it, it works as expected.
> foo.xml puts the line into foo.xml, replacing any previous contents.
>> foo.xml appends to the file.
It sounds like you may have a typo somewhere. Also keep in mind that while the Windows command prompt can be forgiving about paths with embedded spaces, Cygwin's shells will not be, so if you have a filename that contains embedded spaces, you need to either quote the filename or escape the spaces:
echo 'first line' > 'My File.txt'
echo 'first line' > My\ File.txt
The same goes for certain "special" characters, including quotes, ampersands (&), semicolons (;), and generally most punctuation other than the period/full stop (.).
So if you are seeing these issues with the exact script that you are running (i.e. you copied and pasted it, so there is no possibility of transcription errors), then something truly strange may be happening that I can't explain. Otherwise, there may be a misplaced space or unquoted character somewhere.
I cannot reproduce your results. The script you quote looks correct, and indeed works as expected in my installation of Cygwin here, producing the file foo.xml containing the lines first line and second line. This implies that what you are actually running differs from what you quoted in some way that is causing the problem.
The error message implies some sort of problem with the filename in the first echo line. Do you have some nonprintable characters in the script you are running? Have you missed escaping a space in the filename? Are you substituting shell variables and mistyping the name of the variable, or failing to escape the resulting string?
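If you want to check for stray nonprintable characters (the filename is just a placeholder), make them visible:
cat -A myscript.sh         # GNU cat: a ^M before the end-of-line $ means CRLF endings
od -c myscript.sh | head   # dumps every byte, including \r and other oddities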
The above should work normally.
However, you can always use a heredoc instead:
#!/bin/bash
cat <<EOF > foo.xml
first line
second line
EOF
