How to redirect TO stdout? - unix

I have a UNIX application written in ANSI C that writes data directly to a file. The file is specified by one of the command-line arguments.
For testing purposes, I can use /dev/null for the filename, which effectively redirects the output to nothing.
I would like to be able to redirect the output to stdout by a similar method. Is this possible? If so, how? I've tried the following with no luck:
a.out -f /dev/ttys000
(where /dev/ttys000 was the tty specified by a 'w' listing)

/dev/stdout

You could detect the string "stdout" as the filename argument and then write to the stdout file handle in C:
http://en.wikipedia.org/wiki/File_descriptor
Or use /dev/stdout or /dev/fd/1
If this is a built-in feature rather than a temporary thing for testing, you might want to write to stdout through the standard C functions rather than through the device node, as the C standard is a bit more widely supported than the POSIX device paths, in my opinion.
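For example, here is a minimal sketch of that idea in C (the "-" and "stdout" filename conventions, and everything else in this snippet, are my assumptions about the program rather than its actual code):
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    FILE *out = stdout;                      /* default: write to stdout */
    const char *path = (argc > 1) ? argv[1] : NULL;
    /* Treat "-" or the literal word "stdout" as "write to stdout";
       anything else (including /dev/null) is opened as a regular file. */
    if (path != NULL && strcmp(path, "-") != 0 && strcmp(path, "stdout") != 0) {
        out = fopen(path, "w");
        if (out == NULL) {
            perror("fopen");
            return 1;
        }
    }
    fprintf(out, "hello from the application\n");
    if (out != stdout)
        fclose(out);
    return 0;
}
The real program parses a -f option, so the check would go wherever that argument is read.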

Related

terraform plan output redirection to file does not capture error messages

I had previously captured terraform plan output to a file using Unix output redirection, like this:
tf plan -no-color > plan.txt
When my plan (or apply) has errors, the output text file is empty, although I see the error messages in the terminal.
How can I capture the output even when there are errors?
If you want to redirect everything you can do the following:
tf plan -no-color > plan.txt 2>&1
I found that this is because Unix programs write to two separate streams: stdout (standard output) and stderr (standard error).
2> redirects stderr.
&> redirects both stdout and stderr.
1> or > redirects stdout.
Eventually I used this command to redirect to two separate files, which is what I preferred:
tf plan -no-color 2> err.txt 1> out.txt
More info
My initial command,
tf plan -no-color > plan.txt
only redirects stdout to the file. So when the plan had errors, there was no stdout output: the redirection truncated plan.txt to an empty file, and the error messages went to stderr instead.
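To see the two streams in isolation, here is a small illustrative C program (my own example, unrelated to Terraform) that writes one line to stdout and one to stderr; ./a.out > out.txt captures only the first line, ./a.out > out.txt 2>&1 captures both, and ./a.out 2> err.txt 1> out.txt splits them into two files:
#include <stdio.h>

int main(void)
{
    /* Written to stdout: captured by "> out.txt". */
    printf("normal output\n");
    /* Written to stderr: only captured if stderr is also redirected,
       e.g. with "2>&1" or "2> err.txt". */
    fprintf(stderr, "error message\n");
    return 0;
}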
Terraform has a built-in environment variable, TF_LOG_PATH, that sends all output, including errors, to a specific file, so you don't need to redirect the output yourself.
example:
export TF_LOG_PATH=/mydirectory/mylogfile.log
terraform plan
Source: https://www.terraform.io/docs/internals/debugging.html

Makefile wildcard function evaluates to true against empty string

Using GNU Make 4.2.1
I'm writing a Makefile and I want to use a conditional to have Make check whether it is running on a specific remote server. I'd like to do this using the Unix HOSTNAME environment variable. I also want it to work regardless of subdomain, so I used the Make wildcard function.
ifeq ($(wildcard *.remote.server.com),$(HOSTNAME))
echo "ON REMOTE SERVER"
else
echo "NOT ON REMOTE SERVER"
endif
This looks like it should work, but on my local machine the HOSTNAME environment variable is not set, yet the ifeq test evaluates to true and prints ON REMOTE SERVER.
This doesn't make sense to me: *.remote.server.com is being compared to an empty string, so it should evaluate to false and print NOT ON REMOTE SERVER.
Am I missing something about Unix environment variables, wildcard, Make conditionals, or all three?
Edit: Problem resolved. I learned that this is not the place to use the wildcard function. Instead, I used something similar to the following line,
ifeq "$(shell hostname | sed -e 's/.*.remote.server.com/remote.server.com/')" "remote.server.com"
Based on this response:
How to use bash regex inside Makefile Target
You could just print the values of these things to see what they are. Something like $(info wildcard='$(wildcard *.remote.server.com)' hostname='$(HOSTNAME)')? The reason for this behavior depends entirely on your local system, which we do not have access to.
However, I don't see why you say that *.remote.server.com is being compared to an empty string. You've written $(wildcard *.remote.server.com), and presumably you don't have a file that matches the glob *.remote.server.com, which means the wildcard function expands to the empty string.
I think you might be confused about what the wildcard function does: what did you expect it to do? It has nothing whatever to do with hostnames.

How does execlp work exactly?

So I am looking at the code my professor handed out to give us an idea of how to implement >, <, and | support in our Unix shell. I ran his code and was amazed at what actually happened.
if( pid == 0 )
{
    close(1);                        // close
    fd = creat( "userlist", 0644 );  // then open
    execlp( "who", "who", NULL );    // and run
    perror( "execlp" );
    exit(1);
}
This created a userlist file in the directory I was currently in, with the "who" output inside that file. I don't see where any connection between fd and execlp is being made. How did execlp manage to put the information into userlist? How did execlp even know userlist existed?
Read Advanced Linux Programming. It has several chapters related to the issue. And we cannot explain all this in a few sentences. See also the standard stream and process wikipages.
First, every system call your program makes (see syscalls(2) for a list, and read the documentation of each individual system call you use) should be checked for failure. But assume they all succeed. After close(1); the file descriptor 1 (STDOUT_FILENO) is free, so creat("userlist", 0644) is likely to re-use it, hence fd is 1; you have redirected your stdout to the newly created userlist file.
Finally, you call execlp(3), which will call execve(2). On success, your process image is replaced by the new executable (it gets a fresh virtual address space), but its open file descriptors are preserved, so its stdout is still the userlist file descriptor. In particular (unless execve fails), the perror call is never reached.
So your code does roughly what a shell does when running who > userlist: it redirects stdout to userlist and runs the who command.
If you are coding a shell, use strace(1), notably with the -f option, to understand which system calls are made. Try also strace -f /bin/sh -c ls to look at the behavior of a shell. Also study the source code of existing free-software shells (e.g. bash and sash).
See also this and the references I gave there.
execlp knows nothing about the file. Before the exec, stdout was closed and a file was opened, so the new descriptor is the one that previously corresponded to stdout (open always returns the lowest free descriptor). At that point the process has a "stdout" plugged into the file. Then exec is called, and it replaces the whole address space, but some properties remain, such as the file descriptors, so now the code of who is executed with a stdout that corresponds to the file. This is the way shells manage redirections.
Remember that when you use printf (for example) you never specify what stdout exactly is... It can be a file, a terminal, etc.
Basile Starynkevitch correctly explained:
After close(1); the file descriptor 1 (STDOUT_FILENO) is free. So creat("userlist",0644) is likely to re-use it…
This is because, as Jean-Baptiste Yunès wrote, "opens always returns the lowest free descriptor".
It should be stressed that the professor's code only likely works; it fails if file descriptor 0 is closed.
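For comparison, here is a more defensive sketch (my own illustration, not the professor's code) that uses open(2) followed by dup2(2), so the redirection works no matter which descriptors happen to be free:
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Open the target file; it may land on any free descriptor. */
        int fd = open("userlist", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            _exit(1);
        }
        /* Explicitly make descriptor 1 (stdout) refer to the file. */
        if (dup2(fd, STDOUT_FILENO) < 0) {
            perror("dup2");
            _exit(1);
        }
        close(fd);                      /* descriptor 1 still points at userlist */
        execlp("who", "who", (char *)NULL);
        perror("execlp");               /* only reached if execlp fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);              /* parent: wait for the child */
    return 0;
}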

What is the use case for this zsh precommand modifier '-'

So in zsh, do this:
$ - ls /some/non/existent/directory/blah/blah/blah
gives you
-ls: /some/non/existent/directory/blah/blah/blah: No such file or directory
Documentation:
http://zsh.sourceforge.net/Doc/Release/Shell-Grammar.html#Precommand-Modifiers
What reasonable use case does this actually have?
From zshmisc(1):
-      The command is executed with a `-' prepended to its argv[0] string.
Invoking a shell with a - prepended to its name (-sh, -bash, -zsh) is an old convention for indicating the shell should start a login session. It's up to the program itself to decide if such an invocation should mean anything. Most programs, like ls, ignore how they are called.
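For illustration, this is roughly how a program could check for that convention by inspecting argv[0] (a sketch of the idea, not code from any actual shell):
#include <stdio.h>

int main(int argc, char *argv[])
{
    (void)argc;                         /* unused */
    /* By convention, a leading '-' in argv[0] tells a shell to start a
       login session; other programs are free to ignore it. */
    if (argv[0] != NULL && argv[0][0] == '-')
        printf("invoked with a leading '-': %s\n", argv[0]);
    else
        printf("invoked normally: %s\n", argv[0]);
    return 0;
}
Compiled to a.out, running it in zsh as - ./a.out makes argv[0] start with -, so the first branch is taken; plain ./a.out takes the second.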

Redirect output to one file and errors to another in Unix

Let's say I have a command called "enjoy." I expect enjoy to produce both valid output and error messages. How do I call enjoy so that the valid output goes to one file and the error messages go to another file?
enjoy > log.txt 2> errors.txt
Assuming of course that you've used STDOUT and STDERR properly and you're using a nice shell. If you're using csh, you need to do something more complicated:
(enjoy > log.txt) >& errors.txt
This works because >& redirects both STDOUT and STDERR, but by that point STDOUT has already been redirected to log.txt. The parentheses make sure that STDOUT is long gone before the data gets anywhere near the overzealous >&.
