How do I determine if a terminal is color-capable? - unix

I would like to change a program to automatically detect whether a terminal is color-capable or not, so when I run said program from within a non-color capable terminal (say M-x shell in (X)Emacs), color is automatically turned off.
I don't want to hardcode the program to detect TERM={emacs,dumb}.
I am thinking that termcap/terminfo should be able to help with this, but so far I've only managed to cobble together this (n)curses-using snippet of code, which fails badly when it can't find the terminal:
#include <stdio.h>
#include <stdlib.h>
#include <curses.h>

int main(void) {
    int colors = 0;
    initscr();
    start_color();
    colors = has_colors() ? 1 : 0;
    endwin();
    printf(colors ? "YES\n" : "NO\n");
    exit(0);
}
I.e. I get this:
$ gcc -Wall -lncurses -o hep hep.c
$ echo $TERM
xterm
$ ./hep
YES
$ export TERM=dumb
$ ./hep
NO
$ export TERM=emacs
$ ./hep
Error opening terminal: emacs.
$
which is... suboptimal.

A friend pointed me towards tput(1), and I cooked up this solution:
#!/bin/sh
# ack-wrapper - use tput to try and detect whether the terminal is
# color-capable, and call ack-grep accordingly.
OPTION='--nocolor'
COLORS=$(tput colors 2> /dev/null)
if [ $? = 0 ] && [ "$COLORS" -gt 2 ]; then
    OPTION=''
fi
exec ack-grep $OPTION "$@"
which works for me. It would be great if I had a way to integrate it into ack, though.

You almost had it, except that you need to use the lower-level curses function setupterm instead of initscr. setupterm just performs enough initialization to read terminfo data, and if you pass in a pointer to an error result value (the last argument) it will return an error value instead of emitting error messages and exiting (the default behavior for initscr).
#include <stdio.h>
#include <stdlib.h>
#include <curses.h>
#include <term.h>

int main(void) {
    char *term = getenv("TERM");
    int erret = 0;

    /* setupterm() only loads the terminfo entry; with an error pointer
     * supplied it reports failure instead of printing and exiting. */
    if (setupterm(NULL, 1, &erret) == ERR) {
        char *errmsg = "unknown error";
        switch (erret) {
        case 1:  errmsg = "terminal is hardcopy, cannot be used for curses applications"; break;
        case 0:  errmsg = "terminal could not be found, or not enough information for curses applications"; break;
        case -1: errmsg = "terminfo entry could not be found"; break;
        }
        printf("Color support for terminal \"%s\" unknown (error %d: %s).\n", term, erret, errmsg);
        exit(1);
    }

    bool colors = has_colors();
    printf("Terminal \"%s\" %s colors.\n", term, colors ? "has" : "does not have");
    return 0;
}
Additional information about using setupterm is available in the curs_terminfo(3X) man page (x-man-page://3x/curs_terminfo) and Writing Programs with NCURSES.

Look up the terminfo(5) entry for the terminal type and check the Co (max_colors) entry. That's how many colors the terminal supports.
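For example, here is a minimal sketch of that approach (not from the original answer; it assumes ncurses' terminfo interface, i.e. setupterm() plus tigetnum(), and that the capability's terminfo name is "colors", termcap code Co):

/* sketch: query max_colors via terminfo without a full curses init
 * build: g++ -Wall -o colorcount colorcount.cpp -lncurses */
#include <cstdio>
#include <cstdlib>
#include <curses.h>
#include <term.h>

int main() {
    int erret = 0;
    if (setupterm(NULL, 1, &erret) == ERR) {
        const char *term = getenv("TERM");
        fprintf(stderr, "no usable terminfo entry for TERM=%s\n",
                term ? term : "(unset)");
        return 2;
    }
    char capname[] = "colors";        /* terminfo name for max_colors (Co) */
    int colors = tigetnum(capname);   /* -1: capability absent, -2: not numeric */
    printf("max_colors = %d\n", colors > 0 ? colors : 0);
    return colors > 0 ? 0 : 1;
}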

Related

using Qt's QProcess as popen (with ffmpeg rawvideo)

I inserted some code into a video application to export via ffmpeg reading
from stdin (rawvideo rgba format). To quickly test that it worked I used
popen(); the tests went well, and since the application is written with Qt
I thought of reworking the patch to use QProcess and ->write().
The application shows no errors and works properly, but the generated video
files are not playable with either vlc or mplayer, while those generated
with popen() work perfectly with both. I have the feeling that ->close() or
->terminate() does not properly close ffmpeg, and consequently the file,
but I don't know how to verify that, nor have I found alternative ways to
end the executed command. Besides, ->waitForBytesWritten() should wait for
the data to be written. Suggestions? Am I doing something wrong?
(Obviously I can't prepare a testable example; it would take me more time
than the patch took.)
Below is the code I added; the #else branch is the Qt code.
Initialization
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
    pipe_frame.file = popen("/tmp/ffmpeg-rawpipe.sh", "w");
    if (pipe_frame.file == NULL) {
        return false;
    }
#else
    pipe_frame.qproc = new QProcess;
    pipe_frame.qproc->start("/tmp/ffmpeg-rawpipe.sh", QIODevice::WriteOnly);
    if (!pipe_frame.qproc->waitForStarted()) {
        return false;
    }
#endif
Writing a frame
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
    fwrite(pipe_frame.data, pipe_frame.width * 4 * pipe_frame.height, 1, pipe_frame.file);
#else
    qint64 towrite = pipe_frame.width * 4 * pipe_frame.height,
           written = 0, partial;
    while (written < towrite) {
        partial = pipe_frame.qproc->write(&pipe_frame.data[written], towrite - written);
        pipe_frame.qproc->waitForBytesWritten(-1);
        written += partial;
    }
#endif
Termination
#if defined(EXPORT_POPEN) && EXPORT_POPEN == 1
    pclose(pipe_frame.file);
#else
    pipe_frame.qproc->terminate();
    //pipe_frame.qproc->close();
#endif
edit
ffmpeg-rawpipe.sh
#!/bin/sh
exec ffmpeg-cuda -y -f rawvideo -s 1920x1080 -pix_fmt rgba -r 25 -i - -an -c:v h264_nvenc \
-cq:v 19 \
-profile:v high /tmp/test.mp4
I made some changes: I added the unbuffered flag when starting the process
pipe_frame.qproc->start("/tmp/ffmpeg-rawpipe.sh", QIODevice::WriteOnly|QIODevice::Unbuffered);
and accordingly simplified the write:
qint64 towrite = pipe_frame.width*4*pipe_frame.height;
pipe_frame.qproc->write(pipe_frame.data, towrite);
pipe_frame.qproc->waitForBytesWritten(-1);
I added a closeWriteChannel() before closing the application, hoping that closing ffmpeg's stdin pipe makes it terminate cleanly (just in case; I'm not sure it does).
pipe_frame.qproc->waitForBytesWritten(-1);
pipe_frame.qproc->closeWriteChannel();
//pipe_frame.qproc->terminate();
pipe_frame.qproc->close();
But nothing changes: the mp4 file is created and contains data, but the mplayer log shows it is misinterpreted; the video format is not recognized and it looks for an audio stream that is not there.
Fixed: adding waitForFinished() after closeWriteChannel() closes stdin and waits for ffmpeg to terminate on its own.
pipe_frame.qproc->waitForBytesWritten(-1); // perhaps not necessary
pipe_frame.qproc->closeWriteChannel();
pipe_frame.qproc->waitForFinished();
pipe_frame.qproc->close();
edit
Note: even when opened with the unbuffered flag, QProcess and QIODevice seem to buffer quite a lot; it seems as if waitForBytesWritten() is not working, and if you are feeding HD video you will run out of memory very quickly.
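If the growing backlog is the problem, one workaround (a sketch, not from the original post; the helper name and the 8 MB limit are my own assumptions) is to throttle on QProcess::bytesToWrite(), which reports how much data is still queued in the write buffer:

#include <QtCore/QProcess>

// Sketch: block until the QProcess write buffer has drained below `limit`
// bytes, so frames are not queued faster than ffmpeg consumes them.
static bool drainWriteBuffer(QProcess *proc, qint64 limit = 8 * 1024 * 1024)
{
    while (proc->bytesToWrite() > limit) {
        // waitForBytesWritten() returns false if the process died or the
        // write channel was closed; bail out instead of spinning forever.
        if (!proc->waitForBytesWritten(-1))
            return false;
    }
    return true;
}

// hypothetical usage, after writing one frame:
//     pipe_frame.qproc->write(pipe_frame.data, towrite);
//     if (!drainWriteBuffer(pipe_frame.qproc)) { /* handle broken pipe */ }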

Qt output (qDebug qWarning etc) does not work if application is executed via cronjob

I created a reproduction sample for this:
#include <iostream>
#include <QtCore/QLoggingCategory>
#include <QtCore/QDebug>
#include <QtCore/QtCore>
using namespace std;
int main() {
    int i;
    QLoggingCategory::setFilterRules("*.debug=true\n");
    QLoggingCategory LogO(NULL);
    if (LogO.isDebugEnabled()) {
        cout << "QDebug enabled\n";
    } else {
        cout << "QDebug disabled!\n";
    }
    cout << "Start!\n";
    qDebug() << "qStart!";
    cerr << "print to stderr.\n";
    qWarning() << "qWarning";
    return 0;
}
Build steps:
g++ -c -fPIC -I/usr/include/qt5 main.cpp -o main.o
g++ -fPIC main.o -L /usr/lib64 -lQt5Core -o testapp
When executing the application in an interactive shell, output redirection works as expected:
Setup:
./testapp > out 2> err
Output:
>>cat out:
QDebug enabled
Start!
>>cat err:
qStart!
print to stderr.
qWarning
However, it does not work if the application is executed as a cronjob; the output of qDebug() and qWarning() is missing:
Setup:
* * * * * username /home/username/temp/build/testapp 1> /home/username/temp/log/out 2> /home/username/temp/log/err
Output:
>>cat /home/username/temp/log/out
QDebug enabled
Start!
>>cat /home/username/temp/log/err
print to stderr.
Environment vars
The output of env in the interactive shell is as follows:
LS_COLORS=*long string*
SSH_CONNECTION=*censored*
LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=*censored*
XDG_SESSION_ID=492
USER=username
SELINUX_ROLE_REQUESTED=
PWD=/home/username/temp/build
HOME=/home/username
SSH_CLIENT=*censored*
SELINUX_LEVEL_REQUESTED=
SSH_TTY=/dev/pts/0
MAIL=/var/spool/mail/username
TERM=xterm
SHELL=/bin/bash
SELINUX_USE_CURRENT_RANGE=
SHLVL=1
LOGNAME=username
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
XDG_RUNTIME_DIR=/run/user/1000
PATH=/usr/lib64/ccache:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/username/.local/bin:/home/username/bin
HISTSIZE=1000
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env
OLDPWD=/home/username/temp/build/logs
The output of env when called via cronjob is as follows:
LS_COLORS=*long string*
LANG=en_US.UTF-8
HISTCONTROL=ignoredups
HOSTNAME=*censored*
XDG_SESSION_ID=995
USER=username
PWD=/home/username
HOME=/home/username
MAIL=/var/spool/mail/username
TERM=xterm
SHELL=/bin/bash
SHLVL=1
LOGNAME=username
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
XDG_RUNTIME_DIR=/run/user/1000
PATH=/usr/lib64/ccache:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/username/.local/bin:/home/username/bin
LESSOPEN=||/usr/bin/lesspipe.sh %s
_=/usr/bin/env
The issue is that Qt behaves differently depending on whether it thinks it is running in an (interactive?) terminal or not.
Quote:
One pitfall to be aware of: the destination of logging depends on an environment variable. If the variable QT_LOGGING_TO_CONSOLE is set to 1, the message functions will always log to the console. If set to 0, they will not log to the console, and will instead log to syslog, if enabled. When the environment variable is not set, the message functions log to a console if one is present (i.e. if the program is attached to a terminal). Thus, to ensure that the output of our example program goes to syslog, I set the environment variable to 0 within the program.
Therefore, the output of qDebug, qWarning, etc. when executed from cron was not written to stderr, but handed directly over to journald.
TL;DR: quickfix: add QT_LOGGING_TO_CONSOLE=1 to /etc/crontab
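If you would rather force this from inside the application than edit the crontab, setting the variable at the top of main() should have the same effect, per the quoted behavior (a sketch; qputenv() is part of QtCore, and the variable has to be set before the first qDebug()/qWarning() call):

#include <QtCore/QtGlobal>
#include <QtCore/QDebug>

int main() {
    // Force console (stderr) logging even when no terminal is attached,
    // e.g. when the program is started by cron.
    qputenv("QT_LOGGING_TO_CONSOLE", "1");

    qDebug() << "goes to stderr, even under cron";
    qWarning() << "so does this";
    return 0;
}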
PS: two notes if you need to debug an issue with QDebug:
be aware of this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1227295
you can set QT_LOGGING_DEBUG=1 as an environment variable to make Qt report changes in logging behavior during execution.
By default, cron redirects standard output and standard error to an email address.
Add >> /tmp/myscript.log 2>&1 to your crontab entry.
See this answer.

SCP always returns the same error code

I have a problem copying files with scp. I use Qt and copy my files with scp using QProcess, and when something bad happens I always get exitCode=1; it always returns 1. I tried copying files from a terminal: the first time I got the error "Permission denied" and the exit code was 1. Then I unplugged my Ethernet cable and got the error "Network is unreachable", and the return code was still 1. It confuses me very much because in my application I have to distinguish between these types of errors.
Any help is appreciated. Thank you so much!
See this code as a working example:
bool Utility::untarScript(QString filename, QString& statusMessages)
{
    // Untar tar-bzip2 file, only extract script to temp-folder
    QProcess tar;
    QStringList arguments;
    arguments << "-xvjf";
    arguments << filename;
    arguments << "-C";
    arguments << QDir::tempPath();
    arguments << "--strip-components=1";
    arguments << "--wildcards";
    arguments << "*/folder.*";

    // tar -xjf $file -C $tmpDir --strip-components=1 --wildcards
    tar.start("tar", arguments);

    // Wait for tar to finish
    if (tar.waitForFinished(10000) == true)
    {
        if (tar.exitCode() == 0)
        {
            statusMessages.append(tar.readAllStandardError());
            return true;
        }
    }
    statusMessages.append(tar.readAllStandardError());
    statusMessages.append(tar.readAllStandardOutput());
    statusMessages.append(QString("Exitcode = %1\n").arg(tar.exitCode()));
    return false;
}
It gathers all available process output for you to analyse. Especially look at readAllStandardError().
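Applied to scp, the same pattern lets you tell the failure modes apart by the stderr text instead of the exit code. A sketch (the function name, the 30-second timeout, and the matched strings are assumptions based on typical OpenSSH messages, not something scp guarantees):

#include <QtCore/QProcess>
#include <QtCore/QString>
#include <QtCore/QStringList>

enum class ScpError { None, Permission, Network, Timeout, Other };

ScpError copyWithScp(const QString &source, const QString &destination)
{
    QProcess scp;
    scp.start("scp", QStringList() << source << destination);
    if (!scp.waitForFinished(30000))
        return ScpError::Timeout;          // did not finish within 30 s

    if (scp.exitCode() == 0)
        return ScpError::None;

    // scp reports most failures with exit code 1, so classify by stderr.
    const QString err = QString::fromLocal8Bit(scp.readAllStandardError());
    if (err.contains("Permission denied"))
        return ScpError::Permission;
    if (err.contains("Network is unreachable") || err.contains("No route to host"))
        return ScpError::Network;
    return ScpError::Other;
}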

64-bit ELF yielding unexplainable results

Can someone explain why the following code yields different results on the second printf if I comment the first printf line or not, in 64 bits?
/* gcc -O0 -o test test.c */
#include <stdio.h>
#include <stdlib.h>

int main() {
    char a[20] = {0};
    char b = 'a';
    int count = -1;

    // printf("%.16llx %.16llx\n", a, &b);
    printf("%x\n", *(a+count));
    return 0;
}
I get the following results for the second printf:
commented: 0
uncommented: 61
Thanks in advance!
Can someone explain why the following code yields different results on the second printf if I comment the first printf line or not
Your program uses a[-1], and thus exhibits undefined behavior. Anything can happen, and figuring out exactly why one or the other thing happens is pointless.
The precise reason is that you are reading memory that gets written to by the first printf (when commented in).
I get a different result (which is expected with undefined behavior):
// with first `printf` commented out:
ffffffff
// with it commented in:
00007fffffffdd20 00007fffffffdd1b
ffffffff
You could see where that memory is written to by setting a GDB watchpoint on it:
(gdb) p a[-1]
$1 = 0 '\000'
(gdb) p &a[-1]
$2 = 0x7fffffffdd1f ""
(gdb) watch *(int*)0x7fffffffdd1f
Hardware watchpoint 4: *(int*)0x7fffffffdd1f
(gdb) c
Continuing.
Hardware watchpoint 4: *(int*)0x7fffffffdd1f
Old value = 0
New value = 255
main () at t.c:12
12 printf("%.16llx %.16llx\n", a, &b);
In my case above, the value is written as part of initializing count=-1. That is, with my version of gcc, count is located just before a[0]. But this may depend on the compiler version, exactly how the compiler was built, etc.
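To see the layout your own compiler chose, you can print the addresses directly (purely illustrative; with a different compiler or optimization level the variables may be placed elsewhere):

// build: g++ -O0 -o layout layout.cpp
#include <cstdio>

int main() {
    char a[20] = {0};
    char b = 'a';
    int count = -1;

    printf("a      = %p\n", (void *)a);
    printf("&b     = %p\n", (void *)&b);
    printf("&count = %p\n", (void *)&count);
    // If (char *)(&count + 1) == a, the int sits right below the array and
    // a[-1] reads its last byte (0xff in my case, since count is -1).
    return 0;
}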

Unix If file exists, rename

I am working on a UNIX task where I want to check whether a particular log file is present in a directory. If it is present, I would like to rename it by appending a timestamp. The file name format is: ServiceFileName_0.log
This is what I have so far, but it doesn't rename the file when I run the script, even though a file named ServiceFileName_0.log is present.
renameLogs()
{
    # If a ServiceFileName log exists, rename it
    if [ -f $MY_DIR/logs/ServiceFileName_0.log ];
    then
        mv ServiceFileName_0.log ServiceFileName_0.log.%M%H%S
    fi
}
Please help! Thanks.
renameLogs()
{
    if [ -f $MY_DIR/logs/ServiceFileName_0.log ]
    then mv $MY_DIR/logs/ServiceFileName_0.log $MY_DIR/logs/ServiceFileName_0.log.$(date +%M%H%S)
    fi
}
Use the directory prefix consistently. Also you need to specify the time properly, as shown.
Better, though (less repetition):
renameLogs()
{
    logfile="$MY_DIR/logs/ServiceFileName_0.log"
    if [ -f "$logfile" ]
    then mv "$logfile" "$logfile.$(date +%H%M%S)"
    fi
}
NB: I've reordered the format from MMHHSS to the more conventional HHMMSS order. If you work with date components too, you should seriously consider using the ordering recommended by ISO 8601, which is [YYYY]mmdd. It groups all the log files for a month together in an ls listing, which is usually helpful. Using ddmm order means that the files for the first of each month are grouped together, then the files for the second of each month, etc. This is usually less desirable.
You might need to prefix the file name with the $MY_DIR path, just like you did in the test.
You could replace this:
mv ServiceFileName_0.log ServiceFileName_0.log.%M%H%S
with this
mv $MY_DIR/logs/ServiceFileName_0.log $MY_DIR/logs/ServiceFileName_0.log.%M%H%S
This isn't your apparent immediate problem, but the if construct is wrong: it introduces a time-of-check to time-of-use race condition. In between the if [ -f check and the mv, some other process could come along and change things so you can't move the file anymore, even though the check succeeded.
To avoid this class of bugs, always write code that starts by attempting the operation you want to do, then, if it failed, figures out why. In this case, what you want is to do nothing if the source file didn't exist, but report an error if the operation failed for any other reason. There is no good way to do that in portable shell; you need something that lets you inspect errno. I'd probably write this C helper:
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s source destination\n", argv[0]);
        return 2;
    }
    if (rename(argv[1], argv[2]) && errno != ENOENT) {
        fprintf(stderr, "rename '%s' to '%s': %s\n",
                argv[1], argv[2], strerror(errno));
        return 1;
    }
    return 0;
}
and then use it like so:
renameLogs()
{
    ( cd "$MY_DIR/logs"
      rename_if_exists ServiceFileName_0.log ServiceFileName_0.log.$(date +%M%H%S)
    )
}
The ( cd construct fixes your immediate problem, and unlike the other suggestions, avoids another race in which some other process comes along and messes with the logs directory or its parent directories.
Obligatory shell scripting addendum: Always enclose variable expansions in double quotes, except in the rare cases where you want the expansion to be subject to word splitting.
