How to tail/stream syslog in NixOS

I was looking here:
https://github.com/NixOS/nixos/blob/master/modules/services/logging/syslogd.nix
I tried a simple:
tail -f /var/log/syslog
but that yields:
tail: cannot open '/var/log/syslog' for reading: No such file or directory

Did you enable syslogd? By default, system logging is done by journald.
Assuming you enabled syslogd
Look at the default configuration in the syslogd service definition to see where it logs; /var/log/syslog currently appears neither there nor in the manpage.
Assuming journald
Use
journalctl -f
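A few other journalctl invocations cover the usual tail/grep workflows (the unit name sshd here is just an example):
journalctl -f                    # follow new messages, the tail -f equivalent
journalctl -u sshd -f            # follow a single service's messages
journalctl -k                    # kernel messages only, like dmesg
journalctl --since "1 hour ago"  # recent history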

Related

Rsync command not working

I am trying to run rsync as follows and am running into the error sshpass: Failed to run command: No such file or directory. I verified that the source /local/mnt/workspace/common/sectool and destination /prj/qct/wlan_rome_su_builds directories are available and accessible. What am I missing, and how do I fix this?
username@xxx-machine-02:~$ sshpass -p 'password' rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
sshpass: Failed to run command: No such file or directory
Could you check whether 'rsync' works without 'sshpass'?
Also, check whether the ports used by rsync are enabled. You can find the port info via cat /etc/services | grep rsync
The first thing is to make sure the ssh connection works smoothly. You can check this via "sudo ssh -vvv cnssbldsw@hydwclnxbld4" (please post the output). If you receive a message such as "ssh: connect to host hydwclnxbld4 port 22: Connection refused", the issue is with the openssh-server (not installed, or a broken package). Let's see what you get for the first command.
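To make those checks concrete, here is the sequence you could run (user, host, and paths are taken straight from the question; note the plain ASCII hyphens in the options):
# 1. Confirm the ssh connection itself works
ssh -vvv cnssbldsw@hydwclnxbld4
# 2. Confirm rsync works without sshpass
rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
# 3. Only then wrap it in sshpass
sshpass -p 'password' rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds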

Error happens when trying to umount the Lustre file system

When I umount the Lustre FS, it displays:
[root@cn17663-ens4 mnt]# umount /mnt/lustre
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
and if I add the force option -f it gives the same result:
[root@cn17663-ens4 mnt]# umount /mnt/lustre -f
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
When I try to list the directory it gives me :
[root@cn17663-ens4 mnt]# ls
ls: cannot access lustre: Cannot send after transport endpoint shutdown
lustre
and I cannot find the reason, so I cannot solve it.
Did you actually try running lsof /mnt/lustre (as the error message recommends) to see what is using the filesystem? This problem is not unique to Lustre; it is true of any local filesystem as well. If a process is using the filesystem (as its current working directory, or via an open file), the filesystem can't be unmounted until that process stops using it (cd out of /mnt/lustre or close the open file(s)).
I find I can use umount -l /mnt/xx to solve this problem!
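Spelling both answers out as commands (the mount point is the question's):
# Who is holding the mount point busy?
lsof /mnt/lustre
fuser -vm /mnt/lustre
# Optionally kill those processes (fuser -k sends SIGKILL by default)
fuser -km /mnt/lustre
# A normal unmount should now succeed
umount /mnt/lustre
# Last resort: a lazy unmount detaches immediately and cleans up once unused
umount -l /mnt/lustre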

Forwarding logs from file to journald

I have an application on an isolated machine. It writes logs to /var/log/app/log.txt, for example. However, I want it to write logs to the journald daemon, and I can't change the way the application is run, because it is encapsulated.
I mean I cannot do something like app | systemd-cat
1) Am I right that all services started with systemd write logs to journald?
2) If so, will the children of a process started by systemd also write logs to journald?
3) Is there any way to tell journald to take logs from a specific file?
4) If not, are there any workarounds?
Warning: this is not tested.
You could bind-mount /dev/stdout onto the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/mount --bind /dev/stdout /var/log/app/log.txt
Or symlink /dev/stdout to the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/ln -s /dev/stdout /var/log/app/log.txt
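For context, here is what the bind-mount variant could look like as a full unit. This is an untested sketch, like the idea itself: the unit name and the ExecStart path /opt/app/run are hypothetical, and mount(8) requires the unit to run as root:
# /etc/systemd/system/app.service (hypothetical name; untested sketch)
[Unit]
Description=App with its file log redirected to the journal

[Service]
# The target file must exist before it can be bind-mounted over
ExecStartPre=/usr/bin/touch /var/log/app/log.txt
ExecStartPre=/usr/bin/mount --bind /dev/stdout /var/log/app/log.txt
ExecStart=/opt/app/run
# Undo the bind mount when the service stops
ExecStopPost=/usr/bin/umount /var/log/app/log.txt

[Install]
WantedBy=multi-user.target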
4) I can only try to help with a workaround:
MY_LOG_FILE=/var/log/app/log.txt
MY_IDENTIFIER="my_app_name" # just a label for later searching in journalctl
# Create a FIFO (named pipe)
PIPE=/tmp/my_fifo_pipe
mkfifo $PIPE
# Start logging to the journal; systemd-cat blocks until a writer opens the pipe
systemd-cat -t $MY_IDENTIFIER -p info < $PIPE &
# Keep a write end open on fd 3 so systemd-cat never sees EOF between writers
exec 3>$PIPE
# Feed the application's log file into the pipe
tail -f $MY_LOG_FILE > $PIPE &
# When shutting down: close fd 3 (once tail stops too, the fifo is closed)
exec 3>&-
This is the basic idea; you should now think about timing: when this needs to be started, and when it should be stopped.
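To address that start/stop timing, here is a sketch of the same workaround wrapped with explicit cleanup; paths and the identifier reuse the question's examples, and this is equally untested:
#!/bin/sh
MY_LOG_FILE=/var/log/app/log.txt
PIPE=/tmp/my_fifo_pipe
MY_IDENTIFIER="my_app_name"
mkfifo -m 600 "$PIPE" || exit 1
# On exit, stop both background processes and remove the FIFO
trap 'kill $CAT_PID $TAIL_PID 2>/dev/null; rm -f "$PIPE"' EXIT INT TERM
systemd-cat -t "$MY_IDENTIFIER" -p info < "$PIPE" &
CAT_PID=$!
# -F keeps following across log rotation
tail -F "$MY_LOG_FILE" > "$PIPE" &
TAIL_PID=$!
wait $TAIL_PID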

How to log puppet agent and master

Puppet writes its logs to syslog by default. Why is that? Most software writes to its own separate logfile. I checked the documentation: it mentions that you can write to a log file, but notes that "This is generally not used." Is it a bad idea?
What is the typical setup for following Puppet's logging? Using grep on the /var/log/messages file?
Since you mentioned syslog, I assume you are talking about a Debian-like Linux.
Actually there is no need to write your own log facility; customizing /etc/default/puppet is enough:
# Startup options
DAEMON_OPTS="--logdest /var/log/puppet/puppet.log"
/etc/default/puppet is sourced by /etc/init.d/puppet, so the options you add here take effect when the puppet service is started.
Docs about the --logdest option: https://docs.puppetlabs.com/references/3.3.1/man/apply.html#OPTIONS
By the way, the puppet deb package for Debian (or Ubuntu) even includes a logrotate configuration file for /var/log/puppet; I don't know why this option is not the default:
/var/log/puppet/*log {
    missingok
    sharedscripts
    create 0644 puppet puppet
    compress
    rotate 4
    postrotate
        pkill -USR2 -u puppet -f 'puppet master' || true
        [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true
    endscript
}
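If you add such a file yourself, logrotate can dry-run it before you rely on it (the path /etc/logrotate.d/puppet is an assumption):
logrotate -d /etc/logrotate.d/puppet   # debug mode: show what would be done
logrotate -f /etc/logrotate.d/puppet   # force one rotation to test postrotate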
We are using puppet-dashboard for this purpose. It gives you a good overview of the environment: what is failing, what is working, and which servers have stopped checking in.
It's easy to set up; check out http://puppetlabs.com/puppet/related-projects/dashboard/
If you want to log to a different file, you can use the syslogfacility configuration option in Puppet (http://docs.puppetlabs.com/references/stable/configuration.html#syslogfacility) and configure syslog to send that facility to a different file.
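As a sketch of that approach, assuming the local6 facility is free and your syslog daemon is rsyslog (both assumptions, and the file paths follow the conventions above):
# In puppet.conf, [main] section: send Puppet's syslog output to local6
syslogfacility = local6
# In /etc/rsyslog.d/30-puppet.conf: route that facility to its own file
local6.*    /var/log/puppet/puppet.log
& stop
Restart rsyslog and puppet afterwards so both pick up the change.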

problem while doing gzip over ssh

I am getting the error below while running the gzip command over ssh:
ssh 123@HPUX "gzip"
ksh: gzip: not found
whereas if I run tar the same way, it works properly:
ssh 123@HPUX "tar"
tar: usage tar [-]{txruc}[eONvVwAfblhm{op}][0-7[lmh]] [tapefile] [blocksize] [[-C directory] file] ...
Can you please suggest why I am getting this error and how I can overcome this problem?
When I try the following steps, gzip works properly:
ssh 123@HPUX
gzip
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
which means that gzip is working.
Your $PATH may be set differently for an interactive login session than for executing a single command via ssh. Does it work if you specify an absolute path to gzip?
Try logging in interactively and use the command which gzip to show where the binary is; perhaps it's something like /usr/local/gnu/gzip. (You might want to do echo $PATH too, and make a note of it for comparison purposes.) Then try using that path in your batch SSH command, i.e. ssh 123@HPUX "/usr/local/gnu/gzip", to see what happens. The command ssh 123@HPUX 'echo $PATH' (note single quotes!) should tell you how $PATH is set in that context; if you compare that to your interactive $PATH, you'll probably see a difference that explains why gzip isn't found in the first version of your batch command.
Wild guess: it's ksh raising the error the first time. When you do a full ssh login, are you using ksh? Are you running any scripts that modify its PATH?
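Putting that advice together as commands (the /usr/local/bin location for gzip is hypothetical):
# PATH as seen by a batch command; single quotes so the remote shell expands it
ssh 123@HPUX 'echo $PATH'
# PATH as seen interactively, and where gzip actually lives
ssh 123@HPUX
echo $PATH
which gzip          # e.g. /usr/local/bin/gzip (hypothetical)
# Work around the difference with an absolute path, or an inline PATH
ssh 123@HPUX '/usr/local/bin/gzip -c' < bigfile > bigfile.gz
ssh 123@HPUX 'PATH=$PATH:/usr/local/bin gzip -c' < bigfile > bigfile.gz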
