Distcc .distcc/zeroconf/hosts contained no hosts - networking

I am getting an error from distcc. I am using the package from the repos. Here is my configuration:
$ cat /etc/default/distcc | grep -v \#
STARTDISTCC="true"
ALLOWEDNETS="127.0.0.0/16 10.0.0.0/8"
LISTENER="0.0.0.0"
NICE="10"
JOBS="3"
ZEROCONF="true"
$ cat /etc/distcc/hosts | grep -v \#
+zeroconf
$ dpkg -l | grep distcc
ii distcc 3.1-6 amd64 simple distributed compiler client and server
ii distcc-pump 3.1-6 amd64 pump mode for distcc a distributed compiler client and server
$ cat ~/.distcc/zeroconf/hosts
10.16.114.52:3632/16
$ ifconfig
...
inet addr:10.16.114.52 Bcast:10.16.115.255 Mask:255.255.252.0
...
When I run a bunch of compilations (1000 C files I generated) like:
distcc gcc -o 41.o -c 41.c
I get the error,
distcc[26927] (dcc_parse_hosts) Warning: /home/amacdonald/.distcc/zeroconf/hosts contained no hosts; can't distribute work
distcc[26927] (dcc_zeroconf_add_hosts) CRITICAL! failed to parse host file.
distcc[26927] (dcc_build_somewhere) Warning: failed to distribute, running locally instead
distcc[26929] (dcc_parse_hosts) Warning: /home/amacdonald/.distcc/zeroconf/hosts contained no hosts; can't distribute work
distcc[26929] (dcc_zeroconf_add_hosts) CRITICAL! failed to parse host file.

You need a hosts file with the list of machines that run distcc. Use this path:
~/.distcc/hosts
For example:
10.0.0.1 10.0.0.2 10.0.0.42
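A quick way to apply this and confirm distcc now distributes (a sketch, not part of the original answer; the IPs are placeholders for your actual build machines):
echo "10.0.0.1 10.0.0.2 10.0.0.42" > ~/.distcc/hosts
# DISTCC_VERBOSE=1 makes distcc log what it does with each job
DISTCC_VERBOSE=1 distcc gcc -o 41.o -c 41.c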

Even with the DISTCC_HOSTS environment variable set and hosts files in /etc and /usr/local/etc, distcc still couldn't find any hosts.
I enabled logging:
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /var/log/distccd.log"
The log said:
(dcc_warn_masquerade_whitelist) CRITICAL! /usr/local/lib/distcc not found. You must set up masquerade (see distcc(1)) to list whitelisted compilers or pass --enable-tcp-insecure. To set up masquerade automatically run update-distcc-symlinks.
I had to run:
sudo update-distcc-symlinks
sudo ln -s /usr/lib/distcc /usr/local/lib/distcc
To get it working.
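As a quick sanity check afterwards (my own addition, not from the original post), confirm the masquerade directory exists and watch jobs being farmed out:
ls -l /usr/local/lib/distcc   # should now contain compiler symlinks
distccmon-text 1              # run in another terminal while compiling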

Related

rsync to local USB disk gives "rsync error 2 (Protocol incompatibility)"

I am using backintime for backups, which in turn uses rsync to make snapshots.
Most filesystems on the computer are XFS, including the rsync target;
the system is Ubuntu 20.04 with rsync version 3.1.3, protocol version 31.
I get an exit code 2 from rsync, which means "Protocol incompatibility",
and some digging shows this happens if you run rsync across an (ssh) connection
between two computers with different rsync versions, or if login scripts inject unexpected output into the ssh connection. None of that is the case here; this is all local,
see below for the command line.
=> Any more insights into this rsync error? How can a local protocol incompatibility happen
if there is just one /usr/bin/rsync?
Yours,
Steffen
The local USB disk is mounted as:
type xfs (rw,nosuid,nodev,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota,uhelper=udisks2)
INFO: Call rsync to take the snapshot
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
WARNING: Command "rsync --recursive --times --devices --specials --hard-links --human-readable \
--links --acls --xattrs --perms --executability --group --owner --info=progress2 \
--no-inc-recursive --delete --delete-excluded -v -i \
--out-format=BACKINTIME: %i %n%L --link-dest=../../20210301-082432-781/backup \
--chmod=Du+wx --exclude=/media/sneumann/LinuxBackup/msbi-corei \
--exclude=/root/.local/share/backintime --exclude=.local/share/backintime/mnt \
--exclude=.gvfs --exclude=.cache* --exclude=[Cc]ache* --exclude=.thumbnails* \
--exclude=[Tt]rash* --exclude=*.backup* --exclude=*~ \
--exclude=/home/sneumann/Ubuntu One --exclude=.dropbox* --exclude=/proc/* \
--exclude=/sys/* --exclude=/dev/* --exclude=/run/* --exclude=/media \
--exclude=/root/.local/share/backintime/takesnapshot_.log \
--exclude=/root/.local/share/backintime --include=/ --include=/** \
--exclude=* / /media/sneumann/LinuxBackup/msbi-corei/backintime/msbi-corei/root/1/new_snapshot/backup"
returns 2
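One way to narrow this down (a diagnostic sketch, not from the original thread): run a bare rsync against the same XFS target and check the exit code, which separates backintime's long option list from rsync itself.
mkdir -p /media/sneumann/LinuxBackup/test
rsync -av /etc/hostname /media/sneumann/LinuxBackup/test/
echo "rsync exit code: $?"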

How to disable "private mount namespace" (sandboxing) with the Nix package manager?

I'm trying to use nix on repl.it. I'm using static-nix from https://matthewbauer.us/blog/static-nix.html. If I run the following code:
mkdir -p "$HOME/.cache/nix/"
curl https://matthewbauer.us/nix > "$HOME/.cache/nix/nix.exe"
cat "$HOME/.cache/nix/nix.exe" | bash -s run --no-sandbox --store "$HOME/.cache/nix/store" -f channel:nixpkgs-unstable bash graphviz -c sh -c 'dot --help'
I get this error:
error: setting up a private mount namespace: Operation not permitted
I tried --no-sandbox, --option sandbox false and --option build-use-sandbox false; none of these have any effect on the error.
This is executed as non-root on a machine for which it is not possible for me to change kernel settings.
Here's a REPL reproducing the issue (it runs for a short while before displaying the error): https://repl.it/#suzannesoy/AgonizingWittyCoding#main.sh
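For reference, the flags tried above correspond to a persistent setting in nix.conf (the same sandbox option underneath; a sketch assuming a single-user install):
mkdir -p "$HOME/.config/nix"
echo "sandbox = false" >> "$HOME/.config/nix/nix.conf"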

Nagios Alert returns "NRPE: Unable to read output" Command: check_service!httpd

I have installed Nagios on Red Hat with the following configuration:
/usr/local/nagios/etc/static/commands.cfg
define command {
command_name check_service
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_service -a $ARG1$
}
When I try to run it manually with the following syntax, I get an error:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a check_http
NRPE: Unable to read output
Not using NRPE (running the plugin directly) works:
/usr/local/nagios/libexec/check_http -H 10.111.55.92
HTTP OK: HTTP/1.1 200 OK - 4298 bytes in 0.024 second response time |time=0.024462s;;;0.000000 size=4298B;;;0
I am consistently getting Nagios Email notifications:
HOST: Proxy (Dev) i-01aa24242424d7
IP: 10.111.55.92
Service: Apache Running
Service State: UNKNOWN
Attempts: 3/3
Duration: 0d 9h 28m 49s
Command: check_service!httpd
More Details:
NRPE: Unable to read output
Not sure how I can use nrpe with check_service to check http
Just running check_nrpe with check_http displays the version of the installed NRPE:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -a check_http
NRPE v3.2.1
/usr/local/nagios/etc/nrpe.cfg
command[check_users]=/usr/local/nagios/libexec/check_users -w 10 -c 15
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_root_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 10 -c 15 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 500 -c 750
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 500 -c 750
command[check_ping]=/usr/local/nagios/libexec/check_ping $ARG1$
command[check_http]=/usr/local/nagios/libexec/check_http
# LINUX DEFAULT
command[check_service]=/bin/sudo -n /bin/systemctl status -l $ARG1$
# GLUSTER CHECKS
command[check_glusterdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /gluster
# GITLAB CHECKS
command[gitlab_ctl]=/bin/sudo -n /bin/gitlab-ctl status $ARG1$
command[gitlab_rake]=/bin/sudo -n /bin/gitlab-rake gitlab:check
command[check_gitlabdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /var/opt/gitlab
# OPENSHIFT CHECKS
command[check_openshift_pods]=/usr/local/nagios/libexec/check_pods
File: /usr/local/nagios/etc/nagios.cfg
cfg_dir=/usr/local/nagios/etc/static
You seem to be confusing two plugins. check_service will just check that a service is running locally. Try calling it like this:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a httpd
I'd hesitate to use the check_service command you have in there though. Giving nrpe access to run systemctl with sudo seems dangerous to me.
check_http is an http client. It will actually connect to an http server and check a given URI. It can check status codes and do all sorts of things.
It looks like in your nrpe.cfg you didn't include any arguments to check_http. Called like that, it will just print its help message; I don't think it will check the local machine.
Note that when you call check_http manually above, you supply -H. That -H is not passed through automatically; you need to provide arguments to your check_http command in nrpe.cfg.
Change the line:
command[check_http]=/usr/local/nagios/libexec/check_http
To something like:
command[check_http]=/usr/local/nagios/libexec/check_http -H 127.0.0.1
And it should work better, assuming your http server is listening on localhost.
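After changing nrpe.cfg, restart the NRPE daemon on the remote host and re-test from the Nagios server (the service name varies by distribution, e.g. nrpe or nagios-nrpe-server):
sudo systemctl restart nrpe
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_http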
You probably don't want to call check_http via nrpe like this though. Let your nagios server call check_http out to the remote machine.

SSH between N number of servers using script

I have n servers, like c0001.test.cloud.com, c0002.test.cloud.com, c0003.test.cloud.com, and I want to ssh between them like this:
From server c0001, ssh to c0002 and then exit that server.
Come back to c0001, ssh to c0003 and then exit that server.
The script should run without any input at runtime and work for any number of servers.
I have written one script :
str1=c0001.test.cloud.com,c0002.test.cloud.com,c0003.test.cloud.com
string="$( cut -d ',' -f 2- <<< "$str1" )"
echo "$string"
for j in $(echo $string | sed "s/,/ /g")
do
ssh appAccount@$j
done
But this script is not running fine. I have also tried passing parameters
like -o StrictHostKeyChecking=no and <<'ENDSSH', but it is not working.
Assuming the number of commands you want to run is small, you could:
Create a script of commands that will run from c0001.test.cloud.com to each of the servers. For example, create a file on your local machine called commands.sh with:
hosts="c0002.test.cloud.com c0003.test.cloud.com"
for host in $hosts; do
ssh -o StrictHostKeyChecking=no -q appAccount@$host <command 1> && <command 2>
done
On your local machine, ssh to c0001.test.cloud.com and execute the commands in commands.sh:
ssh -o StrictHostKeyChecking=no -q appAccount@c0001.test.cloud.com 'bash -s' < commands.sh
However, if your requirements become more complex, a more robust solution might be to use a cluster administration tool such as ClusterShell.
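For example, with ClusterShell installed, running a command on all nodes becomes a one-liner (a sketch; it assumes ssh keys are already distributed):
clush -l appAccount -w 'c000[1-3].test.cloud.com' uptime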

run apps using audio in a docker container

This question is inspired by Can you run GUI apps in a docker container?.
The basic idea is to run apps with audio and ui (vlc, firefox, skype, ...)
I was searching for docker containers using pulseaudio, but all the containers I found were streaming pulseaudio over tcp
(for security sandboxing of the applications):
https://gist.github.com/hybris42/ce429de428e5af3a344a
https://github.com/jlund/docker-chrome-pulseaudio
https://github.com/tomparys/docker-skype-pulseaudio
In my case I would prefer playing audio from an app inside the container directly on my host pulseaudio, without ssh tunneling and without bloated docker images.
Pulseaudio, because my qt app is using it ;)
It took me some time until I found out what is needed (on Ubuntu).
We start with the docker run command: docker run -ti --rm myContainer sh -c "echo run something"
ALSA:
We need /dev/snd and, as it looks, some hardware access.
When we put this together we have:
docker run -ti --rm \
-v /dev/snd:/dev/snd \
--lxc-conf='lxc.cgroup.devices.allow = c 116:* rwm' \
myContainer sh -c "echo run something"
In new docker versions without lxc flags you should use this:
docker run -ti --rm \
-v /dev/snd:/dev/snd \
--privileged \
myContainer sh -c "echo run something"
PULSEAUDIO:
Update: it may be enough to mount the pulseaudio socket into the container using the -v option. This depends on your version and preferred access method; see other answers for the socket method.
Here we basically need /dev/shm, /etc/machine-id and /run/user/$uid/pulse. But that is not all (maybe because of Ubuntu and how they did it in the past). The environment variable XDG_RUNTIME_DIR has to be the same on the host system and in your docker container. You may also need /var/lib/dbus, because some apps access the machine id from there (it may only contain a symbolic link to the 'real' machine id). And finally you may need the hidden home folder ~/.pulse for some temp data (I am not sure about this).
docker run -ti --rm \
-v /dev/shm:/dev/shm \
-v /etc/machine-id:/etc/machine-id \
-v /run/user/$uid/pulse:/run/user/$uid/pulse \
-v /var/lib/dbus:/var/lib/dbus \
-v ~/.pulse:/home/$dockerUsername/.pulse \
myContainer sh -c "echo run something"
In new docker versions you might need to add --privileged.
Of course you can combine both together and use it together with xServer ui forwarding like here: https://stackoverflow.com/a/28971413/2835523
Just to mention:
you can handle most of this (everything except the user id) in the dockerfile
use uid=$(id -u) to get the user id and id -g to get the gid
create a docker user with this id
Create-user script:
mkdir -p /home/$dockerUsername && \
echo "$dockerUsername:x:${uid}:${gid}:$dockerUsername,,,:/home/$dockerUsername:/bin/bash" >> /etc/passwd && \
echo "$dockerUsername:x:${uid}:" >> /etc/group && \
mkdir /etc/sudoers.d && \
echo "$dockerUsername ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$dockerUsername && \
chmod 0440 /etc/sudoers.d/$dockerUsername && \
chown ${uid}:${gid} -R /home/$dockerUsername
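If those lines sit in the Dockerfile behind ARG uid, ARG gid and ARG dockerUsername, the ids can be passed in at build time (a sketch; the image name is illustrative):
docker build \
--build-arg uid=$(id -u) \
--build-arg gid=$(id -g) \
--build-arg dockerUsername=$USER \
-t myContainer .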
Inspired by the links you've posted, I was able to create the following solution. It is as lightweight as I could get it. However, I'm not sure if it is (1) secure, and (2) entirely fits your use-case (as it still uses the network).
Install paprefs on your host system, e.g. using sudo apt-get install paprefs on an Ubuntu machine.
Launch PulseAudio Preferences, go to the "Network Server" tab, and check the "Enable network access to local sound devices" checkbox [1]
Restart your computer. (Only restarting Pulseaudio didn't work for me on Ubuntu 14.10)
Install Pulseaudio in your container, e.g. sudo apt-get install -y pulseaudio
In your container, run export "PULSE_SERVER=tcp:<host IP address>:<host Pulseaudio port>". For example, export "PULSE_SERVER=tcp:172.16.86.13:4713" [2]. You can find out your IP address using ifconfig and the Pulseaudio port using pax11publish [1].
That's it. Step 5 should probably be automated if the IP address and Pulseaudio port are subject to change. Additionally, I'm not sure if Docker permanently stores environment variables like PULSE_SERVER: If it doesn't then you have to initialize it after each container start.
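Putting steps 4 and 5 together in a single run command (IP and port taken from the example above; paplay from pulseaudio-utils makes a quick test):
docker run -ti --rm \
-e PULSE_SERVER=tcp:172.16.86.13:4713 \
myContainer paplay /usr/share/sounds/alsa/Front_Right.wav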
Suggestions to make my approach even better would be greatly appreciated, since I'm currently working on a similar problem as the OP.
References:
[1] https://github.com/jlund/docker-chrome-pulseaudio
[2] https://github.com/jlund/docker-chrome-pulseaudio/blob/master/Dockerfile
UPDATE (and probably the better solution):
This also works using a Unix socket instead of a TCP socket:
Start the container with -v /run/user/$UID/pulse/native:/path/to/pulseaudio/socket
In the container, run export "PULSE_SERVER=unix:/path/to/pulseaudio/socket"
The /path/to/pulseaudio/socket can be anything, for testing purposes I used /home/user/pulse.
Maybe it will even work with the same path as on the host (taking care of the $UID part) as the default socket; this way the ultimate solution would be -v /run/user/$UID/pulse/native:/run/user/<UID in container>/pulse. I haven't tested this, however.
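A concrete invocation of the socket variant (an untested sketch using the testing path mentioned above):
docker run -ti --rm \
-v /run/user/$UID/pulse/native:/home/user/pulse \
-e PULSE_SERVER=unix:/home/user/pulse \
myContainer paplay /usr/share/sounds/alsa/Front_Right.wav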
After trying most of the solutions described here, I found that only PulseAudio over the network really works. However, you can make it safe by keeping the authentication.
Install paprefs (on host machine):
$ apt-get install paprefs
Launch paprefs (PulseAudio Preferences) > Network Server > [X] Enable network access to local sound devices.
Restart PulseAudio:
$ service pulseaudio restart
Check that it worked, or restart the machine:
$ (pax11publish || xprop -root PULSE_SERVER) | grep -Eo 'tcp:[^ ]*'
tcp:myhostname:4713
Now use that socket:
$ docker run \
-e PULSE_SERVER=tcp:$(hostname -i):4713 \
-e PULSE_COOKIE=/run/pulse/cookie \
-v ~/.config/pulse/cookie:/run/pulse/cookie \
...
Check that the user running inside the container has access to the cookie file ~/.config/pulse/cookie.
To test it works:
$ apt-get install mplayer
$ mplayer /usr/share/sounds/alsa/Front_Right.wav
For more info you may check the Docker Mopidy project.
Assuming pulseaudio is installed on the host and in the image, one can provide pulseaudio sound over tcp with only a few steps. pulseaudio does not need to be restarted, and no configuration has to be done on the host or in the image either. This way it is included in x11docker, without the need for VNC or SSH:
First, find a free tcp port:
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
PULSE_PORT="`shuf -i $LOWERPORT-$UPPERPORT -n 1`"
ss -lpn | grep -q ":$PULSE_PORT " || break
done
Get the ip address of the docker daemon. I always find it to be 172.17.42.1/16:
ip -4 -o a | grep docker0 | awk '{print $4}'
Load pulseaudio tcp module, authenticate connection to docker ip:
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=172.17.42.1/16)
On docker run, create environment variable PULSE_SERVER
docker run -e PULSE_SERVER=tcp:172.17.42.1:$PULSE_PORT yourimage
Afterwards, unload tcp module. (Note: for unknown reasons, unloading this module can stop pulseaudio daemon on host):
pactl unload-module $PULSE_MODULE_ID
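The four steps combined into one throwaway script (same commands as above; yourimage is a placeholder):
#!/bin/bash
# find a free tcp port
read LOWERPORT UPPERPORT < /proc/sys/net/ipv4/ip_local_port_range
while : ; do
PULSE_PORT="`shuf -i $LOWERPORT-$UPPERPORT -n 1`"
ss -lpn | grep -q ":$PULSE_PORT " || break
done
# docker daemon ip, without the subnet suffix
DOCKER_IP=$(ip -4 -o a | grep docker0 | awk '{print $4}' | cut -d/ -f1)
# load tcp module, run container, unload module again
PULSE_MODULE_ID=$(pactl load-module module-native-protocol-tcp port=$PULSE_PORT auth-ip-acl=$DOCKER_IP/16)
docker run -e PULSE_SERVER=tcp:$DOCKER_IP:$PULSE_PORT yourimage
pactl unload-module $PULSE_MODULE_ID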
Edit: How-To for ALSA and Pulseaudio in container
I managed to dockerize a Java game in the following way, effectively passing through the game's sound.
This approach requires building an image and making sure the app has all the dependencies it needs, in this case pulseaudio and x11. If you're sure your image has everything it needs, you may proceed as stated in the previous answers.
Here, we need to build the image first; then we can actually launch it:
docker build -t my-unciv-image . # Run from directory where Dockerfile is
docker run --name unciv \
--device /dev/dri \
-e DISPLAY=$DISPLAY \
-e PULSE_SERVER=unix:/run/user/1000/pulse/native \
--privileged \
-u $(id -u):$(id -g) \
-v /path/to/Unciv:/App \
-v /run/user/$(id -u)/pulse:/run/user/$(id -u)/pulse \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-w /App \
my-unciv-image \
java -jar /App/Unciv.jar
In the second command the following is specified:
--name: a name is given to the container
--device: video device*
-e: required environment vars
DISPLAY: the display number
PULSE_SERVER: PulseAudio audio server socket
--privileged: run it privileged*, so it can access all devices
-v: Mounted volumes:
Path to the game mounted into /App in the container**
Audio server socket
Display server socket
-w: Working directory
Here is a docker-compose.yml version of it:
# docker-compose.yml
version: '3'
services:
  unciv:
    build: .
    container_name: unciv
    devices:
      - /dev/dri:/dev/dri   # * Either this
    entrypoint: java -jar /App/Unciv.jar
    environment:
      - DISPLAY=$DISPLAY
      - PULSE_SERVER=unix:/run/user/1000/pulse/native
    privileged: true        # * or this
    user: 1000:1000
    volumes:
      - /path/to/game/:/App
      - /run/user/1000/pulse:/run/user/1000/pulse
      - /tmp/.X11-unix:/tmp/.X11-unix
    working_dir: /App
And the Dockerfile:
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install openjdk-11-jre -y
RUN apt-get install -y xserver-xorg-video-all
RUN apt-get install -y libgl1-mesa-glx libgl1-mesa-dri
RUN apt-get install -y pulseaudio
USER unciv
Notes:
*Only required for a game or anything that uses OpenGL. Either pass the devices explicitly or run it as privileged; I think passing the device is enough, and making it privileged may be overkill.
**The game could be bundled into the docker image, but here it is mounted for the demo.
For the audio, it's required to pass the PULSE_SERVER environment variable and to mount the pulseaudio socket.
