nginx + ffmpeg (HLS => tmpfs): IO deadlocks

Hi all!
I have a service for distributing HLS video. Several ffmpeg processes write 5 chunks in a loop, each process to its own folder:

# input is MPEG-TS (H.264 video, AAC audio)
ffmpeg \
-re -stream_loop -1 \
-i my-live-input \
-c copy \
-f hls -hls_time 5 -hls_list_size 5 -hls_wrap 6 -hls_allow_cache 0 \
my-live-output.m3u8

This is a live broadcast, and a small number of chunks is needed to keep latency low.
nginx publishes the files from these folders via HTTP(S); a rough sketch of that serving block is below. The folders live on a tmpfs file system.
If I start nginx first and then the encoders, they mutually block access to the files: an IO deadlock occurs.
How would you recommend solving this?
Restarting the encoders does not help. However, if I restart nginx while the encoders are running, the deadlocks disappear.
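For context, the serving side looks roughly like this (a sketch only; the /mnt/hls path, port, and MIME type mapping are placeholders, not my exact config):

server {
    listen 80;

    location /hls/ {
        # segment folders live under /mnt/hls (the tmpfs mount)
        root /mnt;
        # live playlists must not be cached
        add_header Cache-Control no-cache;
        types {
            application/vnd.apple.mpegurl m3u8;
            video/mp2t ts;
        }
    }
}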

Related

Varnish 6.0.8 Secret file is not created

We're facing an issue when installing Varnish 6.0.8 on Ubuntu 18.04.6: it doesn't create the secret file inside the /etc/varnish directory.
We use the following script for installation:
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh | sudo bash
Can someone please help?
PS: We tried later versions (6.6 and 7.0.0) and got the same issue.
From a security point of view, remote CLI access is not enabled by default. You can see this when looking at /lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target nss-lookup.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
There are no -T and -S parameters in the standard systemd configuration. However, you can enable this by modifying the systemd configuration yourself.
Just run sudo systemctl edit --full varnish to edit the runtime configuration and add a -T parameter to enable remote CLI access.
Be careful with this and make sure you restrict access to this endpoint via firewalling rules.
Additionally you'll add -S /etc/varnish/secret as a varnishd runtime parameter in /lib/systemd/system/varnish.service.
You can use the following command to add a random unique value to the secret file:
uuidgen | sudo tee /etc/varnish/secret
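It's also worth making sure only root can read that file (my suggestion, not part of the packaged setup):
sudo chmod 600 /etc/varnish/secret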
This is what your runtime parameters would look like:
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-S /etc/varnish/secret \
-T :6082
When you're done just run the following command to restart Varnish:
sudo systemctl restart varnish
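Once Varnish is back up, you can verify that the CLI endpoint responds with varnishadm, pointing it at the same -T and -S values as above:
varnishadm -T localhost:6082 -S /etc/varnish/secret ping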

QEMU serial stdout diverges on Arch Linux guest

I'm trying to bootstrap some installation automation of a freshly downloaded ISO in QEMU. I create a clean img to install to and kick off QEMU like this:
$ qemu-img create -f qcow2 out/main.img 15G
$ qemu-system-x86_64 \
-m 8G \
-serial stdio \
-cdrom out/linux.iso \
-drive file=out/main.img,if=virtio \
-netdev user,id=net0 \
-device e1000,netdev=net0
and I can see Arch boot up. At first both the display and the terminal are in sync, but they soon diverge after the GRUB boot screen.
I'm not sure what piece I'm missing to get this to work. I've seen some people suggest adding -append "root=/dev/sda console=ttyS0" to your QEMU arguments, but (from what I can tell) that requires extracting the kernel and the initramfs from the ISO (easy enough by mounting it and copying the right files), and it also expects you to already have an installed system on /dev/sda (which is what I'm trying to bootstrap).
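For reference, that suggestion would amount to something like this (the vmlinuz/initramfs file names are my guesses for whatever gets extracted from the ISO):
$ qemu-system-x86_64 \
-m 8G \
-serial stdio \
-kernel out/vmlinuz-linux \
-initrd out/initramfs-linux.img \
-append "root=/dev/sda console=ttyS0" \
-cdrom out/linux.iso \
-drive file=out/main.img,if=virtio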
At this point I don't know what to search for next. How do I get the full terminal session in my current terminal and not just in the display?
In this case, it was as @Peter Maydell commented: this is not a QEMU question. QEMU was doing exactly what it was supposed to do, but Arch had to be told to use the serial console as its primary means of communication.
Two examples of how this can be done:
bash via console
pipe_dir="$(mktemp -d)"
mkfifo "${pipe_dir}/pipe.in" "${pipe_dir}/pipe.out"
function cleanup {
rm -rfv "${pipe_dir}"
}
trap cleanup EXIT
qemu-system-x86_64 \
-m 8G \
-display none \
-serial pipe:"${pipe_dir}/pipe" \
-drive file=./out/linux.iso,index=0,media=cdrom \
-drive file=./out/main.img,if=virtio &
sleep 2s
printf "\t" > "${pipe_dir}/pipe.in"
sleep 2s
printf " console=ttyS0,115200" > "${pipe_dir}/pipe.in"
sleep 2s
echo > "${pipe_dir}/pipe.in"
# Whatever other interactions you want go here...
wait
expect via console
set timeout -1
spawn qemu-system-x86_64 \
-m 8G \
-display none \
-serial stdio \
-drive file=./out/linux.iso,index=0,media=cdrom \
-drive file=./out/main.img,if=virtio
sleep 1
send \t
sleep 1
send " console=ttyS0,115200"
sleep 1
send \n
In theory this should be fine, but in practice I still had difficulty interacting with the console and sending the characters to log in correctly. There is probably more user error on my part than anything else.
A better solution (again contextual to Arch and not QEMU specifically) was to use a cloud-init script that included my SSH public key. Interactions with the VM were stable, reliable, and easily reproducible.
bash with cloud-init/ssh
$ touch ./out/meta-data
$ cat > ./out/user-data <<EOF
#cloud-config
users:
  - name: root
    ssh_authorized_keys:
      - $(cat ${HOME}/.ssh/id_ed25519.pub)
EOF
$ xorriso -as genisoimage -output ./out/cloud-init.iso \
-volid CIDATA -joliet -rock ./out/meta-data ./out/user-data
$ qemu-system-x86_64 \
-m 8G \
-drive file=./out/linux.iso,index=0,media=cdrom \
-drive file=./out/cloud-init.iso,index=1,media=cdrom \
-drive file=./out/main.img,if=virtio \
-net user,hostfwd=tcp::10022-:22 \
-net nic &
$ function qemu-ssh {
ssh -q -o ConnectTimeout=5 -o StrictHostKeyChecking=no -o "UserKnownHostsFile /dev/null" -p 10022 root@localhost "${@}"
}
$ printf 'Waiting for SSH to go live (this will take a while)...'
$ until qemu-ssh exit; do
printf '.'
done
# This convenience function starts an interactive
# session when supplied with no additional arguments
# but your automation can go here
$ qemu-ssh

rsync to local USB disk gives "rsync error 2 (Protocol incompatibility)"

I am using backintime for backups, which in turn uses rsync to make snapshots.
Most filesystems on the computer are XFS, including the rsync target;
the system is Ubuntu 20.04 with rsync version 3.1.3, protocol version 31.
I get exit code 2 from rsync, which is "Protocol incompatibility". Some digging shows this happens if you run rsync across an (ssh) connection between two computers with different rsync versions, or if login scripts inject unexpected output into the ssh connection. None of that is the case here; this is all local,
see below for the command line.
=> Any more insights into this rsync error? How can a local protocol incompatibility happen
if there is just one /usr/bin/rsync?
Yours,
Steffen
The local USB disk is mounted as:
type xfs (rw,nosuid,nodev,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota,uhelper=udisks2)
INFO: Call rsync to take the snapshot
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
WARNING: Command "rsync --recursive --times --devices --specials --hard-links --human-readable \
--links --acls --xattrs --perms --executability --group --owner --info=progress2 \
--no-inc-recursive --delete --delete-excluded -v -i \
--out-format=BACKINTIME: %i %n%L --link-dest=../../20210301-082432-781/backup \
--chmod=Du+wx --exclude=/media/sneumann/LinuxBackup/msbi-corei \
--exclude=/root/.local/share/backintime --exclude=.local/share/backintime/mnt \
--exclude=.gvfs --exclude=.cache* --exclude=[Cc]ache* --exclude=.thumbnails* \
--exclude=[Tt]rash* --exclude=*.backup* --exclude=*~ \
--exclude=/home/sneumann/Ubuntu One --exclude=.dropbox* --exclude=/proc/* \
--exclude=/sys/* --exclude=/dev/* --exclude=/run/* --exclude=/media \
--exclude=/root/.local/share/backintime/takesnapshot_.log \
--exclude=/root/.local/share/backintime --include=/ --include=/** \
--exclude=* / /media/sneumann/LinuxBackup/msbi-corei/backintime/msbi-corei/root/1/new_snapshot/backup"
returns 2
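The quickest way I know to separate rsync from backintime (just a sketch; the test source and target paths are placeholders) is a minimal rsync run against the same disk, checking the exit code directly:
rsync -a --dry-run /etc /media/sneumann/LinuxBackup/rsync-test/
echo "rsync exit code: $?"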

Sometimes HLS chunks are not generated in the /tmp/hls directory

I am working on an adaptive HLS solution using the Nginx RTMP module as a streaming server and Video.js as a client. I have completed the setup, i.e. the NGINX configuration and a Video.js client sample.
NGINX Configurations:
nginx.txt
I am using this ffmpeg command to generate the stream:
ffmpeg -re -i /home/user/Downloads/test.mp4 -vcodec libx264 -vprofile baseline -g 30 -acodec aac -strict -2 -f flv rtmp://192.168.1.68/live
My problem is that sometimes Nginx does not generate the .ts and .m3u8 files in the /tmp/hls directory when I issue the above ffmpeg command. I have also enabled the nginx-rtmp module logs, but they only show access information and nothing appears in the error log.
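For reference, the kind of hls application block that writes segments to /tmp/hls generally looks like this (a generic sketch of the relevant rtmp section, not my attached nginx.txt):

rtmp {
    server {
        listen 1935;
        application live {
            live on;
            hls on;
            hls_path /tmp/hls;
            hls_fragment 3s;
            hls_playlist_length 60s;
        }
    }
}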
Do let me know if more information is required. Any help will be appreciated.
Thanks,

Wget Hanging, Script Stops

Evening,
I am running a lot of wget commands using xargs
cat urls.txt | xargs -n 1 -P 10 wget -q -t 2 --timeout 10 --dns-timeout 10 --connect-timeout 10 --read-timeout 20
However, once the file has been parsed, some of the wget instances 'hang.' I can still see them in system monitor, and it can take about 2 minutes for them all to complete.
Is there any way I can specify that the instance should be killed after 10 seconds? I can re-download all the URLs that failed later.
In system monitor, the wget instances are shown as sk_wait_data when they hang. xargs is there as 'do_wait,' but wget seems to be the issue, as once I kill them, my script continues.
I believe this should do it:
wget -v -t 2 --timeout 10
According to the docs:
--timeout: Set the network timeout to seconds seconds. This is equivalent to specifying
--dns-timeout, --connect-timeout, and --read-timeout, all at the same time.
Check the verbose output too and see more of what it's doing.
Also, you can try:
timeout 10 wget -v -t 2
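Combined with the xargs invocation from the question, that would look something like this (a sketch based on the original command line):
cat urls.txt | xargs -n 1 -P 10 timeout 10 wget -q -t 2 --timeout 10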
Or you can do what timeout does internally:
( cmdpid=$BASHPID; (sleep 10; kill $cmdpid) & exec wget -v -t 2 )
(As seen in: BASH FAQ entry #68: "How do I run a command, and have it abort (timeout) after N seconds?")
GNU Parallel can download in parallel, and retry the process after a timeout:
cat urls.txt | parallel -j10 --timeout 10 --retries 3 wget -q -t 2
If the time to fetch a URL varies (e.g. due to a faster internet connection), you can let GNU Parallel figure out the timeout:
cat urls.txt | parallel -j10 --timeout 1000% --retries 3 wget -q -t 2
This will make GNU Parallel record the median time for a successful job and set the timeout dynamically to 10 times that.
