Why does ir-keytable quit?

To set up IR on my clean Raspberry Pi 4, I am following the instructions at https://learn.pi-supply.com/make/ir-remote-control-support-on-raspbian-buster-justboom/.
One of the first steps, even before installing the lirc package, is to install and run ir-keytable:
$ sudo apt install ir-keytable
$ sudo ir-keytable
This command shows that my driver is gpio_ir_recv and the device is /dev/lirc1. Perfect.
Next, I run the command to test receiving IR signals:
$ sudo ir-keytable -c -p all -t
However, this command quits as soon as it is run. There are no new messages in the /var/log/ folder either.
I am wondering why this command would simply quit.
Note that the lirc package has not yet been installed; as per the article, that step comes later.
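(No answer was recorded for this one, but one guess worth testing: ir-keytable operates on rc0 by default, and since the receiver registered as /dev/lirc1 the gpio driver is probably rc1, so the test command may be probing the wrong device. The -s/--sysdev option selects the device explicitly; rc1 here is an assumption:)
$ sudo ir-keytable -c -p all -t -s rc1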

Related

Install systemd service on Debian installation

I'm building a custom Debian ISO with the simple-cdd utility. It worked well until I attached my own .deb package.
build-simple-cdd --dist stretch --profiles moj --force-root --local-packages /root/iso/deb
build-simple-cdd itself works properly: I can see my deb package in the tmp directory structure, and the ISO image is created successfully. However, the Debian installation fails.
I suspect that the postinst script fails, since it uses the systemctl command, which may be unavailable at that point.
#!/bin/sh
set -e
echo "$1"
if [ "$1" = "configure" ]; then
    echo "Configuring privileges..."
    chown user:user /usr/bin/Koncentrator
    chmod 0755 /usr/bin/Koncentrator
    echo "Enabling Koncentrator services..."
    systemctl daemon-reload
    systemctl enable Xvfb.service
    systemctl enable Koncentrator.service
fi
I've added a systemd dependency to the control file, but it doesn't help.
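(A hedged variant, assuming the root cause really is that systemctl is missing or not functional inside the installer environment, would be to guard the systemd calls in the postinst:)
#!/bin/sh
set -e
if [ "$1" = "configure" ]; then
    chown user:user /usr/bin/Koncentrator
    chmod 0755 /usr/bin/Koncentrator
    # only touch systemd when systemctl is actually usable
    if command -v systemctl >/dev/null 2>&1; then
        systemctl daemon-reload
        systemctl enable Xvfb.service
        systemctl enable Koncentrator.service
    fi
fi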
I made a workaround for this issue: simple-cdd lets you provide a post-installation script, and apt install runs there without problems. Two steps are required:
1. Add the deb package to the installation disk. This is configured via the profile configuration file (moj.conf):
all_extras="$all_extras /root/iso/files/custompackage_0.1.3.deb"
2. Run apt install in the moj.postinst script:
#!/bin/sh
# install the local package from the installation CD
mount /dev/cdrom /media/cdrom
cd /media/cdrom/simple-cdd
apt install ./custompackage_0.1.3.deb
cd /
sync
umount /media/cdrom
If you want to debug your postinst script, you can insert a long sleep into it:
#!/bin/sh
sleep 10000000
...
Then switch consoles (Ctrl+Alt+F1-6) during the finish-install phase and call chroot /target to enter the in-target environment.
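On the other console, the inspection might look like this (a sketch; /target is the standard debian-installer mount point, and the package name is the one from above):
# on another console (e.g. Ctrl+Alt+F2) during finish-install:
chroot /target /bin/sh
command -v systemctl              # is systemctl usable inside the target?
dpkg -l | grep custompackage      # did the package install at all?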

wi-fi disconnected during crouton installation

I purchased an Asus C300M with the sole aim of developing my Linux skills.
I followed the instructions to boot into developer mode and executed the following command to start downloading crouton/Ubuntu on it:
sudo sh -e ~/Downloads/crouton -t xfce
It was going well until my Wi-Fi disconnected temporarily and I got the following error:
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
Failed to complete chroot setup
Unmounting /mnt/stateful_partition/crouton/chroots/precise...
Then I tried to run the sudo command again but I got the following:
/usr/local/chroots/precise already has stuff in it!
Either delete it, specify a different name (-n) or specify -u to update it
However, I'm not sure how to modify the command so I can resume the installation or restart it.
You could try sudo sh ~/Downloads/crouton -u -n xfce, but it's unlikely that will work. That's from the crouton docs.
The best approach, since you never finished installing and therefore don't need to recover any data, is to delete the install directory and start again; there is no good way for crouton to pick up where it left off.
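Concretely, that would be something like this (a sketch; the chroot name and path come from the error messages above, and crouton's delete-chroot helper may not exist if the install never finished, hence the rm fallback):
sudo delete-chroot precise
# or, if the helper was never installed:
sudo rm -rf /usr/local/chroots/precise
# then start over:
sudo sh -e ~/Downloads/crouton -t xfce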
Also, during the install you don't want the Chromebook going to sleep. There is no way in the built-in ChromeOS settings to prevent that, but according to this article you can go to the Chrome Web Store and install Keep Awake from Google.
This gives you an icon in the upper right of the Chrome browser showing a sun, a sunset, or a moon depending on the setting you choose. Before you start your next install, click it to the sun so the machine won't sleep.
I had the same problem. I ran this and it's downloading:
sudo sh ~/Downloads/crouton -e -t xfce -u
I'm not sure whether it picked up where I left off, but it is definitely downloading and reinstalling after the interrupted connection.
Just type sudo start[desktop environment] (for xfce: sudo startxfce4)
and press y. It should keep going.

Luarocks error on ubuntu

I am trying to run NeuralTalk2 on Ubuntu, but I am getting the following error:
parag@parag:~/torch$ sudo luarocks install nn
[sudo] password for parag:
Error: No results matching query were found.
These are the steps I have followed so far:
sudo curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
sudo git clone https://github.com/torch/distro.git ~/torch --recursive
sudo cd ~/torch;
sudo ./install.sh
sudo source ~/.bashrc
Please help!
Try running this all without sudo. The last line especially, sudo source ~/.bashrc, does not work, because source is meant to operate on the shell you are currently running; with sudo it loads .bashrc into the temporary subshell created by sudo, which in practice has no effect.
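Concretely, the rerun without sudo would look something like this (a sketch based on the steps above; only the dependency script genuinely needs root, so sudo moves to the bash side of that pipe):
curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | sudo bash
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch
./install.sh        # installs into your home directory, no sudo needed
source ~/.bashrc    # reloads in the *current* shell
luarocks install nn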
Your error message indicates that luarocks itself was installed correctly but failed to find the rock. Make sure the name of the rock is correct, try searching for it with the luarocks search command, and check your configuration by running luarocks with no arguments (it will display the config files in use, helping you troubleshoot the issue).
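For example:
luarocks search nn    # is the rock visible in the configured repositories?
luarocks              # no arguments: prints the version and the config files in use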

Unable to automate script: sudo: no tty present and no askpass program specified

I would like to automate a build - for now during development only, so there are no security concerns.
I have created a script that moves libs to /usr/local/lib and runs ldconfig.
These things require sudo.
Running the script from the builder (Qt Creator), I am not prompted to enter my sudo password, and I get the error
sudo: no tty present and no askpass program specified
Sorry, try again.
I have found a few solutions to this, but none of them worked... What am I missing?
Exact code:
in myLib.pro
#temporary to make my life easier
QMAKE_POST_LINK = /home/me/move_libs_script
in move_libs_script:
#!/bin/bash
sudo cp $HOME/myLib/myLib.so.1 /usr/local/lib/
sudo ldconfig
I did as suggested by the answer above: edited the sudoers file with visudo and added the script... even added qmake...
sudo visudo
added at the end:
me ALL=NOPASSWD: /home/me/move_libs_script, /usr/bin/qmake-qt4
It saved the file /etc/sudoers.tmp (running sudo visudo again showed that my changes were kept, so I am not sure what the .tmp is about).
Still the same errors:
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
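(A plausible reading of why the sudoers entry changed nothing, though the thread never confirms it: NOPASSWD applies to the command handed to sudo itself. Here the script is started without sudo, and only the inner cp and ldconfig calls go through sudo - and those two are not listed in sudoers. Calling the whitelisted script via sudo would line up with the rule; a hypothetical variant:)
in myLib.pro:
QMAKE_POST_LINK = sudo /home/me/move_libs_script
and move_libs_script then drops its inner sudo calls (the path is spelled out because under sudo $HOME may point to root's home):
#!/bin/bash
cp /home/me/myLib/myLib.so.1 /usr/local/lib/
ldconfig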
Edit: after asking the question I found a similar question with a suggested solution:
https://stackoverflow.com/a/10668693/1217150
So I tried to add a custom step...
Result:
09:50:03: Running build steps for project myLib...
09:50:03: Could not start process "ssh-askpass Sudo Password | sudo -S bash ./move_libs_script"
Error while building project myLib (target: Desktop)
When executing build step 'Custom Process Step'
(If I run it from a terminal, I am asked for the password.) The likely problem: a Custom Process Step starts the process directly rather than through a shell, so the pipe in that command line is never interpreted.
New edit: so I thought I could outsmart the system and call a script that calls my script...
myLib.pro
QMAKE_POST_LINK = /home/me/sudo_move_libs_script
sudo_move_libs_script:
#!/bin/bash
ssh-askpass Sudo Password | sudo -S bash $HOME/move_libs_script
Got it!!! I will post it as an answer, I guess.
New edit as answer to comment:
in mainExe.pro:
QMAKE_POST_LINK = ./link_lib
in link_lib:
#!/bin/sh
LD_LIBRARY_PATH=$HOME/myLib
export LD_LIBRARY_PATH
Result: the executable fails because the lib is not found (of course, before testing I removed the copy from /usr/local/lib).
Solution 1:
To run the command I needed with sudo from Qt and be asked for the password:
myLib.pro
QMAKE_POST_LINK = /home/me/sudo_move_libs_script
sudo_move_libs_script:
#!/bin/bash
ssh-askpass Sudo Password | sudo -S bash $HOME/move_libs_script
move_libs_script:
#!/bin/bash
cp $HOME/myLib/myLib.so.1 /usr/local/lib/
ldconfig
Solution 2
Avoid using sudo: run a custom executable from Qt Creator (just like I would when the project is deployed) - I can execute my program with the correct dependency. Unfortunately it does not work for debugging.
In the Run Configuration, place a script "dummy_executable" instead of the app executable:
dummy_executable:
#!/bin/sh
# point the loader at the local copy of the library, then run the real binary
LD_LIBRARY_PATH=$HOME/myLib
export LD_LIBRARY_PATH
$HOME/myExec/myExec "$@"
This method is great for running the program... unfortunately gdb fails; I think it tries to debug the shell script? So it won't work if I try to step through my program.
Choosing Solution 1, because it has the advantages I need: it allows me to build and debug the program, and with the dependencies set the library gets built before the program runs - which is perfect.
Thank you Etan Reisner

EC2 startup script gets stuck on wget

I have the following script for some number crunching
#!/bin/bash
sudo apt-get update -y
sudo apt-get upgrade -y
sudo apt-get install -y r-base r-base-dev htop s3cmd p7zip-full
wget https://s3.amazonaws.com/#######/###.7z
7z e ###.7z
sudo R CMD BATCH --slave --no-timing --vanilla "--args 0 1 100 200 500 2" SOME-ROUTINE.R
s3cmd put *.results s3://#########/
on EC2. I upload the script as a file at Launch Instance -> Instance Details -> User Data.
The machine fires up, updates, and upgrades, but then it does not execute wget and does not download the file. When I SSH into the instance and run the exact same commands, the process completes without problems.
Any ideas why wget does not work?
Any other alternatives?
It is always a bit of guessing, but here is how I would debug this:
My first suggestion would be to check for special characters in the S3 URL. This might cause the wget call to fail.
Second, I would give wget an explicit output path with the -O option. While you are editing the command, you can also add -o to write a log file.
The last step is to check your access rights to the S3 bucket. Perhaps you can put the file on another web host to see whether the command executes then.
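Put together, a more debuggable version of the download step might look like this (a sketch; the bucket and key placeholders are illustrative, -O names the output file and -o the log file):
wget -O /tmp/data.7z -o /tmp/wget.log "https://s3.amazonaws.com/<bucket>/<key>.7z"
# on Ubuntu images, user-data output typically lands here, which is the
# first place to look when a step silently fails:
tail /var/log/cloud-init-output.log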
