My actual problem is that every time I want to access my serial interface (Arduino), the system returns Permission denied.
root@laptop:/home/user #> cu -l /dev/ttyACM0 -s 115200
/usr/bin/cu: open (/dev/ttyACM0): Permission denied
/usr/bin/cu: /dev/ttyACM0: Line in Use
root@laptop:/home/user #> ls -la /dev/ttyACM*
crw-rw---- 1 root dialout 166, 0 Mar 14 10:37 /dev/ttyACM0
crw-rw---- 1 root dialout 166, 0 Mar 14 10:37 /dev/ttyACM1
crw-rw---- 1 root dialout 166, 0 Mar 14 10:37 /dev/ttyACM2
crw-rw---- 1 root dialout 166, 0 Mar 14 10:37 /dev/ttyACM3
Where else should I look for the cause of this error?
Thanks for any advice!
I have never used an Arduino, so I'll assume your method is right. The first thing I would try is running the first command with sudo:
sudo cu -l /dev/ttyACM0 -s 115200
But since the second message is Line in Use, it may also be that /dev/ttyACM0 is actually already taken/locked. In other words, is any process using the port? I can't test this on a serial port, but I would pipe the output of the list-open-files command to grep:
lsof | grep ACM
It should list the process identifier (PID) of the process which has locked the port. Then you can use the kill command to stop that process:
kill <PID_FROM_OUTPUT_OF_UPPER_COMMAND>
To verify that you successfully stopped the process, you can pipe the output of the process-listing command to grep:
ps x | grep <PID_FROM_OUTPUT_OF_UPPER_COMMAND>
which should return no output if the process was successfully stopped. If not, it will output that line, so you can try with the -9 flag like this:
kill -9 <PID_FROM_OUTPUT_OF_UPPER_COMMAND>
and it will eventually stop.
Without testing, I'm not sure whether the lsof command in this form will list tty devices in use. If it doesn't, there must be some flag combination that will, since everything in Unix is a file.
So, the principle must be valid: find out which process is using the device and stop it (the ps and kill commands will work once you have the right process identifier).
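As an aside, fuser (if installed, it comes with psmisc) can query the device file directly, which is a quicker way to get the same PID; the -v flag adds the process name:
fuser -v /dev/ttyACM0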
If none of the above is the case, then your method is probably wrong. In that case, I'd start by carefully rereading the Arduino documentation :)
As HappyHacking mentioned, you need to execute the following command:
sudo adduser [user] dialout
Then logout of the user and log back in.
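If you'd rather not log out right away, newgrp should let you pick up the new group in the current shell (a quick sanity check, assuming your distribution's standard tools):
groups [user]
newgrp dialout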
I created a new file /etc/udev/rules.d/51-arduino.rules (udev only reads files ending in .rules) with the following content:
SUBSYSTEMS=="usb", KERNEL=="ttyACM0", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0043", GROUP="dialout", MODE="0666"
Be careful to set idVendor and idProduct properly. After a reboot the device permissions are set.
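To find the right idVendor and idProduct, and to apply the rule without a reboot, something like this should work (the grep pattern is just an example):
lsusb | grep -i arduino    # the ID field shows idVendor:idProduct, e.g. 2341:0043
sudo udevadm control --reload-rules
sudo udevadm trigger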
There are two Pis in this setup:
- PI-domo: running domoticz
- PI-pump: controlling a pump with one GPIO
These Pis are far apart but can communicate over the network. PI-domo has passwordless SSH login set up to PI-pump, and contains three scripts:
- pump_on.sh: sends a value to the GPIO over ssh to turn the pump on, and returns 1
ssh pi@pi-pump -n "echo 0 > /sys/class/gpio/gpio18/value" && echo 1
- pump_off.sh: sends a value to the GPIO over ssh to turn the pump off, and returns 0
ssh pi@pi-pump -n "echo 1 > /sys/class/gpio/gpio18/value" && echo 0
- pump_status.sh: returns 1 if the pump is on, 0 if it is off (see the sketch below).
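The question doesn't show pump_status.sh, but given the active-low wiring above (writing 0 turns the pump on), a minimal sketch of it could be:
v=$(ssh pi@pi-pump -n "cat /sys/class/gpio/gpio18/value") && echo $((1 - v))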
All three scripts work as expected when launched in bash, but I cannot find out how to call them from domoticz. I created a virtual switch and set these as script:///.....[on off].sh, but domoticz doesn't seem to be running any of them. Nor could I find a place to read the status...
Any idea or link to a RECENT (working) tutorial would be welcome!
Found the issue: stupid me.
It turns out the domoticz process was running as root, and root didn't have the key set up for passwordless ssh.
I know that this is an old thread and it has been answered already, but I stumbled on the same issue and found that the online answers lacked detail. So, here it goes:
On PI-domo run
sudo su to become root
Generate a new key using ssh-keygen -t rsa -b 4096 -C "nameofyourkey"
Copy your key to PI-pump by using ssh-copy-id -i /root/.ssh/yourkey.pub pi@pi-pump
ssh to pi-pump to test that the key works for root; if all is well, exit and go back to being the pi user.
Note 1: Although you log in as root on PI-domo, it is critical that pump_off.sh and pump_status.sh contain pi@pi-pump and not root@pi-pump, or this approach will fail.
Note 2: The Domoticz log indicates that something in the above went wrong by outputting Error: Error executing script command (/home/pi/domoticz/scripts/pump_off.sh). returned: 65280. Note the 65280 in particular: it is the raw wait status, and 65280 / 256 = 255, the exit code ssh itself returns when it cannot connect or authenticate.
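A quick way to reproduce what Domoticz sees is to run a script as the user the daemon runs as (root, per the above); the path is the one from the error message:
sudo /home/pi/domoticz/scripts/pump_off.sh    # should print 0; a password prompt or ssh error here explains the 65280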
I have an application on an isolated machine. It writes logs to /var/log/app/log.txt, for example. I want it to write logs to the journald daemon instead, but I can't change the way the application runs, because it is encapsulated.
I mean I cannot do something like app | systemd-cat
1) Am I right that all services started with systemd write logs to journald?
2) If so, will the children of a process started by systemd also write logs to journald?
3) Is there any way to tell journald to take logs from a specific file?
4) If not, are there any workarounds?
warning: this is not tested
You could bind mount /dev/stdout over the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/mount --bind /dev/stdout /var/log/app/log.txt
Or symlink /dev/stdout to the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/ln -s /dev/stdout /var/log/app/log.txt
4) I can only try to help with a workaround:
MY_LOG_FILE=/var/log/app/log.txt
# Create a FIFO PIPE
PIPE=/tmp/my_fifo_pipe
mkfifo $PIPE
MY_IDENTIFIER="my_app_name" # just a label for later searching in journalctl
# Start logging to journal
systemd-cat -t $MY_IDENTIFIER -p info < $PIPE &
exec 3>$PIPE # hold a write end open so systemd-cat doesn't see EOF before tail starts
tail -f $MY_LOG_FILE > $PIPE &
exec 3>&- # close fd 3; tail now holds the only write end of the fifo
This is the basic idea; you should now think about timing: when this needs to be started and when it should be stopped.
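Once this is running, the captured lines should be readable via the identifier chosen above:
journalctl -t my_app_name -f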
On an Oracle Solaris 11 console, when I issue ps -ef | grep java I can see the PID of a running java process. It was started in another console window, which was then closed (the .jar application's output was visible there). Is there some way to grab that application's output again without restarting the .jar file?
Application was started like this (as a root user):
java -jar SomeFile.jar &
Writing the output to a file is not an option in this case.
Yes, you can do that, but it involves some mad skills with gdb. Here is how to do it on Linux, and I believe you can do the same on Solaris (since it has gdb and all the system calls I'm going to use).
There are 3 file descriptors for standard streams:
stdin: 0
stdout: 1
stderr: 2
You are interested in stdout and stderr (both are console output), so you need file descriptors with numbers 1 and 2, just keep it in mind.
Now I'm going to show you how to do what you ask with the "okular" application (instead of your "java" application) for the stderr stream.
Run "okular" in terminal, like this:
$ okular &
and then close this terminal. This is just to simulate your situation.
Open another terminal
Look for "okular" process:
$ ps aux | grep okular
Output:
joe 27599 2.2 0.9 515644 73944 ? S 23:46 0:00 okular
So "okular" PID is 27599.
Look for open file descriptors of "okular" process:
$ ls -l /proc/27599/fd
Output:
lrwx------ 1 joe joe 64 Feb 18 23:46 0 -> /dev/pts/0 (deleted)
lrwx------ 1 joe joe 64 Feb 18 23:46 1 -> /dev/pts/0 (deleted)
lrwx------ 1 joe joe 64 Feb 18 23:46 2 -> /dev/pts/0 (deleted)
You see that all 3 streams are deleted.
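This listing is Linux-specific; on Solaris the native way to inspect a process's open files would be the pfiles tool (using the PID from above):
$ pfiles 27599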
Now let's attach to our process with gdb:
$ gdb -p 27599 /usr/bin/okular
Inside gdb, perform these operations:
(gdb) p close(2)
(gdb) p creat("/tmp/okular_2", 0600)
(gdb) detach
(gdb) quit
Here we invoked 2 system calls:
close(), to close the file descriptor of our process's stderr stream
creat(), to create a new file; creat() returns the lowest free descriptor number, which is now 2, so the new file takes stderr's place
p is a gdb command; it prints (in our case) the system calls' return values.
Now all new stderr output of our process will be appended to the text file /tmp/okular_2. We can follow it continuously this way:
$ tail -f /tmp/okular_2
Conclusion
OK, that's it: we revived the stderr stream. You can do the same for the stdout stream; the only difference is that you call "close(1)" instead of "close(2)" in gdb. Also, in your case be sure to replace every "okular" with your "java" process.
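For stdout, that gdb session would look like this (same procedure; /tmp/okular_1 is just a hypothetical target file):
(gdb) p close(1)
(gdb) p creat("/tmp/okular_1", 0600)
(gdb) detach
(gdb) quit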
Most of this answer was inspired by this article.
If you need to revive the stdin stream, you can attach it to a pipe (FIFO) file; see the details here.
Yes, it is possible to snoop any process output with Solaris native tools.
One way would be using dtrace which allows tracing processes even when they are already grabbed by a debugger or similar tool.
This dtrace script will display a given process's stdout:
#!/bin/ksh
pid=$1
dtrace -qn "syscall::write:entry /pid == $pid && arg0 == 1 /
{ printf(\"%s\",copyinstr(arg1)); }"
You should pass the process id of the java application to trace as its first argument, e.g. $(pgrep -f "java -jar SomeFile.jar").
Replace arg0 == 1 with arg0 == 2 if you want to trace stderr instead of stdout.
Should you want to see non-displayable characters (in octal), you might use this slightly modified version:
#!/bin/ksh
pid=$1
dtrace -qn "syscall::write:entry /pid == $pid && arg0 == 1 /
{ printf(\"%s\",copyinstr(arg1)); }" | od -c
Another native way is to use the truss command. The following command will show all writes from your process to any file descriptor, with a full detailed trace of the data written to stdout and stderr (3799 is your target process pid):
truss -w1,2 -t write -p 3799
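Combined with pgrep from above, that could be written as (assuming the same java command line as in the question):
truss -w1,2 -t write -p $(pgrep -f "java -jar SomeFile.jar")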
dtrace: http://docs.oracle.com/cd/E18752_01/html/819-5488/gcgkk.html
truss: http://docs.oracle.com/cd/E36784_01/html/E36870/truss-1.html#scrolltoc
I need to identify a daemon process that is writing to a log file periodically. The problem is that I don't have any idea which process is doing the job, and I need to show some progress to the client by tomorrow. Does anybody have any clue?
I have already sorted out the daemon processes running in the system with the help of the PPID. Any help would be appreciated.
Also, I think it is possible (rarely) for a daemon not to have a PPID of 1. How can we find it in that case?
Try the fuser command on your log file, which will display the PIDs of processes using it.
Example:
$ fuser file.log
file.log: 3065
lsof gives a list of open files along with the processes holding them.
So lsof | grep <filename> should help you.
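lsof can also take the file path directly, which avoids matching unrelated lines (the path here is a placeholder):
lsof /var/log/yourfile.log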
You can use auditctl.
# sudo apt-get install auditd
# sudo /sbin/auditctl -w /path/to/file -p war -k hosts-file
-w /path/to/file watches the given file
-p war watches for write, attribute change and read events
-k hosts-file is a search key for filtering later.
# sudo /sbin/ausearch -f /path/to/file | more
Gives output such as
type=UNKNOWN[1327] msg=audit(1459766547.822:130): proctitle=2F7573722F7362696E2F61706163686532002D6B007374617274
type=PATH msg=audit(1459766547.822:130): item=0 name="/path/to/file" inode=141561 dev=08:00 mode=0100444 ouid=33 ogid=33 rdev=00:00 nametype=NORMAL
type=CWD msg=audit(1459766547.822:130): cwd="/"
type=SYSCALL msg=audit(1459766547.822:130): arch=c000003e syscall=2 success=yes exit=41 a0=7f3c23034cd0 a1=80000 a2=1b6 a3=8 items=1 ppid=24452 pid=6797 auid=4294967295 uid=33 gid=33 euid=33 suid=33 fsuid=33 egid=33 sgid=33 fsgid=33 tty=(none) ses=4294967295 comm="apache2" exe="/usr/sbin/apache2" key="hosts-file"
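The proctitle field on the first line is hex-encoded; it can be decoded with xxd (null bytes separate the arguments):
echo 2F7573722F7362696E2F61706163686532002D6B007374617274 | xxd -r -p | tr '\0' ' '
# prints: /usr/sbin/apache2 -k start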
I'm stuck on a small problem.
I'm launching many bsub commands at the same time, each on a specified host:
bsub -sp 20 -W 0:5 -m $myhostname -q "myQueue" -J "mkdir_script" -o $log_file "script_to_launch param1 param2 param3"
all of this inside a for loop, once per hostname.
The problem is that everything is OK for all hosts except one (always the same one). The job is always in PENDING state, and is not moving to RUN state.
The script to execute simply checks for a folder and creates it if it's not there (a very small task).
Is there a way to see what happens on that host and why my job is not moving to the RUN state?
PS: I just found the bjobs -p command, and I get the following message:
Not specified in job submission: 81 hosts;
Closed by LSF administrator: 3 hosts;
What does this message mean?
The -m option limits you to a particular host, which excludes 81 hosts. The other three have been closed by your system administrator. You would have to contact them to find out why.
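To check the state of the problem host yourself, bhosts should show whether it is closed (a STATUS of closed_Adm means an administrator closed it):
bhosts $myhostname
bhosts -l $myhostname    # more detail on the host's state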