I have an unusual situation with my Asterisk setup.
First, the machine is a FreePBX 14 box running Asterisk 15.4.0.
I have a dialplan that takes card details using an IVR, i.e. "Enter your card number followed by the hash key."
It then assembles a command string to execute a separate Perl script that charges the customer's card for the requested amount.
If I run the Perl script from the CLI, it executes fine and charges the card.
If I go through the dialplan instead, providing the relevant card details, everything appears to work when it gets to the end and executes the script, but nothing happens in terms of charging the card.
In an effort to watch the AGI script run and see what was going wrong, I started Asterisk as root with 'asterisk -vvvvvc' and tried the same thing again. This time the payment went through and worked completely fine.
This leads me to believe that when Asterisk is run this way it has elevated permissions that allow the script to run properly.
Any ideas how I can get this working normally, or which permissions I need to fix?
The script is set to 0777, so it should be executable by everyone, and I've also tried having the script owned by asterisk and by root; neither made any difference.
Here is the command I am using in the dialplan to invoke the script:
exten=>50000,n,AGI(MakePayment.agi,${CardVar},${ExpMonth},${ExpYear},${SecurityVar},${Value},${TransID})
All that does is pass the values through to the Perl script.
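For anyone unfamiliar with the mechanics, here is a minimal sketch in C (not the poster's Perl script; everything here is illustrative) of what an AGI program sees: Asterisk passes the AGI(...) arguments in argv, sends an "agi_variable: value" header block on stdin terminated by a blank line, and then answers each command the script prints with a "200 result=..." line.

/* Hedged AGI sketch, illustrative only. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char line[1024];

    /* Consume the AGI environment header until the blank line. */
    while (fgets(line, sizeof line, stdin) && strcmp(line, "\n") != 0)
        ;   /* a real script would parse agi_channel, agi_uniqueid, ... */

    /* argv[1]..argv[argc-1] are the comma-separated AGI() arguments,
     * e.g. the card number and expiry passed from the dialplan. */
    fprintf(stderr, "AGI got %d args\n", argc - 1);

    /* Issue an AGI command and read Asterisk's response. */
    printf("VERBOSE \"payment script started\" 1\n");
    fflush(stdout);
    fgets(line, sizeof line, stdin);   /* expect "200 result=1" */

    return 0;
}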
As noted previously, I don't think the issue is in the dialplan or the script themselves, but rather in how the two interact.
Just in case someone else runs into something similar in the future: the problem was most likely the Stripe (payment server) CLI program I was using. It didn't like being run from a script without a normal terminal attached, which explains why it worked in both terminal-attached scenarios but not from the dialplan.
I'm writing an app that allows an Arduino plugged in via USB to send serial data to the app to graph it in realtime. It can scan available ports for the Arduino and attempt to connect to it, but I'm running into permissions issues whether I use pyserial or QtSerialPort. I have added my user to the groups tty, uucp, and dialout. (When listing ports I see that they belong to uucp.) This doesn't seem to do anything.

I can chmod a+rw the port every time the Arduino is plugged in, but this is not practical because less technical users (i.e. my kids) need to be able to plug it in and use it via a GUI that I'm writing. I've seen suggestions to run the whole script with sudo, but this seems less safe than it needs to be, and also requires typing in the command line.
Is there a way to read from the serial port without resetting permissions every time I plug the USB cable in? Or if not, is there an accepted way to do this from the GUI to make sure the permissions are right before the attempt to connect, without running the whole program as sudo? I'm building this on Linux (Arch) btw.
I solved this using udev rules. I created the rules file, which I called /etc/udev/rules.d/80-arduino.rules, and inside I put the following:
SUBSYSTEMS=="usb", ACTION=="add", DRIVERS=="usb", ATTRS{idProduct}=="0042", ATTRS{idVendor}=="2341", ATTRS{manufacturer}=="Arduino (www.arduino.cc)", ATTRS{serial}=="85734323231351404021", RUN+="/bin/arduino_added.sh", RUN+="/bin/device_added.sh", MODE="0660"
This selects my specific device by manufacturer and serial number (ATTRS{serial}=="85734323231351404021"), runs a little script that writes something to a logfile it creates in /tmp (useful for debugging), and MODE="0660" sets the device node's permissions so the port can be accessed.
I had to mess with it a bit to get it to work. Running sudo udevadm control --reload was enough to get the script to write to the logfile each time the device was plugged in, but for some reason I had to reboot the computer before the permissions took effect.
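For reference, a leaner variant of the same idea (an untested sketch; the vendor/product IDs here are for a typical Arduino Uno and should be adjusted for your board) that skips the helper scripts, makes the device group-accessible, and adds a stable symlink:

SUBSYSTEM=="tty", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0042", MODE="0660", GROUP="uucp", SYMLINK+="arduino"

With GROUP="uucp" and your user already in the uucp group, the 0660 mode should be enough; sudo udevadm control --reload followed by sudo udevadm trigger (or re-plugging the device) should apply it without a reboot.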
I need to run some commands on some remote Solaris/Linux servers and collect their output in a log file on my local server.
Currently, I'm using a simple Expect script, residing on the local server, to fire the commands on the target system. I then redirect the output of the Expect script to a log file, like this:
/usr/local/bin/expect script.exp >> logfile.txt
However, this is proving to be very unreliable, as the connection to the servers fluctuates a lot, leading to incomplete logs and hung scripts.
Is there a better and more reliable way to go about this task?
I have implemented fedorqui's answer:
Created a (shell) script that runs the required commands on the target servers.
Deployed this script to all servers.
Executed this script via expect, from my local (central) server.
Finally collected logs individually from each server after successful completion, and processed them.
The solution has been working fine without a glitch till now.
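For anyone following along, here is a sketch of what the Expect side of step 3 might look like (not the actual script; the host, user, deployed script path, and password handling are all placeholders). The point is to log the whole session locally and fail fast instead of hanging when the connection flakes out:

#!/usr/local/bin/expect -f
set timeout 60                      ;# give up instead of hanging forever
log_file -a logfile.txt             ;# append the entire session to the log
spawn ssh user@target /path/to/collect.sh
expect {
    "assword:" { send "$env(SSHPASS)\r"; exp_continue }
    timeout    { puts "connection timed out"; exit 1 }
    eof        { }
}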
I want to write a remote console that works like a telnet server: a user telnets into the server, then types commands to do some work.
A good example of this is the console of a router OS. What confuses me right now is this: I can accept the user's input, do something, and print some text back, but I want to use ncurses to give the console more features (such as command auto-completion, syntax colouring, ...). How can I do that? Since the console is on the user's side, if the server calls ncurses APIs it will just change things on the server...
Maybe this is a stupid question, but I'm a real newbie at this. Any suggestions are appreciated.
This is more difficult than you might think.
You need to understand how terminals work - they use special control sequences for e.g. moving the cursor or color output. This is described by a terminfo file which is terminal-specific. Ncurses translates API calls (e.g. move cursor to a certain position) to such control sequences using terminfo.
Since the terminal (nowadays xterm, gnome-terminal, screen, tmux, etc) is on the client side, you have to pass the type of terminal from the client to the server. That's why e.g. ssh passes this information from the ssh client to the server (try echo $TERM in your ssh session - it might be 'linux' if you are logged in via the console, or 'xterm', if you are using X and an xterm). Also, you better have the respective terminfo available on the server.
Another piece of the puzzle is pseudo terminals. As relatively few people use serial terminals nowadays, their semantics are emulated so that applications and libraries (e.g. curses and its friends) originally developed for serial consoles keep working. This is achieved via pseudo terminals: these are like pipes, where a master and a slave device communicate, and anything written on one side comes out on the other. A login process such as getty, for example, can just use one side of a pty device and think it's a serial line; your server program must handle the other side of the pty, sending everything it gets from the pty to your client via the network.
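To make the pty mechanics concrete, here is a minimal sketch in C (illustrative only, with error handling abbreviated). It opens a master/slave pair, runs a shell on the slave side, and relays bytes between the master and its own stdin/stdout; a real server would relay to a network socket instead:

#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/select.h>

int main(void)
{
    /* Open the master side and make the slave available. */
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty setup");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                 /* child: becomes the shell */
        setsid();                   /* new session; on most systems the
                                       slave we open next becomes our
                                       controlling terminal */
        int slave = open(ptsname(master), O_RDWR);
        dup2(slave, 0); dup2(slave, 1); dup2(slave, 2);
        close(master);
        close(slave);
        execlp("sh", "sh", "-i", (char *)NULL);
        _exit(127);
    }

    /* Parent: relay loop between our stdio and the pty master. */
    for (;;) {
        fd_set fds;
        char buf[512];
        ssize_t n;
        FD_ZERO(&fds);
        FD_SET(0, &fds);
        FD_SET(master, &fds);
        if (select(master + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(0, &fds)) {          /* our input -> shell */
            if ((n = read(0, buf, sizeof buf)) <= 0) break;
            write(master, buf, n);
        }
        if (FD_ISSET(master, &fds)) {     /* shell output -> us */
            if ((n = read(master, buf, sizeof buf)) <= 0) break;
            write(1, buf, n);
        }
    }
    return 0;
}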
Terminal emulators also use ptys, type tty into your terminal, and you'll get something like /dev/pts/9 if you're using a terminal emulator. On the other side of the pty it's usually your shell, communicating with your terminal emulator via the pty.
Your client program can more or less just use standard input and standard output. If your terminal information is correct, the rest will be handled by your terminal emulator, just pass anything you receive from your server program to stdout, and send anything you read from stdin to your server program.
Hopefully I haven't left out any important detail. Good luck!
It is possible to have ncurses operate on streams other than stdin and stdout. Call newterm() instead of initscr() to set the input and output file handles for ncurses.
But you will need to know what sort of terminal is on the remote end of the connection (ssh and telnet both have mechanisms for communicating this to the server), and you will also want a fallback to a non-ncurses interface in case the remote end is not a supported terminal type (or you can't determine the terminal type).
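For illustration, a hedged C sketch of that approach; serve_client and termtype are assumptions of mine, with 'sock' standing for an already-connected client socket and termtype for whatever terminal name the client reported (e.g. via the telnet TERMINAL-TYPE option):

#include <curses.h>
#include <stdio.h>
#include <unistd.h>

void serve_client(int sock, const char *termtype)
{
    /* Wrap the socket in stdio streams; dup() so each stream owns its fd. */
    FILE *in  = fdopen(dup(sock), "r");
    FILE *out = fdopen(dup(sock), "w");

    /* newterm() is the multi-terminal replacement for initscr(). */
    SCREEN *scr = newterm(termtype, out, in);
    if (scr == NULL) {
        /* Unknown terminal type: fall back to a plain line-based UI. */
        fclose(in);
        fclose(out);
        return;
    }
    set_term(scr);

    mvprintw(0, 0, "hello from the server");
    refresh();
    getch();                 /* wait for one keypress from the client */

    endwin();
    delscreen(scr);
    fclose(in);
    fclose(out);
}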
When I start a process in the background in a terminal and the terminal somehow gets closed, I can no longer interact with that process. I'm not sure, but I think the process also gets killed. Can anyone tell me how I can detach that process from my terminal, so that even if I close the terminal I can still interact with the same process from a new one?
I am new to Unix, so any extra information will help.
The command you're looking for is disown.
disown <processid>
This is as close as you can get to a nohup. It detaches the process from the current login and allows it to continue running. Thanks David Korn!
http://www2.research.att.com/~gsf/man/man1/disown.html
and I just found reptyr, which lets you reparent a disowned process:
https://github.com/nelhage/reptyr
It's already in the packages for Ubuntu.
BUT if you haven't started the process yet and you're planning on doing this in the future, then the way to go is screen or tmux. I prefer screen.
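As a quick illustration of both paths (bash syntax; the job spec and session name are just examples):

long_running_job &          # started in the current shell
disown %1                   # job 1 no longer belongs to this shell

screen -S work              # start a named session; detach with Ctrl-a d
screen -r work              # reattach to it later from any terminal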
You might also consider the screen command. It has the "restore my session" functionality. Admittedly I have never used it, and had forgotten about it.
Starting the process as a daemon, or with nohup, might not do everything you want in terms of re-capturing stdout/stdin.
There are a bunch of examples on the web. On Google, try "unix screen command" and "unix screen tutorial":
http://www.thegeekstuff.com/2010/07/screen-command-examples/
GNU Screen: an introduction and beginner's tutorial
First Google result for "UNIX daemonizing a process":
See the daemon(3) manpage for a short overview. The main point of daemonizing is going into the background without quitting or holding anything up. A list of things a process can do to achieve this:
fork()
setsid()
close/redirect stdin/stdout/stderr to /dev/null, and/or ignore SIGHUP/SIGPIPE.
chdir() to /.
If started as a root process, you also want to do the things you need to be root for first, and then drop privileges. That is, change the effective user to the "daemon" user or "nobody" with setuid()/setgid(). If you can't drop all privileges and need root access sometimes, use seteuid() to temporarily drop them when not needed.

If you're forking a daemon, then also set up child handlers and, if calling exec, set the close-on-exec flag on all file descriptors your children won't need.
And here's a HOWTO on creating Unix daemons: http://www.netzmafia.de/skripten/unix/linux-daemon-howto.html
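Putting the steps quoted above together, a minimal C skeleton might look like this (illustrative only, not production-ready; real daemons also handle pidfiles, logging, double-forking, and so on):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <signal.h>
#include <sys/stat.h>

static void daemonize(void)
{
    pid_t pid = fork();            /* 1. fork and let the parent exit */
    if (pid < 0) exit(1);
    if (pid > 0) exit(0);

    if (setsid() < 0) exit(1);     /* 2. new session, lose the ctty */

    signal(SIGHUP, SIG_IGN);       /* 3. ignore SIGHUP/SIGPIPE */
    signal(SIGPIPE, SIG_IGN);

    if (chdir("/") < 0) exit(1);   /* 4. don't pin any mount point */
    umask(0);

    /* Redirect stdio to /dev/null. */
    int devnull = open("/dev/null", O_RDWR);
    dup2(devnull, STDIN_FILENO);
    dup2(devnull, STDOUT_FILENO);
    dup2(devnull, STDERR_FILENO);
    if (devnull > 2) close(devnull);

    /* If started as root: do root-only setup here, then drop privileges,
     * e.g. setgid(...) followed by setuid(...) to an unprivileged user. */
}

int main(void)
{
    daemonize();
    /* ... daemon work goes here ... */
    return 0;
}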
'Interact with' can mean a couple of things.
The reason a program started at the command line exits when the terminal ends is that the shell, when it exits, sends that process a HUP signal (see the documentation for kill(1) for some introduction; HUP, by the way, is short for 'hang up', and originally indicated that the user had hung up the modem/telephone). The default response to a HUP signal is that the process is terminated, that is, the invoked program exits.
The details are slightly more fiddly, but this is the general intuition.
The nohup command tells the shell to start the program, and to do so in a way that this HUP signal is ignored. That is, the program keeps going after the invoking terminal exits.
You can still interact with this program by sending it signals (see kill(1) again), but this is a very limited sort of interaction, and depends on your program being written to do sensible things when it receives those signals (signals USR1 and USR2 are useful things to trap, if you're into that sort of thing). Alternatively, you can interact via named pipes, or semaphores, or other bits of inter-process communication (IPC). That gets fiddly pretty quickly.
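As a small illustration of that signal-based interaction, here is a hypothetical long-running C program (name and behaviour invented for the example) that reacts to SIGUSR1, so a detached process can still be poked with kill -USR1 <pid> from another terminal:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t got_usr1 = 0;

static void on_usr1(int sig) { (void)sig; got_usr1 = 1; }

int main(void)
{
    signal(SIGUSR1, on_usr1);
    for (;;) {
        pause();                 /* sleep until any signal arrives */
        if (got_usr1) {
            got_usr1 = 0;
            /* reopen logs, dump stats, whatever makes sense here */
        }
    }
}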
I suspect what you're after, though, is being able to reattach a terminal to the process. That's a rather more complicated process, and applications like screen do suitably complicated things behind the scenes to make that happen.
The nohup thing is a sort of quick-and-dirty daemonisation. The daemon(3) function does the daemonisation 'properly', doing various bits of tidy-up as described in YePhIcK's answer, to comprehensively break the link with the process/terminal that invoked it. You can interact with that daemonised process with the same IPC tools as above, but not straightforwardly with a terminal.
How can we trigger a shell script on a Unix server through an email with a particular subject?
procmail allows you to act on incoming mail, including filtering and starting external commands.
Some useful links:
general procmail documentation: http://pm-doc.sourceforge.net/doc/
start a shell command as a procmail rule: http://porkmail.org/era/procmail/mini-faq.html#rtfm
Just in case the link goes down, here is the relevant text from the second link above:
Q: How can I run an arbitrary Perl or shell script on all or selected incoming mail?

A: Install Procmail. Read the manual pages (there are several). Thank you.
:0
* conditions, if any
| your-script-here
The conditions, in their simplest form, are regular expressions to match against the header of each incoming mail message. Correction: even simpler, you can leave out the condition lines completely if you want to do your action (in this case, run a shell script) unconditionally.

More complicated conditions can also be exit codes of other shell scripts or programs, or tests against the full body of the message, or against Procmail variables (Procmail's variables are also exported to the environment of subprocesses, so they are essentially environment variables. There are details about this later in this FAQ.)

Actions can also be to save the message to a folder (appended to a Unix mailbox file, or written to a new file in a directory) or to forward the message to one or more other addresses. Finally, the action can be a nested block of more "recipes," as these condition-action mappings are called in Procmail jargon, to try if the outer condition is met. The procmailrc(5) manual page has the full scoop.

Obviously, you are not restricted to Perl or shell scripts. Anything you can run from a Unix command prompt can be run from Procmail, in principle, although running interactive programs doesn't usually make much sense.
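Applied to the original question, a recipe that fires a script only for a particular subject might look like this (the subject keyword and script path are placeholders, not anything from the quoted FAQ):

:0
* ^Subject:.*RUN-JOB
| /usr/local/bin/handle_mail.sh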
More general, but to my mind less useful than Wim's procmail suggestion: you can even just point your .forward at an executable with "|script.sh".
You could, in theory, write a program to monitor/poll the incoming email server and check the subject lines using the standard POP3 protocol; if a subject line has particular trigger keywords, invoke the shell script... This is the rough order of approach that would suit, and there may already be an open-source solution out there...
Using sockets, connect to the incoming email server by IP and port (usually 110 for POP3), non-blocking so as not to seize up and chew CPU time, within a thread looping forever
List the emails using the POP3 protocol
Pull down the headers via the POP3 protocol and run a regexp against the subject line
If the regexp matches the subject line, issue the trigger, perhaps a system call that invokes the shell script (a rough sketch follows)
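A rough C sketch of those steps, to make the idea concrete. The host, credentials, trigger keyword, and script path are all placeholders; a real implementation would need TLS, full multi-line reply parsing, a polling loop, and proper error handling:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

static int sendline(int s, const char *cmd)
{
    char buf[512];
    snprintf(buf, sizeof buf, "%s\r\n", cmd);
    return send(s, buf, strlen(buf), 0) < 0 ? -1 : 0;
}

static int readreply(int s, char *buf, size_t len)
{
    ssize_t n = recv(s, buf, len - 1, 0);   /* one recv: sketch only */
    if (n <= 0) return -1;
    buf[n] = '\0';
    return 0;
}

int main(void)
{
    struct addrinfo hints = {0}, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("mail.example.com", "110", &hints, &res) != 0)
        return 1;

    int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s < 0 || connect(s, res->ai_addr, res->ai_addrlen) < 0)
        return 1;
    freeaddrinfo(res);

    char buf[8192];
    readreply(s, buf, sizeof buf);              /* +OK greeting */
    sendline(s, "USER someuser");  readreply(s, buf, sizeof buf);
    sendline(s, "PASS secret");    readreply(s, buf, sizeof buf);

    /* Fetch headers of message 1 only; a real poller would LIST and loop. */
    sendline(s, "TOP 1 0");
    readreply(s, buf, sizeof buf);
    if (strstr(buf, "Subject:") && strstr(buf, "RUN-JOB"))
        system("/usr/local/bin/triggered.sh");  /* fire the trigger */

    sendline(s, "QUIT");
    close(s);
    return 0;
}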