Hello! Can we stop a DNS server or start the DHCP service using an Asterisk server? (When the user calls the server, he would receive a prompt such as: "press 1 to stop the DNS service, press 2 to restart the DNS server".)
Sure, you can do that using the System() command in the dialplan, or using an AGI script.
Please note that Asterisk's SIP stack may not be able to set up new calls while the DNS server is down.
System()
Execute a system (Linux shell) command.

Description
System(command) – system command alone
System(command arg1 arg2 etc) – pass in some arguments
System(command|args) – use the standard Asterisk syntax to pass in arguments

Technical Info
Executes a command by using system(). System() passes the string unaltered to system(3). Running “man 3 system” will show exactly what system(3) does:
system() executes a command specified in string by calling /bin/sh -c string, and returns after the command has been completed.
Therefore System(command arg1 arg2 etc) can be used to pass along arguments.

Return codes
System(command): Executes a command by using system(). If the command fails, the console should report a fallthrough. If you need to return a specific value (string) to the dialplan, use either AGI or the Asterisk SHELL() function introduced in Asterisk 1.6.0.
The result of execution is returned in the SYSTEMSTATUS channel variable:
FAILURE – could not execute the specified command
SUCCESS – the specified command executed successfully
APPERROR – the command ran but reported an application error; triggered, for example, when you try to delete a file that is not there (note: APPERROR is not documented)
Note: I don't seem to be able to create a situation where FAILURE is returned.
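For example, here is a minimal extensions.conf sketch of the IVR described in the question. The context name, prompt files, and service-control commands are all assumptions, and the asterisk user would need sudo rights for the commands shown:

[dns-ivr]                                          ; hypothetical context name
exten => s,1,Answer()
exten => s,2,Background(dns-menu)                  ; hypothetical prompt: "press 1 to stop the DNS service..."
exten => s,3,WaitExten(10)
exten => 1,1,System(sudo systemctl stop named)     ; assumes systemd and a matching sudoers entry
exten => 1,2,GotoIf($["${SYSTEMSTATUS}" = "SUCCESS"]?ok,1:fail,1)
exten => 2,1,System(sudo systemctl restart named)
exten => 2,2,GotoIf($["${SYSTEMSTATUS}" = "SUCCESS"]?ok,1:fail,1)
exten => ok,1,Playback(service-ok)                 ; hypothetical confirmation prompt
exten => ok,2,Hangup()
exten => fail,1,Playback(service-failed)           ; hypothetical failure prompt
exten => fail,2,Hangup()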
Below is a simple example of what I'm trying to accomplish. I'm trying to force an ssh script to not wait for all child processes to exit before returning. The purpose is to launch a daemon process on a remote host via ssh.
test.sh
#!/bin/bash
(
sleep 2
echo "done"
) &
When I run the script on the console it returns immediately, with "done" appearing 2 seconds later.
When I run the script via ssh, the ssh command does not return immediately; it appears to wait until all child processes have terminated before ssh exits.
ssh example
$ ssh mike@127.0.0.1 /home/mike/test.sh
(2 seconds)
done
standard terminal example
$ ./test.sh
$
(2 seconds)
done
How can I make ssh return when the parent/main process has terminated?
EDIT:
I'm aware of ssh's -f option, which runs the process in the background, but it leaves the ssh process and connection open on the source host. For my purposes this is unsuitable.
ssh mike@127.0.0.1 /home/mike/test.sh
When you run ssh in this fashion, the remote ssh server creates a set of pipes (or socketpairs) which become the standard input, output, and error for the process it was asked to run, in this case the script process. The ssh server doesn't end the session based on when the script process exits. Instead, it ends the session when it reads an end-of-file indication on the script process's standard output and standard error.
In your case, the script process creates a child process which inherits the script's standard input, output, and error. A pipe (or socketpair) only returns EOF when all possible writers have exited or closed their end of the pipe. As long as the child process is running and has a copy of the standard output/error file descriptors, the ssh server won't read an EOF indication on those descriptors and it won't close the session.
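You can observe this behaviour directly (user@host is a placeholder for any reachable account):

ssh user@host 'sleep 30 &'    # the remote shell exits immediately, yet ssh
                              # still blocks ~30s: the backgrounded sleep
                              # holds copies of stdout/stderr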
You can get around this by redirecting standard output and standard error in the command that you pass to the remote server:
ssh mike@127.0.0.1 '/home/mike/test.sh > /dev/null 2>&1'
(note the quotes are important)
This avoids passing the standard output and standard error created by the ssh server to the script process or the subprocesses that it creates.
Alternately, you could add a redirection to the script:
#!/bin/bash
(
exec > /dev/null 2>&1
sleep 2
echo "done"
) &
This causes the script's child process to close its copies of the original standard output and standard error.
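As a sketch of a fully detached variant (assuming nothing on the remote side beyond a standard shell), you can drop stdin as well, so the child holds no descriptor tied to the ssh session at all; the log path here is purely illustrative:

#!/bin/bash
(
    # Drop all three session descriptors so the ssh server sees EOF
    # immediately; the child keeps running after ssh returns.
    exec </dev/null >/dev/null 2>&1
    sleep 2
    echo "done" >> /tmp/test.log    # hypothetical log file, since stdout is gone
) &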
Problem
I have a multi_exec.pl that handles timed-out execution of the commands provided.
We call this multi_exec.pl at various places in our legacy application.
Sample call :
$grab = `multi_exec.pl -1 'bcp_cmd-1' 'bcp_cmd-2' ... 'bcp_cmd-n'`
I want to understand how to achieve the following using STDOUT/STDERR redirections:
capture the STDOUT/STDERR of the individual bcp commands on the terminal
capture failure messages from multi_exec.pl on STDERR
send the STDOUT of multi_exec.pl to /dev/null (I don't want to capture its STDOUT)
"capture failure messages on STDERR from multi_exec.pl"
Nothing special needs to be done for this: STDERR of the parent script, as well as of the individual commands, goes to the terminal by default.
"STDOUT of multi_exec.pl needs to go to /dev/null" and "capture bcp STDOUT[ERR] of individual bcp commands on the terminal"
These are conflicting requirements, because the STDOUT of the parent script and of the individual bcp commands both end up on the terminal by default, and there is no way to bifurcate just one of them to /dev/null from the calling shell. You could instead modify multi_exec.pl so that it writes its own output to a file when one is specified and writes nothing to stdout otherwise. That way, whatever does appear on the STDOUT of multi_exec.pl is guaranteed to come from the bcp commands. A sketch of this follows.
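A minimal sketch of that modification (the -o option and its handling are assumptions for illustration, not your actual multi_exec.pl code):

#!/usr/bin/perl
use strict;
use warnings;
use Getopt::Std;

# Hypothetical -o option: a file for multi_exec.pl's own messages.
# Without it, the script's own output is discarded, so anything that
# appears on stdout must have come from the bcp child commands.
my %opt;
getopts('o:', \%opt);
my $own = defined $opt{o} ? $opt{o} : '/dev/null';
open(my $log, '>', $own) or die "cannot open $own: $!";

print {$log} "multi_exec: starting commands\n";   # parent-only message
# The child commands inherit the real STDOUT/STDERR unchanged,
# so their output still reaches the terminal.
system($_) for @ARGV;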
This is surprisingly hard to search for.
The only things I can find are the TRAP* functions, which can be triggered by various signals.
But I really want to watch all stdout/stderr, and have a function trigger if a certain string is matched.
(Example: refreshing Kerberos credentials. A command fails and emits a standard error message indicating I need to authenticate; I want to automatically run the command that does so.)
The shell doesn't see a command's stdout/stderr unless they are piped into it, so you need to redirect stdout/stderr to your zsh function. You can also send them both to your zsh function and somewhere else. For instance:
your_command 2>&1 | tee >(your_zsh_function)
or
your_command |& tee >(your_zsh_function)
or
your_command >>(your_zsh_function) >>/dev/tty 2>&1
your_zsh_function will grep its input for a string match. A drawback is that you may have buffering problems.
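A minimal sketch of such a function (the matched string and the re-authentication command are assumptions for illustration):

your_zsh_function() {
  local line
  while IFS= read -r line; do
    print -r -- "$line"                          # pass the output through
    if [[ $line == *"Ticket expired"* ]]; then   # hypothetical Kerberos message
      kinit                                      # hypothetical re-auth step
    fi
  done
}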
But concerning your example, if I understand correctly, you may want to use the expect utility: "programmed dialogue with interactive programs".
I am developing an application that establishes a server-client connection using the QTcp* classes.
The client sends the server a number.
The received string is checked for its length and validity (is it really a number?).
If everything is OK, then the server replies back with a file path (which depends on the sent number).
The client checks if the file exists and if it is a valid image. If the file complies with the rules, it executes a command on the file.
What security concerns exist on this type of connection?
The program is designed for Linux systems and the external command on the image file is executed using QProcess. If the string sent contained something like (do not run the following command):
; rm -rf /
then it would be blocked on the file not found security check (because it isn't a file path). If there wasn't any check about the validity of the sent string then the following command would be executed:
command_to_run_on_image ; rm -rf /
which would cause panic! But this cannot happen.
So, is there anything I should take into consideration?
If you open a console and type command ; rm -rf /*, something bad would likely happen, because the command line is processed by the shell: it parses the text input, e.g. splits commands at the ; delimiter and splits arguments on spaces, and then executes the parsed commands with the parsed arguments using the system API.
However, when you use process->start("command", QStringList() << "; rm -rf /*");, there is no such danger. QProcess does not invoke a shell; it executes the command directly through the system API. The result is similar to running command "; rm -rf /*" (quotes included) in a shell: the whole string arrives as a single, inert argument.
So you can be sure that only your command will be executed and that the parameter is passed to it as-is. The only danger is the possibility for an attacker to make the command run on any file path he can construct; the consequences depend on what the command does.
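A minimal Qt sketch of the checks you describe (the function and tool names are placeholders, not your actual code):

#include <QProcess>
#include <QFileInfo>
#include <QImageReader>

// Validate the client-supplied path, then run the external tool with an
// argument list. QProcess passes each argument directly to the exec-style
// system API, so no shell ever parses the string and "; rm -rf /" stays
// an inert file name.
bool runOnImage(const QString &path)
{
    QFileInfo info(path);
    if (!info.exists() || !info.isFile())
        return false;                      // fails the "file exists" check
    QImageReader reader(path);
    if (!reader.canRead())
        return false;                      // not a valid image
    QProcess proc;
    proc.start("command_to_run_on_image", QStringList() << path);   // placeholder tool
    return proc.waitForFinished();
}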
Here is the scenario,
$hostname
server1
I have the below script in server1,
#!/bin/ksh
echo "Enter server name:"
read server
rsh -n ${server} -l mquser "/opt/hd/ca/scripts/envscripts.ksh"
qdisplay
# script ends.
In the above script I am logging into another server, say server2, and executing the script "envscripts.ksh", which sets a few aliases (including "qdisplay") defined in it.
I can log in to the other server successfully, but I am unable to use the alias set by "envscripts.ksh".
I am getting the below error:
-bash: qdisplay: command not found
Can someone please point out what needs to be corrected here?
Thanks,
Vignesh
The other responses and comments are correct. Your rsh command needs to execute both the ksh script and the subsequent command in the same invocation. However, I thought I'd offer an additional suggestion.
It appears that you are writing custom instrumentation for WebSphere MQ. Your approach is to remote shell to the WMQ server and execute a command to display queue attributes (probably depth).
The objective of writing your own instrumentation is admirable; however, attempting to do it via remote shell is not an optimal approach. It requires you to maintain a library of scripts on each MQ server and in some cases to maintain these scripts in different languages.
I would suggest that a MUCH better approach is to use the MQSC client available in SupportPac MO72. This allows you to write the scripts once, and then execute them from a central server. Since the MQSC commands are all done via MQ client, the same script handles Windows, UNIX, Linux, iSeries, etc.
For example, you could write a script that remotely queried queue depths and printed a list of all queues with depth > 0. You could then either execute this script directly against a given queue manager or write a script to iterate through a list of queue managers and collect the same report for the entire network. Since the scripts are all running on the one central server, you do not have to worry about getting $PATH right, differences in commands like tr or grep, where ksh or perl are installed, etc., etc.
Ten years ago I wrote the scripts you are working on when my WMQ network was small. When the network got bigger, these platform differences ate me alive and I was unable to keep the automation up and running. When I switched to using WMQ client and had only one set of scripts I was able to keep it maintained with far less time and effort.
The following script assumes that the QMgr name is the same as the host name except in UPPER CASE. You could instead pass QMgr name, hostname, port and channel on the command line to make the script useful where QMgr names do not match the host name.
#!/usr/bin/perl -w
#-------------------------------------------------------------------------------
# mqsc.pl
#
# Wrapper for MO72 SupportPac mqsc executable
# Supply parm file name on command line and host names via STDIN.
# Program attempts to connect to hostname on SYSTEM.AUTO.SVRCONN and port 1414
# redirecting parm file into mqsc.
#
# Intended usage is...
#
# mqsc.pl parmfile.mqsc
# host1
# host2
#
# -- or --
#
# mqsc.pl parmfile.mqsc < nodelist
#
# -- or --
#
# cat nodelist | mqsc.pl parmfile.mqsc
#
#-------------------------------------------------------------------------------
use strict;
$SIG{ALRM} = sub { die "timeout" };
$ENV{PATH} =~ s/:$//;
my $File = shift;
die "No mqsc parm file name supplied!" unless $File;
die "File '$File' does not exist!\n" unless -e $File;
while (<>) {
    my @Results;
    chomp;
    next if /^\s*[#*]/;  # Allow comments using # or *
    s/^\s+//;            # Delete leading whitespace
    s/\s+$//;            # Delete trailing whitespace
    # Do not accept hosts with embedded spaces in the name
    die "ERROR: Invalid host name '$_'\n" if /\s/;
    # Silently skip blank lines
    next unless ($_);
    my $QMgrName = uc($_);
    #----------------------------------------------------------------------------
    # Run the parm file in
    eval {
        alarm(10);
        @Results = `mqsc -E -l -h $_ -p detmsg=1,prompt="",width=512 -c SYSTEM.AUTO.SVRCONN < $File 2>&1 | grep -v "^MQSC Ended"`;
    };
    if ($@) {
        if ($@ =~ /timeout/) {
            print "Timed out connecting to $_\n";
        } else {
            print "Unexpected error connecting to $_: $!\n";
        }
    }
    alarm(0);
    if (@Results) {
        print join("\t", @Results, "\n");
    }
}
exit;
The parmfile.mqsc is any valid MQSC script. One that gathers all the queue depths looks like this:
DISPLAY QL(*) CURDEPTH
I think the real problem is that the r(o)sh cmd only executes the remote envscripts.ksh file and that your script is then trying to execute qdisplay on your local machine.
You need to 'glue' the two commands together so they are both executed remotely.
EDITED per comment from Gilles (He is correct)
rosh -n ${server} -l mquser ". /opt/hd/ca/scripts/envscripts.ksh ; qdisplay"
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, or give it a + (or -) as a useful answer.