I am attempting to kick off a third-party program using the EXEC command in PeopleSoft. It is returning error code 127. When I kick the program off from the Unix command line, I get no error. Does anybody know what code 127 is? Or have a list of all the return codes?
I think it is likely the Unix shell return code, in which case 127 is "command not found".
See http://tldp.org/LDP/abs/html/exitcodes.html
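You can verify this at a shell prompt; running a nonexistent command (the name below is just a placeholder) returns status 127, though the exact error text varies by shell:

$ nosuchcommand
sh: nosuchcommand: command not found
$ echo $?
127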
You may need to make sure your Exec call specifies the correct path, relative or absolute, and that any expected environment variables are available. It may help to test with a simple program to see whether calling through Exec succeeds at all. On the server, the program runs under the ID that started the app server, whose environment may be sourced differently than an individual user's. If you use relative paths, I believe execution starts in $PS_HOME.
If you can provide the code snippet, someone may be able to offer other suggestions as well.
I started a local server and want to add some simple commands with Python; the server is running Forge 1.12 and a couple of mods.
My idea was to catch wrong commands and send the right result instead.
An easy test command would be /echo Hello World, with the result Hello World in the chat.
To get the command, I am using the last line of the latest console log file, which is equal to the current console content. But I can't read wrong commands in the console. So if I run the echo command, I get a message in the chat: Unknown command. Try /help for a list of commands.
I think there could be two solutions:
Register the command somewhere so that it reaches the console, preventing the server from responding that way, and then read the command from the console to use it.
Find a config option that also prints wrong commands to the console.
Thanks for helping
There is no way to 'cancel' commands through the API, but there is a trick to effectively cancel commands anyway. You want to listen to the Forge CommandEvent and modify the command to another existing command that does nothing (you can create one yourself). This gives you a place to handle all commands (you'll have to filter for nonexistent commands, otherwise you'd cancel all of them), and it will prevent the Unknown command message from showing.
I just started using Airflow to coordinate our ETL pipeline.
I encountered a broken pipe error when I run a DAG.
I've seen a general Stack Overflow discussion here.
My case is more on the Airflow side. According to the discussion in that post, the possible root cause is:
The broken pipe error usually occurs if your request is blocked or takes too long; after the request-side timeout, it'll close the connection, and then, when the response side (server) tries to write to the socket, it will throw a broken pipe error.
This might be the real cause in my case. I have a PythonOperator that starts another job outside of Airflow, and that job can be very lengthy (i.e. 10+ hours). I wonder what mechanism Airflow has in place that I could leverage to prevent this error.
Can anyone help?
UPDATE 1 (20190303-1):
Thanks to @y2k-shubham for the SSHOperator. I was able to use it to set up an SSH connection successfully and to run some simple commands on the remote site (indeed, the default SSH connection has to be set to localhost because the job is on localhost), and I can see the correct results of hostname and pwd.
However, when I attempted to run the actual job, I received the same error; again, the error is from the pipeline job instead of the Airflow dag/task.
UPDATE 2 (20190303-2):
I had a successful run (airflow test) with no error, followed by another failed run (scheduler) with the same error from the pipeline.
While I'd suggest you keep looking for a more graceful way of achieving what you want, I'm putting up example usage as requested.
First you've got to create an SSHHook. This can be done in two ways:
The conventional way, where you supply all requisite settings like host, user, password (if needed), etc. from the client code where you are instantiating the hook. I'm hereby citing an example from test_ssh_hook.py, but you must thoroughly go through SSHHook as well as its tests to understand all possible usages.
ssh_hook = SSHHook(remote_host="remote_host",
                   port="port",
                   username="username",
                   timeout=10,
                   key_file="fake.file")
The Airflow way, where you put all connection details inside a Connection object that can be managed from the UI, and only pass its conn_id to instantiate your hook.
ssh_hook = SSHHook(ssh_conn_id="my_ssh_conn_id")
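For reference, Airflow can also pick such a Connection up from an environment variable named AIRFLOW_CONN_<CONN_ID> containing a connection URI; a hypothetical example for the conn_id above:

export AIRFLOW_CONN_MY_SSH_CONN_ID='ssh://user@remote_host:22'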
Of course, if you're relying on SSHOperator, then you can directly pass the ssh_conn_id to the operator.
ssh_operator = SSHOperator(ssh_conn_id="my_ssh_conn_id")
Now if you're planning to have a dedicated task for running a command over SSH, you can use SSHOperator. Again, I'm citing an example from test_ssh_operator.py, but go through the sources for a better picture.
task = SSHOperator(task_id="test",
                   command="echo -n airflow",
                   dag=self.dag,
                   timeout=10,
                   ssh_conn_id="ssh_default")
But then you might want to run a command over SSH as part of your bigger task. In that case, you don't want an SSHOperator; you can still use just the SSHHook. The get_conn() method of SSHHook provides you with an instance of paramiko's SSHClient, with which you can run a command using an exec_command() call:
ssh_client = ssh_hook.get_conn()  # paramiko SSHClient obtained from the hook
my_command = "echo airflow"
stdin, stdout, stderr = ssh_client.exec_command(
    command=my_command,
    get_pty=my_command.startswith("sudo"),
    timeout=10)
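As a minimal follow-up sketch, you can collect the result with standard paramiko calls (recv_exit_status() blocks until the remote command finishes):

exit_status = stdout.channel.recv_exit_status()  # wait for the remote command to exit
output = stdout.read().decode("utf-8")
if exit_status != 0:
    raise RuntimeError("remote command failed: " + stderr.read().decode("utf-8"))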
If you look at SSHOperator's execute() method, it is a rather complicated (but robust) piece of code trying to achieve a very simple thing. For my own usage, I created some snippets that you might want to look at:
For using SSHHook independently of SSHOperator, have a look at ssh_utils.py
For an operator that runs multiple commands over SSH (you can achieve the same thing by using bash's && operator), see MultiCmdSSHOperator
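For the && approach, here is a minimal sketch; the task_id, conn id, and command are made up for illustration:

from airflow.contrib.operators.ssh_operator import SSHOperator

# two shell commands chained with &&, run as a single SSH task
multi_task = SSHOperator(task_id="multi_cmd",
                         ssh_conn_id="my_ssh_conn_id",
                         command="cd /tmp && echo airflow")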
Can anyone show me an example of a script that can be run from the sqoop2 client in batch mode?
I referred to http://sqoop.apache.org/docs/1.99.2/Sqoop5MinutesDemo.html,
which says we can run the sqoop2 client in batch mode using the following command:
sqoop.sh client /path/to/your/script.sqoop
but that script.sqoop isn't like a sqoop1 script, so what should it look like?
A batch file is nothing but a list of the same commands you would otherwise type in interactive mode (plus comment lines starting with a pound sign).
However! Some commands require manual input and thus cannot easily be fully automated (e.g., the 'create link' command). See this thread for details.
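So a hypothetical script.sqoop could be as simple as the following (the commands are taken from the interactive demo):

# script.sqoop - lines starting with a pound sign are comments
show version --all
show connector --all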
I have been using Salt for the last month. Whenever I run a command, say sudo salt '*' test.ping, the master pings all the minions, and the response is the list of all the minions which are up and running. The output looks something like:
{
    "minion_1": true
}
{
    "minion_2": true
}
{
    "minion_3": true
}
In the master's conf file, the return type is configured to JSON.
But if I execute an incorrect command through the salt master, say sudo salt '*' test1.ping, then the master returns something like this:
{
    "minion_1": "'test1.ping' is not available."
}
{
    "minion_2": "'test1.ping' is not available."
}
{
    "minion_3": "'test1.ping' is not available."
}
In both of the outputs displayed above, the command exits with a success code on the master's shell/terminal. How do we track which minions were not able to execute the command? I am not interested in what type of error it is; I just need some way to track the minions which failed to execute the command.
The last resort is to write a parser which reads the complete output and decides for itself. I hope there is a better solution.
Reasons to despair
I would not rely on Salt's CLI exit code at the moment (version 2014.7.5) - there are still many open issues about solving this.
Get valid JSON output
There is a --static option which fixes the JSON output:
If using --out=json, you will probably want --static as well. Without the static option, you will get a JSON string for each minion.
Otherwise the output given by Salt above contains multiple objects (one per minion), which is not valid JSON (JSON requires a single object, array, or value per document), and the simple approach of loading the entire output with a standard JSON parser will fail. It is even mentioned in the documentation (as of 5188d6c):
Some JSON parsers can guess when an object ends and a new one begins but many can not.
In addition to that, some Salt options (like show_jid) also send strings to STDOUT, which mixes them with the execution report and invalidates the JSON output format. The --static option also solves this problem.
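For example, an invocation that yields a single parseable JSON document would be:

sudo salt '*' test.ping --out=json --static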
UPDATE: Parser to detect failure in Salt execution
This problem squeezed me so much that I quickly wrote this Python script (75e42af), with an example of how it is used (b819961d).
NOTE: This won't address the output of an arbitrary Salt command (including test.ping above), but issues related to the output of state execution are covered. There is still a solution to the test.ping problem above - it can be run from a state, and then the output can be analysed by the script. See how to call an execution module from within a state or *.sls file in this answer.
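For completeness, a minimal *.sls sketch wrapping test.ping in a state (the state id ping_all is arbitrary; module.run is the standard state for calling an execution module):

ping_all:
  module.run:
    - name: test.ping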
Features (details in the code itself):
Handle output from both highstate and orchestrate runners.
Handle output of multiple minions and any number of commands.
Report summary "? of N" and overall result.
Standalone file usable as script and module.
The only limitation is that it requires JSON output (Salt option --out json), simply because that makes it easy to fix the discussed issues before feeding the output to the parser.
The above parser will only work for the test.ping command. If multiple commands have to be executed, we will have to write a more robust parser, as sketched below.
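As a starting point, here is a minimal sketch of such a parser for boolean-style returns like test.ping. It assumes the output was produced with --out=json --static; state output would need deeper traversal of the nested result dictionaries:

import json
import sys

def failed_minions(raw):
    # --static guarantees a single JSON document with one key per minion
    results = json.loads(raw)
    # test.ping returns true on success; failures come back as error strings
    return [minion for minion, result in results.items() if result is not True]

if __name__ == "__main__":
    failed = failed_minions(sys.stdin.read())
    for minion in failed:
        print("FAILED: " + minion)
    sys.exit(1 if failed else 0)

Usage (the script name is hypothetical): sudo salt '*' test.ping --out=json --static | python check_minions.py; the shell exit code will then actually reflect minion failures.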
Can anyone recommend a fairly clean method for determining, at run time, the operating system of the process scheduler an App Engine program is running on (NT or Unix)? I need to set a file path that is obviously dependent upon the server the process is being executed on. I understand the GetEnv function can be used, but I don't want to set an environment variable for this particular instance (the path does not reside under PS_FILES). I've searched PeopleBooks for any kind of built-in function or system variable, but was not successful (obviously).
Any suggestions would be appreciated.
Thanks
Okay, I may have asked this question a little too early. I apologize.
It looks like I'll just be able to query the process request table to pull back the server name:
SQLExec("SELECT SERVERNAMERUN FROM PSPRCSRQST WHERE PRCSINSTANCE = :1", &thisProcess, &server);
Evaluate &server
When.......
End-Evaluate;
Exactly :-)
There are a host of Process Request records that can give you the information you need. Glad that you found it.
John