taskscheduleR ERROR: The system cannot find the file specified

I tried to add a scheduled task with taskscheduleR from RStudio, but it gives this error:
Warning: running command 'schtasks /Delete /TN "script1.R" /F' had status 1
Creating task schedule: schtasks /Create /TN "script1.R" /TR "cmd /c C:/PROGRA~1/MICROS~3/srv~1/R_SERVER/bin/Rscript.exe \"C:/Users/user1/Desktop/script1.R\" >> \"C:/Users/user1/Desktop/script1.log\" 2>&1" /SC ONCE /ST 23:40
how can I solve it?

This might have to do with the locale settings. I faced the same issue, and as a workaround I used taskscheduleR to set up the job for a future date (01-01-2020). The job was created successfully. Then I went to Windows Task Scheduler and changed the start date to the one I actually wanted (e.g. 19-09-2019).
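The same date fix can be scripted instead of going through the Task Scheduler GUI. A sketch using schtasks /Change with the task name from the question; note that the /SD value must match your system locale's date format, which is likely the root cause of the original failure:
schtasks /Change /TN "script1.R" /SD 19/09/2019 /ST 23:40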

Related

Restart Autosys job when terminated via another Autosys job

I am setting up an Autosys job to restart another job when the main job is terminated.
Insert_job: Job_Restarter
Job_type: cmd
Condition: t(main_job)
Box_name: my_test_box
Permission: gx,get
Command: send -E FORCE_STARTJOB -J main_job
When the main job is terminated, the restart job runs but fails with an error code of 1. I know this is a generic error code, but does anyone have an idea of what I am doing wrong?
Edit:
Did some digging: "sendevent" is not recognized as a command. Is there another way to restart the job through another job?
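"sendevent" not being found usually means the Autosys client environment is not loaded in the shell that runs the job. A sketch of that kind of fix, sourcing the instance environment file before calling sendevent; the path and instance name here are placeholders, not taken from the original post:
command: . /opt/CA/WorkloadAutomationAE/autouser.ACE/autosys.sh && sendevent -E FORCE_STARTJOB -J main_job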

Error while running a slurm job through crontab which uses Intel MPI

I am trying to run WRF (real.exe, wrf.exe) through crontab on the compute nodes, but the compute nodes are not able to run the Slurm job. I think there is some issue with the MPI library when it runs through the cron environment.
I tried to replicate the terminal's environment variables while running the crontab job.
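For illustration only, the kind of crontab entry involved would look like this; the schedule, script path, and the oneAPI setvars.sh location are placeholders, not taken from the original post:
0 6 * * * . /etc/profile; . /opt/intel/oneapi/setvars.sh; /home/user/wrf/run_wrf.sh >> /home/user/wrf/cron.log 2>&1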
The log files generated while running the job from the terminal and from crontab are attached as with_terminal and with_crontab respectively in the link:
https://drive.google.com/drive/folders/1YE9OchSB8alpZSdRl-8uIbBPm6lI-0DQ
The error while running the job from crontab is as follows:
#################################
compute-dy-c5n18xlarge-1
#################################
Processes 4
[mpiexec#compute-dy-c5n18xlarge-1] check_exit_codes (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:117): unable to run bstrap_proxy on compute-dy-c5n18xlarge-1 (pid 23070, exit code 65280)
[mpiexec#compute-dy-c5n18xlarge-1] poll_for_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:159): check exit codes error
[mpiexec#compute-dy-c5n18xlarge-1] HYD_dmx_poll_wait_for_proxy_event (../../../../../src/pm/i_hydra/libhydra/demux/hydra_demux_poll.c:212): poll for event error
[mpiexec#compute-dy-c5n18xlarge-1] HYD_bstrap_setup (../../../../../src/pm/i_hydra/libhydra/bstrap/src/intel/i_hydra_bstrap.c:772): error waiting for event
[mpiexec#compute-dy-c5n18xlarge-1] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1938): error setting up the boostrap proxies
Thanks for looking into the issue.

How to make SCHTASKS run command?

I am new to SCHTASKS and cannot get it to run a simple command. What I tried is below; the file c:\downloads\temp_w.txt is never created. What is wrong?
c:\downloads>ver
Microsoft Windows [Version 10.0.17763.503]
c:\downloads>SCHTASKS /Create /SC MINUTE /MO 1 /TN mydir /TR "dir c:\windows > c:\downloads\temp_w.txt"
WARNING: The task name "mydir" already exists. Do you want to replace it (Y/N)? y
SUCCESS: The scheduled task "mydir" has successfully been created.
c:\downloads>schtasks /run /tn mydir
SUCCESS: Attempted to run the scheduled task "mydir".
c:\downloads>dir temp_w.txt
Volume in drive C has no label.
Volume Serial Number is ECC7-1C96
Directory of c:\downloads
File Not Found
DIR is an internal command, so it exists only inside cmd.exe and cannot be launched as a program by the Task Scheduler (see: https://ss64.com/nt/syntax-internal.html).
The following should work (untested):
c:\>SCHTASKS /Create /SC MINUTE /MO 1 /TN mydir /TR "cmd /c dir c:\windows > c:\downloads\temp_w.txt"
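To check what the task will actually execute after recreating it, you can query it and inspect the "Task To Run" field, which should now show the cmd /c wrapper; this is a standard schtasks invocation added here as a sanity check, not something from the original thread:
c:\>SCHTASKS /Query /TN mydir /V /FO LIST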

net.corda.core.serialization.SerializationWhitelist: Error reading configuration file

I downloaded the CorDapp template (Java) from https://github.com/corda/cordapp-template-java.
Every time I make a change to the project, gradlew deployNodes fails with the error below. However, it gets resolved automatically once I restart my system.
Is there anything I am missing?
> Configure project :
Gradle now uses separate output directories for each JVM language, but this build assumes a single directory for all classes from a source set. This behaviour has been deprecated and is scheduled to be removed in Gradle 5.0
at build_d668pifueefmtb65xfqnh374z$_run_closure5.doCall(C:\Users\amit.pamecha\Documents\workspace\abcdwork\capital-coin\cordapp-template-java\build.gradle:83)
The setTestClassesDir(File) method has been deprecated and is scheduled to be removed in Gradle 5.0. Please use the setTestClassesDirs(FileCollection) method instead.
at build_d668pifueefmtb65xfqnh374z$_run_closure5.doCall(C:\Users\amit.pamecha\Documents\workspace\abcdwork\capital-coin\cordapp-template-java\build.gradle:83)
> Task :deployNodes
Bootstrapping local network in C:\Users\amit.pamecha\Documents\workspace\abcdwork\capital-coin\cordapp-template-java\build\nodes
Node config files found in the root directory - generating node directories
Generating directory for Notary
Generating directory for PartyA
Generating directory for PartyB
Nodes found in the following sub-directories: [Notary, PartyA, PartyB]
Waiting for all nodes to generate their node-info files...
Distributing all node info-files to all nodes
Gathering notary identities
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':deployNodes'.
> net.corda.core.serialization.SerializationWhitelist: Error reading configuration file
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
* Get more help at https://help.gradle.org
BUILD FAILED in 26s
12 actionable tasks: 4 executed, 8 up-to-date
This is caused by a stale Gradle process. You need to kill this process.
You can use killall -9 java or pkill java on Unix, or wmic process where "name like '%java%'" delete on Windows, to kill all Java processes.
Or you can target the specific process:
lsof -nP +c 15 | grep LISTEN to find listening processes and their ports
ps ax | grep <pid> to confirm the command line of the process
kill -9 <pid>
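Since the build in the question is running on Windows (note the C:\Users paths), the simpler equivalent there is taskkill; like the wmic variant, this kills every running Java process, so close anything else JVM-based first:
taskkill /F /IM java.exe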

How to clear failed DAGs using the CLI in Airflow?

I have some failed DAGs, let's say from 1st-Feb to 20th-Feb. From that date onward, all of them succeeded.
I tried to use the CLI (instead of doing it twenty times with the web UI):
airflow clear -f -t * my_dags.my_dag_id
But I got a weird error:
airflow: error: unrecognized arguments: airflow-webserver.pid airflow.cfg airflow_variables.json my_dags.my_dag_id
EDIT 1:
As @tobi6 explained, the * was indeed causing trouble.
Knowing that, I tried this command instead:
airflow clear -u -d -f -t ".*" my_dags.my_dag_id
but it only returns failed task instances (the -f flag). The -d and -u flags don't seem to work, because task instances downstream and upstream of the failed ones are ignored (not returned).
EDIT 2:
As @tobi6 suggested, using -s and -e makes it possible to select all DAG runs within a date range. Here is the command:
airflow clear -s "2018-04-01 00:00:00" -e "2018-04-01 00:00:00" my_dags.my_dag_id
However, adding the -f flag to the command above still only returns failed task instances. Is it possible to select all failed task instances of all failed DAG runs within a date range?
If you use an asterisk * in the Linux Bash, the shell automatically expands it before your program runs.
That means it replaces the asterisk with the names of all files in the current working directory and then executes your command, which is exactly what the "unrecognized arguments" error above shows.
Quoting the pattern avoids the automatic expansion:
airflow clear -f -t "*" my_dags.my_dag_id
One solution I've found so far is executing SQL directly (MySQL in my case):
update task_instance t
left join dag_run d
  on d.dag_id = t.dag_id and d.execution_date = t.execution_date
set t.state = null,
    d.state = 'running'
where t.dag_id = '<your_dag_id>'
  and t.execution_date > '2020-08-07 23:00:00'
  and d.state = 'failed';
It clears all task states on failed dag_runs, as if the 'Clear' button had been pressed for the entire DAG run in the web UI.
In Airflow 2.2.4 the airflow clear command was deprecated.
You can now run:
airflow tasks clear -s <your_start_date> -e <end_date> <dag_id>
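Applied to the question's range (assuming the year 2018, as in EDIT 2), with --only-failed to restrict the clear to failed task instances, it would look something like this; treat it as a sketch, since flag availability depends on your Airflow 2.x version:
airflow tasks clear -s 2018-02-01 -e 2018-02-20 --only-failed my_dags.my_dag_id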
