Task not visible in Task Scheduler after using schtasks

schtasks /create /tn "Report monthly id 700" /tr "some path" /SC ONEVENT /EC Application /MO *[Application/EventID=700]
I get the message:
SUCCESS: The scheduled task "testReport monthly id 700" has successfully been created.
but I can't find it in Task Scheduler. Any suggestions?
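One quick sanity check is to query the task from the command line; if the query returns details, the task was created and the problem is only its visibility in the Task Scheduler GUI (task name taken from the create command above):
schtasks /query /tn "Report monthly id 700" /v /fo LIST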

I found an interesting article about the visibility of Scheduled Tasks:
https://michlstechblog.info/blog/windows-run-task-scheduler-task-as-limited-user/
With the code it mentions, the task becomes visible and even executable:
$scheduler = New-Object -ComObject "Schedule.Service"
$scheduler.Connect()
# Look the task up in the root task folder; the task name is the one from the question
$task = $scheduler.GetFolder("\").GetTask("Report monthly id 700")
# Fetch the security descriptor as an SDDL string and append an ACE
# granting read and execute (GRGX) to Authenticated Users (AU)
$sec = $task.GetSecurityDescriptor(0xF)
$sec = $sec + '(A;;GRGX;;;AU)'
$task.SetSecurityDescriptor($sec, 0)
Maybe it will help someone else.
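For convenience, the same fix can be wrapped in a small function; a sketch, with the function name and parameters mine rather than the article's:
function Grant-TaskReadAccess
{
    param([string]$TaskName, [string]$TaskPath = "\")

    $scheduler = New-Object -ComObject "Schedule.Service"
    $scheduler.Connect()
    $task = $scheduler.GetFolder($TaskPath).GetTask($TaskName)

    # Append an ACE granting read and execute to Authenticated Users
    # to the task's SDDL security descriptor
    $sec = $task.GetSecurityDescriptor(0xF)
    $task.SetSecurityDescriptor($sec + '(A;;GRGX;;;AU)', 0)
}

Grant-TaskReadAccess -TaskName "Report monthly id 700"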

Related

Airflow: Issues with Calling TaskGroup

I am having issues with calling TaskGroups: the error log thinks my job id is avg_speed_20220502_22c11bdf instead of just avg_speed, and I can't figure out why.
Here's my code:
with DAG(
    'debug_bigquery_data_analytics',
    catchup=False,
    default_args=default_arguments) as dag:

    # Note to self: the bucket region and the dataproc cluster should be in the same region
    create_cluster = DataprocCreateClusterOperator(
        task_id='create_cluster',
        ...
    )

    with TaskGroup(group_id='weekday_analytics') as weekday_analytics:
        avg_temperature = DummyOperator(task_id='avg_temperature')
        avg_tire_pressure = DummyOperator(task_id='avg_tire_pressure')
        avg_speed = DataprocSubmitPySparkJobOperator(
            task_id='avg_speed',
            project_id='...',
            main=f'gs://.../.../avg_speed.py',
            cluster_name=f'spark-cluster-{{ ds_nodash }}',
            region='...',
            dataproc_jars=['gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar'],
        )
        avg_temperature >> avg_tire_pressure >> avg_speed

    delete_cluster = DataprocDeleteClusterOperator(
        task_id='delete_cluster',
        project_id='...',
        cluster_name='spark-cluster-{{ ds_nodash }}',
        region='...',
        trigger_rule='all_done',
    )

    create_cluster >> weekday_analytics >> delete_cluster
Here's the error message I get:
google.api_core.exceptions.InvalidArgument: 400 Job id 'weekday_analytics.avg_speed_20220502_22c11bdf' must conform to '[a-zA-Z0-9]([a-zA-Z0-9\-\_]{0,98}[a-zA-Z0-9])?' pattern
[2022-05-02, 11:46:11 UTC] {taskinstance.py:1278} INFO - Marking task as FAILED. dag_id=debug_bigquery_data_analytics, task_id=weekday_analytics.avg_speed, execution_date=20220502T184410, start_date=20220502T184610, end_date=20220502T184611
[2022-05-02, 11:46:11 UTC] {standard_task_runner.py:93} ERROR - Failed to execute job 549 for task weekday_analytics.avg_speed (400 Job id 'weekday_analytics.avg_speed_20220502_22c11bdf' must conform to '[a-zA-Z0-9]([a-zA-Z0-9\-\_]{0,98}[a-zA-Z0-9])?' pattern; 18116)
[2022-05-02, 11:46:11 UTC] {local_task_job.py:154} INFO - Task exited with return code 1
[2022-05-02, 11:46:11 UTC] {local_task_job.py:264} INFO - 1 downstream tasks scheduled from follow-on schedule check
In Airflow, the task identifier is task_id. However, when using TaskGroups you can have the same task_id in different groups, so tasks defined inside a task group get the identifier group_id.task_id.
For apache-airflow-providers-google>7.0.0:
The bug has been fixed. It should work now.
For apache-airflow-providers-google<=7.0.0:
You are having issues because DataprocJobBaseOperator has:
:param job_name: The job name used in the DataProc cluster. This name by default
is the task_id appended with the execution data, but can be templated. The
name will always be appended with a random number to avoid name clashes.
The problem is that Airflow adds the . character and Google doesn't accept it, so to fix your issue you must override the default of the job_name parameter with a string of your choice. You can set it to the task_id if you wish (e.g. job_name='avg_speed').
I opened https://github.com/apache/airflow/issues/23439 to report this bug; in the meantime, you can follow the suggestion above.

R code runs well on one computer but not on another (via Task Scheduler in RScript.exe)

My issue is: when I run the following code through RScript.exe via Task Scheduler on one laptop, I get the desired output; that is, the email is sent. But when I run the same code on another machine (machine 2) through RScript.exe via Task Scheduler, it doesn't run. Machine 2 is able to send emails when only the email code is run, so I think the issue is with the following part.
results <- get_everything(query = q, page = 1, page_size = 2, language = "en", sort_by = "popularity", from = Yest, to = Today)
I am unable to find what the issue is here. Can someone please help me with this?
My code is:
library(readxl)
library(float)
library(tibble)
library(stringr)
library(data.table)
library(gt)
library(tidyquant)
library(condformat)
library(xtable)
library(plyr)
library(dplyr)
library(newsanchor)
library(blastula)
Today <- Sys.Date()
Yest <- Sys.Date()-1
results <- get_everything(query = "Inflation", page = 1, page_size = 2, language = "en",
                          sort_by = "popularity", from = Yest, to = Today,
                          api_key = Sys.getenv("NEWS_API_KEY"))
OP <- results$results_df
OP <- OP[-c(1, 5:9)]
colnames(OP) <- c("News Title", "Description", "URL")
W <- print(xtable(OP), type="html", print.results=FALSE, align = "l")
email1 <-
  compose_email(
    body = md(
      c("<tr>", "<td>", "<table>", "<tr>", "<td>", "<b>", "Losers News", "</b>", W,
        "</td>", "</tr>", "</table>", "</td>", "<td>")
    )
  )
email1 %>%
  smtp_send(
    from = "abc@domain.com",
    to = "pqr@domain.com",
    subject = "Hello",
    credentials = creds_key("XYZ")
  )
Whenever you schedule jobs, consider using a command-line shell such as PowerShell or Bash to handle the automation steps and to capture and log errors and messages. Rscript fails on the second machine for some unknown reason which you cannot determine, since you do not receive any console error messages through Task Scheduler.
Therefore, consider using PowerShell to run all needed Rscript.exe calls and other commands, capturing all output to a date-stamped log file. The script below redirects all console output to a .log file. When the Rscript command fails, the log will contain the error or any console output (e.g., head, tail) below it. Regularly check the logs after scheduled jobs run.
PowerShell script (save as .ps1 file)
cd "C:\path\to\scripts"
& {
echo "`nAutomation Start: $(Get-Date -format 'u')"
echo "`nSTEP 1: myscript.R - $(Get-Date -format 'u')"
Rscript myscript.R
# ... ADD ANY OTHER COMMANDS ...
echo "`nCAutomation End: $(Get-Date -format 'u')"
} 3>&1 2>&1 > "C:\path\to\logs\automation_run_$(Get-Date -format 'yyyyMMdd').log"
Command Line (to be used in Task Scheduler)
Powershell.exe -executionpolicy remotesigned -File myscheduler.ps1
Note: either set the working directory of the Task Scheduler job to where myscheduler.ps1 resides, or pass an absolute path in the -File argument.
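If you prefer to register the Task Scheduler job itself from the command line, schtasks can do that too; a minimal sketch, assuming a daily 6:00 run and that the script lives at C:\path\to\scripts\myscheduler.ps1 (the task name and schedule are placeholders):
schtasks /create /tn "R Automation" /sc DAILY /st 06:00 /tr "Powershell.exe -ExecutionPolicy RemoteSigned -File C:\path\to\scripts\myscheduler.ps1"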

Using powershell loop with net.exe command error

I'm trying to use PowerShell to write a script that calls net.exe's session delete on a collection of computers meeting the specific case of having 3 or fewer files open. I'm fairly new at this, obviously, as I'm getting odd errors.
Using the example at Microsoft's blog, I made the function below out of net session.
Function Get-ActiveNetSessions
{
    # Converts the output of the net session command into PSObjects
    $output = net session | Select-String -Pattern "\\\\"
    $output | foreach {
        $parts = $_ -split "\s+", 4
        New-Object -Type PSObject -Property @{
            Computer = $parts[0].ToString();
            Username = $parts[1];
            Opens    = $parts[2];
            IdleTime = $parts[3];
        }
    }
}
which does produce a workable object that I can apply logic to.
I can use
$computerList = Get-ActiveNetSessions | Where-Object {$_.Opens -clt 3} | Select-Object {$_.Computer}
to pull all computers with fewer than three opens into a variable, too.
What fails is the loop below
ForEach($computer in $computerList)
{
    net session $computer /delete
}
with the error
net : The syntax of this command is:
At line:5 char:5
net session $computer /delete
CategoryInfo :NotSpecified (The syntax of this command is::String) [], RemoteException
FullyQualifiedErrorId : NativeCommandError
NET SESSION
[\\computername] [/DELETE] [/LIST]
Trying to run it with a call of $computer = $computer.ToString() ahead of the execution so it sees a string causes the script to hang without dropping the sessions, forcing me to close and reopen the ISE.
What should I do to get this loop working? Any help is appreciated.
It looks like net session expects \\ before the server name. Have you given that a try?
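For what it's worth, here is a minimal sketch of the selection and loop with that fix applied. It also expands the Computer property to plain strings (Select-Object {$_.Computer} yields objects with an oddly named property rather than strings) and casts Opens to [int], since the function stores it as a string and -clt compares character by character:
$computerList = Get-ActiveNetSessions |
    Where-Object { [int]$_.Opens -lt 3 } |
    Select-Object -ExpandProperty Computer

foreach ($computer in $computerList)
{
    # net session wants the \\ prefix on the computer name; it may also
    # prompt for confirmation if the session has open files, which hangs
    # in the ISE, so run this from a regular console
    net session "\\$computer" /delete
}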

Time command equivalent in PowerShell

What is the flow of execution of the time command in detail?
I have a user created function in PowerShell, which will compute the time for execution of the command in the following way.
It will open the new PowerShell window.
It will execute the command.
It will close the PowerShell window.
It will get the different execution times using the GetProcessTimes function.
Is the "time command" in Unix also calculated in the same way?
The Measure-Command cmdlet is your friend.
PS> Measure-Command -Expression {dir}
You could also get execution time from the command history (last executed command in this example):
$h = Get-History -Count 1
$h.EndExecutionTime - $h.StartExecutionTime
I've been doing this:
Time {npm --version ; node --version}
With this function, which you can put in your $profile file:
function Time([scriptblock]$scriptblock, $name)
{
<#
.SYNOPSIS
Run the given script block, and say how long it took at the end.
.PARAMETER scriptBlock
The script block to execute and time.
.PARAMETER name
Use this for long script blocks to avoid quoting the entire script block in the final output line.
.EXAMPLE
time { ls -recurse }
.EXAMPLE
time { ls -recurse } "All the things"
#>
    if (!$stopWatch)
    {
        $script:stopWatch = New-Object System.Diagnostics.Stopwatch
    }
    $stopWatch.Reset()
    $stopWatch.Start()
    . $scriptblock
    $stopWatch.Stop()
    if ($null -eq $name) {
        $name = "$scriptblock"
    }
    "Execution time: $($stopWatch.ElapsedMilliseconds) ms for $name"
}
Measure-Command works, but it swallows the stdout of the command being run. (See also Timing a command's execution in PowerShell.)
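If you want to see the output while still timing it, one common workaround is to pipe to Out-Default inside the measured script block, which sends the output to the host instead of letting Measure-Command capture it:
Measure-Command { dir | Out-Default }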
If you need to measure the time taken by something, you can follow this blog entry.
Basically, it suggests using the .NET Stopwatch class:
$sw = [System.Diagnostics.Stopwatch]::StartNew()
# The code you measure
$sw.Stop()
Write-Host $sw.Elapsed

How to get grunt task name given on command line?

In order to customize my grunt tasks, I need access to the grunt task name given on the command line when starting grunt.
The options are no problem, since they're well documented (grunt.option).
It's also well documented how to find out the task name, when running a grunt task.
But I need access to the task name before.
E.g., the user writes
grunt build --target=client
When configuring the grunt job in my Gruntfile.js, I can use
grunt.option('target') to get 'client'.
But how do I get hold of the parameter build before the build task starts?
Any guidance is much appreciated!
Your grunt file is basically just a function. Try adding these lines to the top:
module.exports = function( grunt ) {
/*==> */ console.log(grunt.option('target'));
/*==> */ console.log(grunt.cli.tasks);
// Add your pre task code here...
Running with grunt build --target=client should give you the output:
client
[ 'build' ]
At that point, you can run any code you need to before your task is run including setting values with new dependencies.
A better way is to use grunt.task.current, which has information about the currently running task, including a name property. Within a task, the context (i.e. this) is the same object. So:
grunt.registerTask('foo', 'Foobar all the things', function() {
console.log(grunt.task.current.name); // foo
console.log(this.name); // foo
console.log(this === grunt.task.current); // true
});
If build is an alias to other tasks and you just want to know what command was typed that led to the current task execution, I typically use process.argv[2]. If you examine process.argv, you'll see that argv[0] is node (because grunt is a node process), argv[1] is grunt, and argv[2] is the actual grunt task (followed by any params in the remainder of argv).
EDIT:
Example output from console.log(grunt.task.current) on grunt@0.4.5 from within a task (there is no current task outside of a running task).
{
nameArgs: 'server:dev',
name: 'server',
args: [],
flags: {},
async: [Function],
errorCount: [Getter],
requires: [Function],
requiresConfig: [Function],
options: [Function],
target: 'dev',
data: { options: { debugPort: 5858, cwd: 'server' } },
files: [],
filesSrc: [Getter]
}
You can use grunt.util.hooker.hook for this.
Example (part of Gruntfile.coffee):
grunt.util.hooker.hook grunt.task, (opt) ->
if grunt.task.current and grunt.task.current.nameArgs
console.log "Task to run: " + grunt.task.current.nameArgs
CMD:
C:\some_dir>grunt concat --cmp my_cmp
Task to run: concat
Running "concat:coffee" (concat) task
Task to run: concat:coffee
File "core.coffee" created.
Done, without errors.
There is also a hack that I've used to prevent certain task execution:
grunt.util.hooker.hook grunt.task, (opt) ->
if grunt.task.current and grunt.task.current.nameArgs
console.log "Task to run: " + grunt.task.current.nameArgs
if grunt.task.current.nameArgs is "<some task you don't want user to run>"
console.log "Ooooh, not <doing smth> today :("
exit() # Not valid. Don't know how to exit :), but will stop grunt anyway
CMD, when allowed:
C:\some_dir>grunt concat:coffee --cmp my_cmp
Running "concat:coffee" (concat) task
Task to run: concat:coffee
File "core.coffee" created.
Done, without errors.
CMD, when prevented:
C:\some_dir>grunt concat:coffee --cmp my_cmp
Running "concat:coffee" (concat) task
Task to run: concat:coffee
Ooooh, not concating today :(
Warning: exit is not defined Use --force to continue.
Aborted due to warnings.
