Time command equivalent in PowerShell - unix

What is the flow of execution of the time command in detail?
I have a user-created function in PowerShell that computes the execution time of a command in the following way:
It will open the new PowerShell window.
It will execute the command.
It will close the PowerShell window.
It will get the different execution times using the GetProcessTimes function.
Is the time command in Unix also calculated in the same way?

The Measure-Command cmdlet is your friend.
PS> Measure-Command -Expression {dir}
You could also get execution time from the command history (last executed command in this example):
$h = Get-History -Count 1
$h.EndExecutionTime - $h.StartExecutionTime

I've been doing this:
Time {npm --version ; node --version}
With this function, which you can put in your $profile file:
function Time([scriptblock]$scriptblock, $name)
{
<#
.SYNOPSIS
Run the given scriptblock, and say how long it took at the end.
.DESCRIPTION
.PARAMETER scriptBlock
The script block to run and time.
.PARAMETER name
Use this for long scriptBlocks to avoid quoting the entire script block in the final output line
.EXAMPLE
time { ls -recurse}
.EXAMPLE
time { ls -recurse} "All the things"
#>
    if (!$stopWatch)
    {
        $script:stopWatch = new-object System.Diagnostics.StopWatch
    }
    $stopWatch.Reset()
    $stopWatch.Start()
    . $scriptblock
    $stopWatch.Stop()
    if ($name -eq $null) {
        $name = "$scriptblock"
    }
    "Execution time: $($stopWatch.ElapsedMilliseconds) ms for $name"
}
Measure-Command works, but it swallows the stdout of the command being run. (Also see Timing a command's execution in PowerShell)
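If you still want to see the command's output while timing it, one workaround (not shown in the answers above, just a sketch) is to pipe to Out-Default inside the script block, so the output is written to the host while Measure-Command still returns its TimeSpan:
PS> Measure-Command -Expression { dir | Out-Default }
The directory listing is displayed as usual, and the TimeSpan result still comes back from Measure-Command afterwards.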

If you need to measure the time taken by something, you can follow this blog entry.
Basically, it suggests using the .NET Stopwatch class:
$sw = [System.Diagnostics.Stopwatch]::StartNew()
# The code you measure
$sw.Stop()
Write-Host $sw.Elapsed

Related

path not being detected by Nextflow

I'm new to nf-core/Nextflow and, needless to say, the documentation does not always reflect what is actually implemented. I'm defining the basic pipeline below:
nextflow.enable.dsl=2
process RUNBLAST {
    input:
    val thr
    path query
    path db
    path output

    output:
    path output

    script:
    """
    blastn -query ${query} -db ${db} -out ${output} -num_threads ${thr}
    """
}

workflow {
    //println "I want to BLAST $params.query to $params.dbDir/$params.dbName using $params.threads CPUs and output it to $params.outdir"
    RUNBLAST(params.threads, params.query, params.dbDir, params.output)
}
Then I'm executing the pipeline with
nextflow run main.nf --query test2.fa --dbDir blast/blastDB
Then I get the following error:
N E X T F L O W ~ version 22.10.6
Launching `main.nf` [dreamy_hugle] DSL2 - revision: c388cf8f31
Error executing process > 'RUNBLAST'
Caused by:
Not a valid path value: 'test2.fa'
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
I know test2.fa exists in the current directory:
(nfcore) MN:nf-core-basicblast jraygozagaray$ ls
CHANGELOG.md conf other.nf
CITATIONS.md docs pyproject.toml
CODE_OF_CONDUCT.md lib subworkflows
LICENSE main.nf test.fa
README.md modules test2.fa
assets modules.json work
bin nextflow.config workflows
blast nextflow_schema.json
I also tried "file" instead of path, but that is deprecated and raises other kinds of errors.
It would be helpful to know how to fix this so I can get started with the pipeline building process.
Shouldn't Nextflow copy the file to the execution path?
Thanks
You get the above error because params.query is not actually a path value. It's probably just a simple String or GString. The solution is to instead supply a file object, for example:
workflow {
query = file(params.query)
BLAST( query, ... )
}
Note that a value channel is implicitly created by a process when it is invoked with a simple value, like the above file object. If you need to be able to BLAST multiple query files, you'll instead need a queue channel, which can be created using the fromPath factory method, for example:
params.query = "${baseDir}/data/*.fa"
params.db = "${baseDir}/blastdb/nt"
params.outdir = './results'
db_name = file(params.db).name
db_path = file(params.db).parent
process BLAST {
    publishDir(
        path: "${params.outdir}/blast",
        mode: 'copy',
    )

    input:
    tuple val(query_id), path(query)
    path db

    output:
    tuple val(query_id), path("${query_id}.out")

    """
    blastn \\
        -num_threads ${task.cpus} \\
        -query "${query}" \\
        -db "${db}/${db_name}" \\
        -out "${query_id}.out"
    """
}

workflow {
    Channel
        .fromPath( params.query )
        .map { file -> tuple(file.baseName, file) }
        .set { query_ch }

    BLAST( query_ch, db_path )
}
Note that the usual way to specify the number of threads/CPUs is using the cpus directive, which can be configured using a process selector in your nextflow.config. For example:
process {
    withName: BLAST {
        cpus = 4
    }
}

zsh: Do I need to close file descriptors?

I use the following code to both output something to stdout, and pipe it to a program:
function example() {
    local fd1
    {
        exec {fd1}>&1
        { echo hi >&$fd1 } | true
    } always { exec {fd1}>&- }
}
I am wondering if I can safely drop always { exec {fd1}>&- }. fd1 goes out of scope after the function finishes anyway.
You need to keep always { exec {fd1}>&- }. If you get rid of that, the variable containing the file descriptor will go out of scope, but the file descriptor won't be closed, resulting in leaking it. You can see this by doing ls -l /proc/$$/fd before and after running your function without that line. Each run of the function will permanently add another FD to that list. Eventually, you'll run out of file descriptors and won't be able to open any new ones, which will break things.
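A quick way to see the leak (a sketch of the check described above, assuming a Linux system where /proc is available):
ls -l /proc/$$/fd      # note which descriptors are open
example                # run the function with the always block removed
ls -l /proc/$$/fd      # one extra descriptor now; each call adds another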

Signal being forwarded to children for the symfony process component

I'm trying to write a small script that will manage a series of background processes using the symfony component Process (http://symfony.com/doc/current/components/process.html).
For this to work correctly I would like to handle signals sent to the main process, mainly SIGINT (ctrl+c).
When the main process gets this signal, it should stop starting new processes, wait for all current processes to exit and then exit itself.
I successfully catch the signal in the main process, but the problem is that the child processes get the signal too and exit immediately.
Is there any way of changing this behavior or interrupting this signal?
This is my example script to demonstrate the behavior.
#!/usr/bin/env php
<?php
require_once __DIR__ . "/vendor/autoload.php";
use Symfony\Component\Process\Process;
$process = new Process("sleep 10");
$process->start();
$exitHandler = function ($signo) use ($process) {
    print "Got signal {$signo}\n";
    while ($process->isRunning()) {
        usleep(10000);
    }
    exit;
};

pcntl_signal(SIGINT, $exitHandler);

while (true) {
    pcntl_signal_dispatch();
    sleep(1);
}
Running this script and sending the signal (pressing ctrl+c) will immediately stop the parent and child processes.
If I replace the while loop with the isRunning call, and the sleep with a call to the process's wait method, I get a RuntimeException saying: The process has been signaled with signal "2".
If I take a more manual approach and execute the child process with PHP's built-in exec, I get the behavior I want.
#!/usr/bin/env php
<?php
require_once __DIR__ . "/vendor/autoload.php";
exec(sprintf("%s > %s 2>&1 & echo $! >> %s", "sleep 10", "/dev/null", "/tmp/testscript.pid"));
$exitHandler = function ($signo) {
    print "Got signal {$signo}\n";
    $pid = file_get_contents("/tmp/testscript.pid");
    while (isRunning($pid)) {
        usleep(10000);
    }
    exit;
};

pcntl_signal(SIGINT, $exitHandler);

while (true) {
    pcntl_signal_dispatch();
    sleep(1);
}

function isRunning($pid) {
    try {
        $result = shell_exec(sprintf("ps %d", $pid));
        if (count(preg_split("/\n/", $result)) > 2) {
            return true;
        }
    } catch (Exception $e) {}
    return false;
}
In this case, when I send the signal, the main process waits for its child to finish before exiting.
Is there any way to get this behavior with the Symfony Process component?
It's not the behavior of Symfony's Process, but the behavior of ctrl+c in a UNIX terminal. When you press ctrl+c in the terminal, the signal is sent to the process group (the parent and all child processes).
The manual approach works because sleep isn't a child process. When you want to use Symfony's component, you can change the child's process group with posix_setpgid:
$otherGroup = posix_getpgid(posix_getppid());
posix_setpgid($process->getPid(), $otherGroup);
Then the signal won't be sent to $process. That's the only working solution I found when I recently tackled a similar problem.
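In the context of the question's script, the call order matters: getPid() only returns a value once the process has been started. A minimal sketch (keeping the question's signal handler unchanged):
$process = new Process("sleep 10");
$process->start();
// must run after start(), otherwise getPid() returns null
$otherGroup = posix_getpgid(posix_getppid());
posix_setpgid($process->getPid(), $otherGroup);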
Research
Sending signals to process group
A child process is created in the Symfony example. You can check it in the terminal:
# find pid of your script
ps -aux | grep "myscript.php"
# show process tree
pstree -a pid
# you will see that sleep is child process
php myscript.php
└─sh -c sleep 20
└─sleep 20
The signal sent to the process group is clearly visible when you print information about the process in $exitHandler:
$exitHandler = function ($signo) use ($process) {
    print "Got signal {$signo}\n";
    while ($process->isRunning()) {
        usleep(10000);
    }
    $isSignaled = $process->hasBeenSignaled() ? 'YES' : 'NO';
    echo "Signaled? {$isSignaled}\n";
    echo "Exit code: {$process->getExitCode()}\n\n";
    exit;
};
When you press ctrl+c or kill the process group:
# kill process group (like in ctrl+c)
kill -SIGINT -pid
# $exitHandler's output
Got signal 2
Signaled? YES
Exit code: 130
When the signal is sent only to the parent process, you'll get the expected behavior:
# kill only main process
kill -SIGINT pid
# $exitHandler's output
Got signal 2
Signaled? NO
Exit code: 0
Now the solution is obvious: don't create a child process, or change the process group, so the signal is sent only to the parent process.
Disadvantages of changing process group
Be aware of the consequences when a real child process isn't used: $process won't be terminated when the parent process is killed with SIGKILL. If $process is a long-running script, you could end up with multiple running instances after restarting the parent process. It's a good idea to check for running processes before starting $process.
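A minimal sketch of such a check, reusing the ps-based isRunning() helper from the question and a hypothetical PID file written by the previous run:
$pidFile = '/tmp/worker.pid'; // hypothetical location
if (is_file($pidFile) && isRunning((int) file_get_contents($pidFile))) {
    // a previous instance is still alive, don't start another one
    exit("Worker already running\n");
}
$process->start();
file_put_contents($pidFile, $process->getPid());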

Using powershell loop with net.exe command error

I'm trying to use PowerShell to write a script that calls net.exe's delete on a collection of computers meeting the specific case of having 3 or fewer files open. I'm fairly new at this, obviously, as I'm getting odd errors.
Using the example at Microsoft's blog, I made the function below out of net session.
Function Get-ActiveNetSessions
{
    # converts the output of the net session cmd into a PSObject
    $output = net session | Select-String -Pattern \\
    $output | foreach {
        $parts = $_ -split "\s+", 4
        New-Object -Type PSObject -Property @{
            Computer = $parts[0].ToString();
            Username = $parts[1];
            Opens = $parts[2];
            IdleTime = $parts[3];
        }
    }
}
which does produce a workable object that I can apply logic to.
I can use the following to pull all computers with less than three opens into a variable, too:
$computerList = Get-ActiveNetSessions | Where-Object {$_.Opens -clt 3} | Select-Object {$_.Computer}
What fails is the loop below
ForEach($computer in $computerList)
{
    net session $computer /delete
}
with the error
net : The syntax of this command is:
At line:5 char:5
net session $computer /delete
CategoryInfo :NotSpecified (The syntax of this command is::String) [], RemoteException
FullyQualifiedErrorId : NativeCommandError
NET SESSION
[\\computername] [/DELETE] [/LIST]
Trying to run it with a call of $computer = $computer.ToString() ahead of the execution so it sees a string causes the script to hang without dropping the sessions, forcing me to close and reopen the ISE.
What should I do to get this loop working? Any help is appreciated.
Net session expects a \\ before the server name, it looks like. Have you given that a try?
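For example (a sketch of that suggestion, assuming $computerList holds plain computer-name strings, e.g. collected with Select-Object -ExpandProperty Computer):
ForEach ($computer in $computerList)
{
    # prefix the bare name with \\ as net session expects
    net session "\\$computer" /delete
}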

zsh: update prompt with current time when a command is started

I have a zsh prompt I rather like: it evaluates the current time in precmd and displays that on the right side of the prompt:
[Floatie:~] ^_^
cbowns% [9:28:31 on 2012-10-29]
However, this isn't exactly what I want: as you can see below, this time is actually the time the previous command exited, not the time the command was started:
[Floatie:~] ^_^
cbowns% date [9:28:26 on 2012-10-29]
Mon Oct 29 09:28:31 PDT 2012
[Floatie:~] ^_^
cbowns% date [9:28:31 on 2012-10-29]
Mon Oct 29 09:28:37 PDT 2012
[Floatie:~] ^_^
cbowns% [9:28:37 on 2012-10-29]
Is there a hook in zsh to run a command just before the shell starts a new command so I can update the prompt timestamp then? (I saw Constantly updated clock in zsh prompt?, but I don't need it constantly updated, just updated when I hit enter.)
(The ^_^ is based on the previous command's return code. It shows ;_; in red when there's a nonzero exit status.)
This is in fact possible without resorting to strange hacks. I've got this in my .zshrc
RPROMPT='[%D{%L:%M:%S %p}]'
TMOUT=1
TRAPALRM() {
    zle reset-prompt
}
The TRAPALRM function gets called every TMOUT seconds (in this case 1), and here it performs a prompt refresh, and does so until a command starts execution (and it doesn't interfere with anything you type on the prompt before hitting enter). I know you don't need it constantly refreshed but it still gets the job done without needing a line for itself!
Source: http://www.zsh.org/mla/users/2007/msg00944.html (It's from 2007!)
I struggled a bit to make this:
It displays the date on the right side when the command has been executed.
It does not overwrite the command shown.
Warning: it may overwrite the current RPROMPT.
strlen () {
    FOO=$1
    local zero='%([BSUbfksu]|([FB]|){*})'
    LEN=${#${(S%%)FOO//$~zero/}}
    echo $LEN
}

# show right prompt with date ONLY when command is executed
preexec () {
    DATE=$( date +"[%H:%M:%S]" )
    local len_right=$( strlen "$DATE" )
    len_right=$(( $len_right+1 ))

    local right_start=$(($COLUMNS - $len_right))

    local len_cmd=$( strlen "$#" )
    local len_prompt=$(strlen "$PROMPT" )
    local len_left=$(($len_cmd+$len_prompt))

    RDATE="\033[${right_start}C ${DATE}"

    if [ $len_left -lt $right_start ]; then
        # command does not overwrite right prompt
        # ok to move up one line
        echo -e "\033[1A${RDATE}"
    else
        echo -e "${RDATE}"
    fi
}
Sources:
http://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x361.html
https://stackoverflow.com/a/10564427/238913
You can remap the Return key to reset the prompt before accepting the line:
reset-prompt-and-accept-line() {
    zle reset-prompt
    zle accept-line
}
zle -N reset-prompt-and-accept-line
bindkey '^m' reset-prompt-and-accept-line
zsh will run the preexec function just before executing a line. It would be simple to have that output the current time; a simple version would be just:
preexec() { date }
Modifying an existing prompt would be much more challenging.
Building off @vitaŭt-bajaryn's cool zsh-style answer:
I think overriding the accept-line function is probably the most idiomatic zsh solution:
function _reset-prompt-and-accept-line {
    zle reset-prompt
    zle .accept-line # Note the . meaning the built-in accept-line.
}
zle -N accept-line _reset-prompt-and-accept-line
You can use ANSI escape sequences to write over the previous line, like this:
preexec () {
    DATE=`date +"%H:%M:%S on %Y-%m-%d"`
    C=$(($COLUMNS-24))
    echo -e "\033[1A\033[${C}C ${DATE} "
}
