What is the Deno equivalent of process.argv in Node.js?

When working with Node.js, I can pass arguments to a Node script like this:
$ node node-server.js arg1 arg2=arg2-val arg3
And can get the arguments like so:
// print process.argv
process.argv.forEach(function (val, index, array) {
  console.log(index + ': ' + val);
});
// Output
0: node
1: /Users/umar/work/node/node-server.js
2: arg1
3: arg2=arg2-val
4: arg3
How do I get the command-line arguments in Deno?

Deno executable path ~ process.argv[0]:
Deno.execPath()
File URL of executed script ~ process.argv[1]:
Deno.mainModule
You can use path.fromFileUrl to convert a file URL to a path string:
import { fromFileUrl } from "https://deno.land/std@0.55.0/path/mod.ts";
const modPath = fromFileUrl(import.meta.url);
Command-line arguments ~ process.argv.slice(2):
Deno.args
Example
deno run --allow-read test.ts -foo -bar=baz 42
Sample output (Windows):
Deno.execPath(): <scoop path>\apps\deno\current\deno.exe
import.meta.url: file:///C:/path/to/project/test.ts
as path: C:\path\to\project\test.ts
Deno.args: [ "-foo", "-bar=baz", "42" ]
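For reference, a test.ts that prints those values is only a few lines (a sketch; the std version pin is illustrative):
import { fromFileUrl } from "https://deno.land/std@0.55.0/path/mod.ts";

// print each value shown in the sample output above
console.log("Deno.execPath():", Deno.execPath());
console.log("import.meta.url:", import.meta.url);
console.log("as path:", fromFileUrl(import.meta.url));
console.log("Deno.args:", Deno.args);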

To get your script’s CLI arguments in Deno, just use Deno.args:
> deno run ./code.ts foo bar
console.log(Deno.args); // ['foo', 'bar']
If you need something identical to Node's process.argv for compatibility reasons, use the official 'node' shim from the standard library:
import process from 'https://deno.land/std@0.120.0/node/process.ts';
console.log(process.argv); // ['/path/to/deno', '/path/to/code.ts', 'foo', 'bar']
For illustrative purposes, if you wanted to manually construct a process.argv-style array (without using the official 'node' shim), you could do this:
import { fromFileUrl } from "https://deno.land/std@0.120.0/path/mod.ts";
const argv = [
  Deno.execPath(),
  fromFileUrl(Deno.mainModule),
  ...Deno.args,
];
console.log(argv); // ['/path/to/deno', '/path/to/code.ts', 'foo', 'bar']

Related

path not being detected by Nextflow

I'm new to nf-core/Nextflow and, needless to say, the documentation does not always reflect what is actually implemented. I'm defining the basic pipeline below:
nextflow.enable.dsl=2

process RUNBLAST {
    input:
    val thr
    path query
    path db
    path output

    output:
    path output

    script:
    """
    blastn -query ${query} -db ${db} -out ${output} -num_threads ${thr}
    """
}

workflow {
    //println "I want to BLAST $params.query to $params.dbDir/$params.dbName using $params.threads CPUs and output it to $params.outdir"
    RUNBLAST(params.threads, params.query, params.dbDir, params.output)
}
Then I'm executing the pipeline with
nextflow run main.nf --query test2.fa --dbDir blast/blastDB
Then I get the following error:
N E X T F L O W ~ version 22.10.6
Launching `main.nf` [dreamy_hugle] DSL2 - revision: c388cf8f31
Error executing process > 'RUNBLAST'
Caused by:
Not a valid path value: 'test2.fa'
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
I know test2.fa exists in the current directory:
(nfcore) MN:nf-core-basicblast jraygozagaray$ ls
CHANGELOG.md conf other.nf
CITATIONS.md docs pyproject.toml
CODE_OF_CONDUCT.md lib subworkflows
LICENSE main.nf test.fa
README.md modules test2.fa
assets modules.json work
bin nextflow.config workflows
blast nextflow_schema.json
I also tried "file" instead of path, but that is deprecated and raises other kinds of errors.
It would be helpful to know how to fix this so I can get started with the pipeline-building process.
Shouldn't Nextflow copy the file to the execution path?
Thanks
You get the above error because params.query is not actually a path value. It's probably just a simple String or GString. The solution is to instead supply a file object, for example:
workflow {
    query = file(params.query)
    BLAST( query, ... )
}
Note that a value channel is implicitly created by a process when it is invoked with a simple value, like the above file object. If you need to be able to BLAST multiple query files, you'll instead need a queue channel, which can be created using the fromPath factory method, for example:
params.query = "${baseDir}/data/*.fa"
params.db = "${baseDir}/blastdb/nt"
params.outdir = './results'

db_name = file(params.db).name
db_path = file(params.db).parent

process BLAST {

    publishDir(
        path: "${params.outdir}/blast",
        mode: 'copy',
    )

    input:
    tuple val(query_id), path(query)
    path db

    output:
    tuple val(query_id), path("${query_id}.out")

    script:
    """
    blastn \\
        -num_threads ${task.cpus} \\
        -query "${query}" \\
        -db "${db}/${db_name}" \\
        -out "${query_id}.out"
    """
}

workflow {
    Channel
        .fromPath( params.query )
        .map { file -> tuple(file.baseName, file) }
        .set { query_ch }

    BLAST( query_ch, db_path )
}
Note that the usual way to specify the number of threads/CPUs is the cpus directive, which can be configured using a process selector in your nextflow.config. For example:
process {
    withName: BLAST {
        cpus = 4
    }
}
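With the above in place, you would run the pipeline along these lines; note the single quotes around the glob pattern, so that Nextflow, not the shell, expands it (paths here are placeholders):
nextflow run main.nf --query 'data/*.fa' --db 'blastdb/nt'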

k6 custom parameter on run line

How can I add a --testenv test12 custom parameter to my tests:
k6 --out cvs=results.csv run --vus 60 --duration 2m --testenv test12 ./mass-login.js
In k6's default function, the variable is defined:
export default function () {
  // ...
  //const testenvs = ['test12', 'test1', 'test4', 'qa-04', 'prod'];
  const testenvs = ['qa-04'];
My current hack is to have different JS files that are fully redundant except for one line.
You can set an environment variable:
k6 run -e testenv=test12 ./your_script.js
… and then read it in your test:
const testenv = __ENV.testenv;
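You could then drop the hard-coded list and derive the environment from the flag, with a fallback when it isn't set (a sketch using the names from the question):
export default function () {
  // read the environment name passed via -e, defaulting to qa-04
  const testenv = __ENV.testenv || 'qa-04';
  // ... use `testenv` to pick URLs, credentials, etc.
}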

How to create shard & index with the Airflow MongoHook?

I want to run mongo commands with Airflow's MongoHook. How can I do it?
sh.shardCollection(db_name + ".collection", { _id: "hashed" }, false, { numInitialChunks: 128 });
db.collection.createIndex({ "field": 1 }, { field: true });
The pymongo client that Airflow's MongoHook uses doesn't support the sh.shardCollection helper from your script.
The createIndex collection method, however, is supported in the pymongo client.
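For the index alone you could therefore stay in Python, going through the hook's client (a sketch; the connection id, database, and field names are placeholders):
from airflow.providers.mongo.hooks.mongo import MongoHook

hook = MongoHook('mongo_default')                   # hypothetical connection id
collection = hook.get_conn()['mydb']['collection']  # hypothetical db/collection
collection.create_index([('field', 1)])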
For the rest, I recommend installing the mongosh CLI binary and baking it into the container image for your workers.
You can write your commands to a script such as /dags/templates/mongo-admin-create-index.js, or some other location where it can be found.
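A sketch of what that script might contain, mirroring the commands from the question (the db and collection names are placeholders):
// mongo-admin-create-index.js -- executed via: mongosh -f <script> <db address>
sh.shardCollection('mydb.collection', { _id: 'hashed' }, false, { numInitialChunks: 128 });
db.collection.createIndex({ field: 1 });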
Then you can implement a custom operator that uses the SubprocessHook to run a mongosh CLI command such as:
mongosh -f {mongosh_script} {db_address}
The custom operator would be along these lines:
from typing import Sequence

from airflow.compat.functools import cached_property
from airflow.hooks.subprocess import SubprocessHook
from airflow.models import BaseOperator
from airflow.providers.mongo.hooks.mongo import MongoHook

class MongoshScriptOperator(BaseOperator):
    template_fields: Sequence[str] = ('mongosh_script',)

    def __init__(
        self,
        *,
        mongosh_script: str,
        conn_id: str = 'mongo_default',
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        self.mongosh_script = mongosh_script
        self.conn_id = conn_id

    @cached_property
    def subprocess_hook(self):
        """Returns a hook for running the shell command."""
        return SubprocessHook()

    def execute(self, context):
        """Executes a mongosh script against the hook's MongoDB."""
        mh = MongoHook(self.conn_id)
        self.subprocess_hook.run_command(
            command=['mongosh', '-f', self.mongosh_script, mh.uri],
        )
When creating the DAG, you can pass the location of the script to your custom operator.
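Wiring it into a DAG would then look something like this (a sketch; the dag id, schedule, and script path are placeholders):
from datetime import datetime

from airflow import DAG

with DAG(dag_id='mongo_admin', start_date=datetime(2023, 1, 1), schedule_interval=None) as dag:
    create_index = MongoshScriptOperator(
        task_id='create_index',
        mongosh_script='/dags/templates/mongo-admin-create-index.js',
    )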

bus error on usage of rusqlite with spatialite extension

I'm seeing a bus error on cargo run when attempting to load the spatialite extension with rusqlite:
Finished dev [unoptimized + debuginfo] target(s) in 1.19s
Running `target/debug/rust-spatialite-example`
[1] 33253 bus error cargo run --verbose
My suspicion is that there's a mismatch between the sqlite version and spatialite, and that they need to be built together rather than using the bundled feature of rusqlite, though it seems like that would result in a different error?
Here's how things are set up:
Cargo.toml
[package]
name = "rust-spatialite-example"
version = "0.0.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rusqlite = { version = "0.28.0", features = ["load_extension", "bundled"] }
init.sql
CREATE TABLE place (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
SELECT AddGeometryColumn('place', 'geom', 4326, 'POINT', 'XY', 0);
SELECT CreateSpatialIndex('place', 'geom');
main.rs
use rusqlite::{Connection, Result, LoadExtensionGuard};

#[derive(Debug)]
struct Place {
    id: i32,
    name: String,
    geom: String,
}

fn load_spatialite(conn: &Connection) -> Result<()> {
    unsafe {
        let _guard = LoadExtensionGuard::new(conn)?;
        conn.load_extension("/opt/homebrew/Cellar/libspatialite/5.0.1_2/lib/mod_spatialite", None)
    }
}

fn main() -> Result<()> {
    let conn = Connection::open("./geo.db")?;
    load_spatialite(&conn)?;
    // ... sql statements that aren't executed
    Ok(())
}
Running:
cat init.sql | spatialite geo.db
cargo run
The mod_spatialite path is correct (there's an expected SqliteFailure error when that path is wrong). I tried explicitly setting sqlite3_modspatialite_init as the entry point and the behavior stayed the same.

call shell script from node.js

How can I generate a script in Node.js and pipe to the shell?
For example, I can create this file, hello.R, make it executable (chmod +x hello.R), and run it from the command line (./hello.R):
#!/usr/bin/Rscript
hello <- function(name) { return(sprintf("Hello, %s", name)) }
cat(hello("World"))
What I'd like to do is the equivalent from Node: specifically, generate a more complex R script in memory (e.g. as a string using templating, etc.), execute it (using exec or spawn?), and read stdout.
But I can't quite figure out how to pipe a script to R. I tried this (among other things):
var rscript = [
  'hello <- function(name) { return(sprintf("Hello, %s", name)) }',
  'cat(hello("World"))',
].join('\n');
var exec = require('child_process').exec;
exec(rscript, { shell: '/usr/bin/R' }, function (err, stdout, stderr) {
  if (err) throw err;
  console.log(stdout);
});
However, this fails, as it seems neither /usr/bin/R nor /usr/bin/Rscript understands the -c switch.
Check the Node.js docs for child_process. You should be able to spawn an Rscript or R command just as you would on the terminal, and send your commands over child.stdin.
var c = require('child_process');
// spawn R reading the program from stdin; --vanilla avoids the save-workspace prompt
var r = c.spawn('R', ['--vanilla', '--quiet']);
r.stdout.on('data', function (data) { console.log(String(data)); });
r.stderr.on('data', function (data) { console.error(String(data)); });
r.stdin.write(rscript);
r.stdin.end(); // R runs the script once stdin is closed
