k6 custom parameter on run line - k6

How can I add a --testenv test12 custom parameter to my tests:
k6 run --out csv=results.csv --vus 60 --duration 2m --testenv test12 ./mass-login.js
In the k6 script's default function the variable is defined:
export default function () {
:
//const testenvs = ['test12', 'test1', 'test4', 'qa-04', 'prod'];
const testenvs = ['qa-04'];
My current hack is to keep separate js files that are fully redundant except for this one line.

You can set an environment variable:
k6 run -e testenv=test12 ./your_script.js
… and then read it in your test:
const testenv = __ENV.testenv;
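For example, the script from the question might become something like this (a sketch; the fallback default and the URL scheme are assumptions):
// mass-login.js
const testenv = __ENV.testenv || 'qa-04'; // fall back to the previously hard-coded value
export default function () {
  // derive environment-specific targets from `testenv` (hypothetical scheme):
  const baseUrl = `https://${testenv}.example.com`;
  // ... rest of the login scenario
}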

Related

How to create a shard & index with the Airflow MongoHook?

I want to run mongo commands with Airflow's MongoHook. How can I do it?
sh.shardCollection(db_name + ".collection", { _id: "hashed" }, false, { numInitialChunks: 128 });
db.collection.createIndex({ "field": 1 }, { field: true });
The pymongo client that Airflow's MongoHook uses doesn't support the sh.shardCollection command in your script, though the createIndex collection method is supported.
I recommend installing the mongosh CLI binary and baking it into your workers' container image anyway.
You can write your mongosh commands to a script such as /dags/templates/mongo-admin-create-index.js, or some other location where it can be found.
Then you can implement a custom operator that uses the SubprocessHook to run a mongosh CLI command such as:
mongosh -f {mongosh_script} {db_address}
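For example, the script file could simply hold the admin commands from the question (the my_db.my_collection namespace is a placeholder):
// /dags/templates/mongo-admin-create-index.js
sh.shardCollection("my_db.my_collection", { _id: "hashed" }, false, { numInitialChunks: 128 });
db.my_collection.createIndex({ "field": 1 });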
This custom operator would be along these lines:
from typing import Sequence

from airflow.compat.functools import cached_property
from airflow.hooks.subprocess import SubprocessHook
from airflow.models import BaseOperator
from airflow.providers.mongo.hooks.mongo import MongoHook


class MongoshScriptOperator(BaseOperator):
    template_fields: Sequence[str] = ('mongosh_script',)

    def __init__(
        self,
        *,
        mongosh_script: str,
        conn_id: str = 'mongo_default',
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        self.mongosh_script = mongosh_script
        self.conn_id = conn_id

    @cached_property
    def subprocess_hook(self):
        """Returns hook for running the shell command."""
        return SubprocessHook()

    def execute(self, context):
        """Executes a mongosh script against the connection's URI."""
        mh = MongoHook(self.conn_id)
        self.subprocess_hook.run_command(
            command=['mongosh', '-f', self.mongosh_script, mh.uri],
        )
When adding the task to your DAG, you can pass the location of the script to your custom operator.
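For illustration, wiring it into a DAG might look like this (dag_id, schedule, and task_id are placeholder assumptions):
from datetime import datetime

from airflow import DAG

with DAG(dag_id='mongo_admin', start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    create_index = MongoshScriptOperator(
        task_id='create_index',
        # mongosh_script is a template field, so a Jinja expression would also work here
        mongosh_script='/dags/templates/mongo-admin-create-index.js',
    )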

How to re-create an equivalent to a Linux bash statement in Deno with Deno.run

How to re-create an equivalent of the following Linux bash statement in Deno?
docker compose exec container_name mysql -uroot -ppass db_name < ./dbDump.sql
I have tried the following:
const encoder = new TextEncoder()
const p = await Deno.run({
  cmd: [
    'docker',
    'compose',
    'exec',
    'container_name',
    'mysql',
    '-uroot',
    '-ppass',
    'db_name',
  ],
  stdout: 'piped',
  stderr: 'piped',
  stdin: 'piped',
})
await p.stdin.write(encoder.encode(await Deno.readTextFile('./dbDump.sql')))
await p.stdin.close()
await p.close()
But for some reason, whenever I do it this way, I get the error ERROR 1064 (42000) at line 145: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version, which does not happen when I run the exact same command in bash.
Could someone please explain me how it has to be done properly?
Without a sample input file, it's impossible to be certain of your exact issue.
Given the context, though, I suspect that your input file is too large for a single proc.stdin.write() call. Try using the writeAll() function to make sure the full payload goes through:
import { writeAll } from "https://deno.land/std@0.119.0/streams/conversion.ts";
await writeAll(proc.stdin, await Deno.readFile(sqlFilePath));
To show what this fixes, here's a Deno program pipe-to-wc.ts which passes its input to the Linux 'word count' utility (in character-counting mode):
#!/usr/bin/env -S deno run --allow-read=/dev/stdin --allow-run=wc
const proc = await Deno.run({
  cmd: ['wc', '-c'],
  stdin: 'piped',
});
await proc.stdin.write(await Deno.readFile('/dev/stdin'));
proc.stdin.close();
await proc.status();
If we use this program with a small input, the count lines up:
# use the shebang to make the following commands easier
$ chmod +x pipe-to-wc.ts
$ dd if=/dev/zero bs=1024 count=1 | ./pipe-to-wc.ts
1+0 records in
1+0 records out
1024 bytes (1.0 kB, 1.0 KiB) copied, 0.000116906 s, 8.8 MB/s
1024
But as soon as the input is big, only 65k bytes are going through!
$ dd if=/dev/zero bs=1024 count=100 | ./pipe-to-wc.ts
100+0 records in
100+0 records out
102400 bytes (102 kB, 100 KiB) copied, 0.0424347 s, 2.4 MB/s
65536
To fix this issue, let's replace the write() call with writeAll():
#!/usr/bin/env -S deno run --allow-read=/dev/stdin --allow-run=wc
import { writeAll } from "https://deno.land/std@0.119.0/streams/conversion.ts";

const proc = await Deno.run({
  cmd: ['wc', '-c'],
  stdin: 'piped',
});
await writeAll(proc.stdin, await Deno.readFile('/dev/stdin'));
proc.stdin.close();
await proc.status();
Now all the bytes are getting passed through on big inputs :D
$ dd if=/dev/zero bs=1024 count=1000 | ./pipe-to-wc.ts
1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB, 1000 KiB) copied, 0.0854184 s, 12.0 MB/s
1024000
Note that this will still fail on huge inputs once they exceed the amount of memory available to your program. The writeAll() solution should be fine up to 100 megabytes or so. After that point you'd probably want to switch to a streaming solution.
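For reference, a streaming variant of the wc example might look like this (a sketch against the same pre-rework API; Deno.copy was deprecated but still available at the time):
#!/usr/bin/env -S deno run --allow-read=/dev/stdin --allow-run=wc
const file = await Deno.open('/dev/stdin');
const proc = await Deno.run({
  cmd: ['wc', '-c'],
  stdin: 'piped',
});
// copy() moves the data chunk-by-chunk instead of buffering the whole file in memory
await Deno.copy(file, proc.stdin);
file.close();
proc.stdin.close();
await proc.status();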
First, a couple of notes:
Deno currently doesn't offer a way to create a detached subprocess. (You didn't mention this, but it seems potentially relevant to your scenario given typical docker compose usage.) See denoland/deno#5501.
Deno's subprocess API is currently being reworked. See denoland/deno#11016.
Second, here are links to the relevant docs:
docker-compose exec
CLI APIs > Deno.run
Manual > Creating a subprocess (Deno v1.17.0)
Now, here's a commented breakdown of how to create a subprocess (according to the current API) using your scenario as an example:
module.ts:
const dbUser = 'actual_database_username';
const dbPass = 'actual_database_password';
const dbName = 'actual_database_name';
// mysql requires the password to be attached to -p (a bare `-p pass` would prompt instead)
const dockerExecProcCmd = ['mysql', `-u${dbUser}`, `-p${dbPass}`, dbName];
const serviceName = 'actual_compose_service_name';

// Build the run command (-T disables the pseudo-TTY so stdin can be piped)
const cmd = ['docker', 'compose', 'exec', '-T', serviceName, ...dockerExecProcCmd];

/**
 * Create the subprocess
 *
 * For now, leave `stderr` and `stdout` undefined so they'll print
 * to your console while you are debugging. Later, you can pipe (capture) them
 * and handle them in your program
 */
const p = Deno.run({
  cmd,
  stdin: 'piped',
  // stderr: 'piped',
  // stdout: 'piped',
});

/**
 * If you use a relative path, this will be relative to `Deno.cwd`
 * at the time the subprocess is created
 *
 * https://doc.deno.land/deno/stable/~/Deno.cwd
 */
const sqlFilePath = './dbDump.sql';

// Write contents of SQL script to stdin
// (as the other answer notes, prefer writeAll() here once dumps get large)
await p.stdin.write(await Deno.readFile(sqlFilePath));

/**
 * Close stdin
 *
 * I don't know how `mysql` handles `stdin`, but if it needs the EOT sent by
 * closing and you don't need to write to `stdin` any more, then this is correct
 */
p.stdin.close();

// Wait for the process to finish (either OK or NOK)
const { code } = await p.status();
console.log({ 'docker-compose exit status code': code });

// Not strictly necessary, but better to be explicit
p.close();

Rack::Attack test case: can't change ENV variable value using blocklist

Could anyone share some blocklist test cases that use an ENV variable? I found that in the spec file we can't change the env variable seen by the Rails middleware.
We set the env variable in the spec file:
stub_const('ENV', 'RACK_ATTACK_BLOCK_IP_LIST' => '1.1.1.1')
In the application.yml file, there is a setting:
RACK_ATTACK_BLOCK_IP_LIST: '2.2.2.2'
If we run the test case and monitor the env values in the rack_attack.rb file, we only see the new value '1.1.1.1' inside the blocklist block, for example:
blocklist('block_ip_list') do |req|
  block_ip_list = ENV['RACK_ATTACK_BLOCK_IP_LIST'].try(:split, /,\s*/) || []
  block_ip_list.include?(req.ip)
end
If we move the block_ip_list lookup out of the blocklist block, it still reads '2.2.2.2':
block_ip_list = ENV['RACK_ATTACK_BLOCK_IP_LIST'].try(:split, /,\s*/) || []
blocklist('block_ip_list') do |req|
  block_ip_list.include?(req.ip)
end
A likely explanation: the initializer body runs once at boot, before stub_const takes effect, while the blocklist block is re-evaluated on every request, so only code inside the block sees the stubbed value.

What is the Deno equivalent of process.argv in Node.js?

When working with NodeJS, I can pass the arguments to a Node script like this:
$ node node-server.js arg1 arg2=arg2-val arg3
And can get the arguments like so:
// print process.argv
process.argv.forEach(function (val, index, array) {
  console.log(index + ': ' + val);
});
// Output
0: node
1: /Users/umar/work/node/node-server.js
2: arg1
3: arg2=arg2-val
4: arg3
How to get the command-line arguments in Deno?
Deno executable path ~ process.argv[0]:
Deno.execPath()
File URL of executed script ~ process.argv[1]:
Deno.mainModule
You can use path.fromFileUrl for conversions of URL to path string:
import { fromFileUrl } from "https://deno.land/std@0.55.0/path/mod.ts";
const modPath = fromFileUrl(import.meta.url)
Command-line arguments ~ process.argv.slice(2):
Deno.args
Example
deno run --allow-read test.ts -foo -bar=baz 42
Sample output (Windows):
Deno.execPath(): <scoop path>\apps\deno\current\deno.exe
import.meta.url: file:///C:/path/to/project/test.ts
as path: C:\path\to\project\test.ts
Deno.args: [ "-foo", "-bar=baz", "42" ]
To get your script’s CLI arguments in Deno, just use Deno.args:
> deno run ./code.ts foo bar
console.log(Deno.args); // ['foo', 'bar']
If you need something identical to Node's process.argv for compatibility reasons, use the official 'node' shim:
import process from 'https://deno.land/std@0.120.0/node/process.ts'
console.log(process.argv); // ['/path/to/deno', '/path/to/code.ts', 'foo', 'bar']
For illustrative purposes, if you wanted to manually construct a process.argv-style array (without using the official 'node' shim) you could do this:
import { fromFileUrl } from "https://deno.land/std@0.120.0/path/mod.ts";

const argv = [
  Deno.execPath(),
  fromFileUrl(Deno.mainModule),
  ...Deno.args,
];
console.log(argv); // ['/path/to/deno', '/path/to/code.ts', 'foo', 'bar']
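If you also want the raw strings parsed into flags, the std flags module can take it from there; a minimal sketch (the pinned version is arbitrary):
import { parse } from "https://deno.land/std@0.120.0/flags/mod.ts";

// deno run ./code.ts --foo --bar=baz 42
const flags = parse(Deno.args);
console.log(flags); // { _: [ 42 ], foo: true, bar: "baz" }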

call shell script from node.js

How can I generate a script in Node.js and pipe to the shell?
E.g. I can create a file such as hello.R, make it executable (chmod +x hello.R), and run it from the command line (./hello.R):
#!/usr/bin/Rscript
hello <- function(name) { return(sprintf("Hello, %s", name)) }
cat(hello("World"))
What I'd like to do is to do the equivalent from Node. Specifically generate a more complex R script in memory (e.g. as a string using templating, etc.), execute it (using exec or spawn?), and read stdout.
But I can't quite figure out how to pipe a script to R. I tried this (among other things):
var rscript = [
  'hello <- function(name) { return(sprintf("Hello, %s", name)) }',
  'cat(hello("World"))'
].join('\n');
var exec = require('child_process').exec;
exec(rscript, { shell: '/usr/bin/R' }, function(err, stdout, stderr) {
  if (err) throw(err);
  console.log(stdout);
});
However, this fails because it seems neither /usr/bin/R nor /usr/bin/Rscript understands the -c switch that exec passes to the shell.
Check the Node.js docs for child_process. You should be able to spawn an Rscript or R command just as you would on the terminal, and send your commands over child.stdin.
var c = require('child_process');
// spawn's second argument must be an array; R needs --no-save to run non-interactively
var r = c.spawn('R', ['--no-save', '--quiet']);
/* now you should be able to read the results from r.stdout a/o r.stderr */
r.stdout.on('data', function (data) { process.stdout.write(data); });
r.stdin.write(rscript);
r.stdin.end(); // signal EOF so R evaluates the script and exits
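Alternatively, since the script is generated in memory anyway, you could skip stdin entirely and hand the code to Rscript with -e (a sketch; assumes Rscript is on the PATH, and note -e has a length limit, so it suits short scripts):
var execFile = require('child_process').execFile;

// -e evaluates the given code directly, so no temp file or stdin plumbing is needed
execFile('Rscript', ['-e', rscript], function (err, stdout, stderr) {
  if (err) throw err;
  console.log(stdout); // -> Hello, World
});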
