Locust task method can't get the global value when "on_test_start" exists

I have a task in a Locust script. Before the task runs with hundreds of users, I want another operation to be able to change the variable "user_var", so that its new value is used while the task is running.
But unfortunately, when I run the script, the result is different: "user_var" has been changed inside on_test_start, but in the task its value is still zero.
I printed the variable's id and it is different. So what happened? Can somebody tell me? Thanks.
Code as follows:
import os
from locust import HttpUser, task, between, events

base_url = "http://baidu.com"
user_var = 0
print("init var id:{}".format(id(user_var)))

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    global user_var
    user_var = 1
    print("method var id:{}".format(id(user_var)))
    print("user_var:{}".format(user_var))

class MyService(HttpUser):
    wait_time = between(1, 2)

    @task()
    def points_acquire(self):
        print("class var id:{}".format(id(user_var)))
        print("user_var:{}".format(user_var))

if __name__ == '__main__':
    run_script = os.path.basename(__file__)
    master_cmd = 'start locust -f {} --host={} --master '.format(run_script, base_url)
    worker_cmd = ' && start locust -f {} --worker'.format(run_script)
    total_cmd = master_cmd + worker_cmd * 1
    os.system(total_cmd)

Per the definition of the event:
"""
Fired when a new load test is started. It's not fired again if the number of
users change during a test. When running locust distributed the event is only fired
on the master node and not on each worker node.
"""
https://docs.locust.io/en/1.4.0/writing-a-locustfile.html#test-start-and-test-stop
So you are initializing the variable as 0; the event doesn't fire on worker nodes, so it stays 0 there, while on the master node the event fires and changes the value.
Edit: It seems the logic behind this has changed since I last checked; it might work if you update your Locust version.
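If you also need the updated value inside the worker processes (master and workers are separate processes, so they do not share the module-level global), newer Locust versions (2.1+) provide a custom-message API you can use to push the value from the master to the workers. A rough sketch reusing the names from the question; the message name "user_var" is just an example:
from locust import HttpUser, task, between, events
from locust.runners import MasterRunner, WorkerRunner

user_var = 0

def on_user_var(environment, msg, **kwargs):
    # Runs on each worker when the master broadcasts a "user_var" message
    global user_var
    user_var = msg.data

@events.init.add_listener
def on_locust_init(environment, **kwargs):
    if isinstance(environment.runner, WorkerRunner):
        environment.runner.register_message("user_var", on_user_var)

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    global user_var
    user_var = 1
    if isinstance(environment.runner, MasterRunner):
        # Forward the new value to all connected workers
        environment.runner.send_message("user_var", user_var)

class MyService(HttpUser):
    wait_time = between(1, 2)

    @task()
    def points_acquire(self):
        print("user_var:{}".format(user_var))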

Related

How to call a function again inside a function in PowerShell

I'm trying to implement a polling function for a data movement operation in Dynamics 365.
I have a PowerShell script with a function called Test(pera1, pera2, pera3, ...). When we call this function it takes around 1 hour or more to complete. Then I have to call another task based on this Test() function's result. If my condition matches I call the other task; otherwise I have to put Start-Sleep -Seconds 3600 and wait for the operation to complete.
function EnvOperationPooling($pera1, $pera2, $pera3)
{
    # here API call code
    # API result
    if ($apiResponse.DeploymentState -eq 'Inprogress')
    {
        Start-Sleep -Seconds 3600
        EnvOperationPooling $pera1 $pera2 $pera3
        Write-Host "##vso[task.setvariable variable=DeploymentState;isOutput=true]$($apiResponse.DeploymentState)"
    }
    else
    {
        Write-Host "##vso[task.setvariable variable=DeploymentState;isOutput=true]$($apiResponse.DeploymentState)"
    }
}
How can I make it recursive? Any suggestion to make the above code better?
For a recursive operation and to save time in your process, you should use PowerShell background jobs.
If you start a background job, the command prompt returns immediately, even if the job takes an extended time to complete. You can continue to work in the session without interruption while the job runs. For example:
Start-Job -ScriptBlock {Get-Process}
Refer to the PowerShell background jobs documentation (about_Jobs) for more details.
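As a rough sketch of how the polling from the question could run in a background job (Invoke-DeploymentCheck is a hypothetical stand-in for your API call; adjust the poll interval to your needs):
# Start the long-running poll in a background job so the session stays responsive.
$job = Start-Job -ScriptBlock {
    param($pera1, $pera2, $pera3)
    do {
        Start-Sleep -Seconds 60
        $apiResponse = Invoke-DeploymentCheck $pera1 $pera2 $pera3   # hypothetical API call
    } while ($apiResponse.DeploymentState -eq 'Inprogress')
    $apiResponse.DeploymentState    # becomes the job's output
} -ArgumentList $pera1, $pera2, $pera3

# ... do other work here while the job keeps polling ...

Wait-Job $job | Out-Null
$state = Receive-Job $job
Write-Host "##vso[task.setvariable variable=DeploymentState;isOutput=true]$state"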

Limit number of instances of Symfony Command

I have a command in my Symfony app launched by cron. I want to be able to limit the number of instances executed at the same time on my server, let's say 4 instances. I don't have any clue how to do this. I found how to lock the command so that it is launched only once and waits for the previous run to finish, but I don't know how to allow more than one instance while still capping the total number.
Do you have an idea ?
What you are looking for is a semaphore.
There is a LockComponent currently scheduled for 3.4 (was pulled from 3.3). It is a major improvement over the LockHandler in the FilesystemComponent.
In a pinch, you can probably pool a fixed number of locks from the LockHandler. I don't recommend it, because it uses flock on the filesystem. This limits the lock to a single server. Additionally, flock may be limited to the process scope on some systems.
<?php

use Symfony\Component\Filesystem\LockHandler;

define('LOCK_ID', 'some-identifier');
define('LOCK_MAX', 5);

$lockPool = [];
for ($i = 1; $i <= LOCK_MAX; $i++) {
    $lockHandle = sprintf('%s-%s.lock', LOCK_ID, $i);
    $lockPool[$i] = new LockHandler($lockHandle);
}

$activeLock = null;
$lockTimeout = 60; // seconds to wait before giving up
$lockWaitStart = microtime(true);

while (!$activeLock) {
    foreach ($lockPool as $lockHandler) {
        if ($lockHandler->lock()) {
            $activeLock = $lockHandler;
            break 2;
        }
    }
    if ($lockTimeout && (microtime(true) - $lockWaitStart > $lockTimeout)) {
        break;
    }
    // Randomly wait between 0.1ms and 10ms before trying again
    usleep(mt_rand(100, 10000));
}
A much better and more efficient solution would be to use the semaphore extension and work some magic with ftok, shm_* and sem_*.
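For illustration, a minimal sketch using the semaphore extension (assuming the extension is installed; the second argument of sem_get caps the number of concurrent holders, and the non-blocking flag of sem_acquire needs PHP >= 5.6.1):
<?php
$key = ftok(__FILE__, 'a');        // derive a System V IPC key from this file
$sem = sem_get($key, 4);           // allow at most 4 concurrent instances
if (!sem_acquire($sem, true)) {    // non-blocking acquire
    exit(0);                       // 4 instances already running, bail out
}
try {
    // ... run the command's work here ...
} finally {
    sem_release($sem);
}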
I suggest you use a process control system such as Supervisor. It's pretty simple to use and you can choose how many instances of your script to start.
http://supervisord.org/
You could use a shared counter file which holds a counter that gets increased when the command starts running and decreased just before it finishes, as sketched below.
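A rough sketch of that idea (it assumes /tmp is writable and that every instance exits through the finally block; a crashed instance would leave the counter too high):
<?php
$max = 4;
$fh = fopen('/tmp/my_command.counter', 'c+');
flock($fh, LOCK_EX);                        // serialize access to the counter
$count = (int) stream_get_contents($fh);
if ($count >= $max) {
    flock($fh, LOCK_UN);
    fclose($fh);
    exit(0);                                // enough instances already running
}
ftruncate($fh, 0);
rewind($fh);
fwrite($fh, (string) ($count + 1));         // register this instance
flock($fh, LOCK_UN);

try {
    // ... run the command's work here ...
} finally {
    flock($fh, LOCK_EX);                    // unregister this instance
    rewind($fh);
    $count = (int) stream_get_contents($fh);
    ftruncate($fh, 0);
    rewind($fh);
    fwrite($fh, (string) max(0, $count - 1));
    flock($fh, LOCK_UN);
    fclose($fh);
}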
Another solution would be checking the process list with something like this:
$processCount = exec('ps aux | grep "some part of the console command you run" | grep -v "grep" | wc -l');
if (!empty($processCount) && $processCount >= X) {
    return false;
}
You can create a "launcher command" executed by your cron or Supervisor.
This Symfony command can launch your instances with the Process component on your server. You can also check whatever you want to check and do whatever you need, just like with PHP's exec function.
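For example, a rough sketch of such a launcher using the Process component (app:my-worker is a hypothetical name for the command being launched; the array command syntax requires a recent version of the component):
<?php
use Symfony\Component\Process\Process;

$instances = 4;
$processes = [];
for ($i = 0; $i < $instances; $i++) {
    $process = new Process(['php', 'bin/console', 'app:my-worker']);
    $process->start();              // non-blocking start
    $processes[] = $process;
}
foreach ($processes as $process) {
    $process->wait();               // block until each child finishes
}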

Spring MVC Controller firing 5 times per request! Why?

Guys!
I have some services that should fire once a day. But for some reason, when TWS fires the job it executes 5 times, with 15 minutes between each execution.
I don't use any kind of scheduling in my project.
The strangest part is that when I call the same URL from my browser the method executes only once, but when TWS executes it we get that strange behavior.
@Controller
public class ArquivoController {

    Calendar cal = null;

    @Autowired
    @Qualifier("arquivoService")
    private ArquivoServiceImpl arquivoService;

    @RequestMapping(value = "/valida")
    public String valida(Locale locale, ModelMap model, HttpServletRequest request, @ModelAttribute Busca busca) throws Exception {
        String indReprocessa = "";
        String seqArq = (String) request.getParameter("Processar");
        try {
            logger.info("Carregando CDN...");
            model.put("cdn", new CDN());
            arquivoService.validaArquivos(busca, seqArq, indReprocessa);
            model.addAttribute(class.atributoDeRetornoDoWebService.value(), class.sucesso.value());
        } catch (final Exception e) {
            model.addAttribute(class.atributoDeRetornoDoWebService.value(), class.erro.value());
        }
        // return Paginas.home.name();
        return class.paginaDeRetornoDoWebService.value();
    }
}
When I call the method in my browser:
http://server/contextName/valida
It fires only once.
If TWS fires it via:
/appl/share/xxx/tws-chamada-wsrest.sh http://server/contextName/valida
it fires 5 times with a 15-minute interval.
Here is the shell script content:
URL_APLICACAO=$1
##############################
RESULTADO_WS=$( /cygdrive/d/jobs/bin/wget.exe $URL_APLICACAO -nv -q --no-proxy -O arquivo.log)
RETORNO=$(grep -c "RESULTADO_WS 1" arquivo.log)
echo $RETORNO
I'm using Tomcat 7.0.57. I really need some help.
I've found the "problem".
wget has some default properties that explain that behavior. By default wget has a read timeout of 900 seconds (15 minutes) and retries 5 times.
I just had to set the parameters:
--read-timeout=0 (to disable timeout, in my case)
--tries=1 (to execute only once)
RESULTADO_WS=$(wget $URL_APLICACAO -nv -q --no-proxy --read-timeout=0 --tries=1 -O arquivo.log)
I hope that helps someone.

How to check if a Process is Running or Not

I am starting a process using the below code
QProcess* process = new QProcess();
process->start(Path);
The start method will start a third party application.
If the process is already running, I should not call process->start(Path) again.
The process pointer is private member of the class.
From the docs for QProcess ...
There are at least 3 ways to check if a QProcess instance is running.
QProcess.pid(): if it's running, the pid will be > 0
QProcess.state(): check it against the ProcessState enum to see if it's QProcess::NotRunning
QProcess.atEnd(): it's not running if this is true
If any of these are not working as you would expect, then you will need to post a specific case of that example.
To complement @jdi's answer with a real-life code example:
QString executable = "C:/Program Files/tool.exe";
QProcess *process = new QProcess(this);
process->start(executable, QStringList());
// some code
if (process->state() == QProcess::NotRunning) {
    // do something
}
QProcess::ProcessState constants are:
Constant Value Description
QProcess::NotRunning 0 The process is not running.
QProcess::Starting 1 The process is starting, but the program has not yet been invoked.
QProcess::Running 2 The process is running and is ready for reading and writing.
Documentation is here.

is node.js' console.log asynchronous?

Are console.log/debug/warn/error in node.js asynchronous? I mean, will JavaScript code execution halt until the stuff is printed on screen, or will it print at a later stage?
Also, I am interested in knowing if it is possible for a console.log to NOT display anything if the statement immediately after it crashes node.
Update: Starting with Node 0.6 this post is obsolete, since stdout is synchronous now.
Well let's see what console.log actually does.
First of all it's part of the console module:
exports.log = function() {
process.stdout.write(format.apply(this, arguments) + '\n');
};
So it simply does some formatting and writes to process.stdout, nothing asynchronous so far.
process.stdout is a getter defined on startup which is lazily initialized, I've added some comments to explain things:
.... code here...
process.__defineGetter__('stdout', function() {
if (stdout) return stdout; // only initialize it once
/// many requires here ...
if (binding.isatty(fd)) { // a terminal? great!
stdout = new tty.WriteStream(fd);
} else if (binding.isStdoutBlocking()) { // a file?
stdout = new fs.WriteStream(null, {fd: fd});
} else {
stdout = new net.Stream(fd); // a stream?
// For example: node foo.js > out.txt
stdout.readable = false;
}
return stdout;
});
In the case of a TTY and UNIX we end up here; this thing inherits from socket. So all that node basically does is push the data onto the socket, and then the terminal takes care of the rest.
Let's test it!
var data = '111111111111111111111111111111111111111111111111111';
for(var i = 0, l = 12; i < l; i++) {
data += data; // warning! gets very large, very quick
}
var start = Date.now();
console.log(data);
console.log('wrote %d bytes in %dms', data.length, Date.now() - start);
Result
....a lot of ones....1111111111111111
wrote 208896 bytes in 17ms
real 0m0.969s
user 0m0.068s
sys 0m0.012s
The terminal needs around 1 second to print out the socket's content, but node only needs 17 milliseconds to push the data to the terminal.
The same goes for the stream case, and the file case also gets handled asynchronously.
So yes, Node.js holds true to its non-blocking promises.
console.warn() and console.error() are blocking. They do not return until the underlying system calls have succeeded.
Yes, it is possible for a program to exit before everything written to stdout has been flushed. process.exit() will terminate node immediately, even if there are still queued writes to stdout. You should use console.warn to avoid this behavior.
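A rough way to see that failure mode (a sketch; whether the stdout line is actually lost depends on your Node version, the platform, and whether stdout is a pipe, a file or a TTY):
// Run as: node script.js 2> err.log | cat
// stdout then goes to a pipe (asynchronous), stderr goes to a file (synchronous).
console.log('this write may be lost');    // queued on the stdout pipe
console.error('this write is flushed');   // written synchronously to err.log
process.exit(1);                          // exits immediately, possibly before stdout drains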
My conclusion, after reading the Node.js 10.x docs (attached below), is that you can use console.log for logging; console.log is synchronous and implemented in low-level C.
Although console.log is synchronous, it won't cause a performance issue as long as you are not logging huge amounts of data.
(The command-line example below demonstrates console.log being async and console.error being sync.)
Based on the Node.js docs:
The console functions are synchronous when the destination is a terminal or a file (to avoid lost messages in case of premature exit) and asynchronous when it's a pipe (to avoid blocking for long periods of time).
That is, in the following example, stdout is non-blocking while stderr is blocking:
$ node script.js 2> error.log | tee info.log
In daily use, the blocking/non-blocking dichotomy is not something you should worry about unless you log huge amounts of data.
Hope it helps
console.log is asynchronous on Windows while it is synchronous on Linux/macOS. To make console.log synchronous on Windows, write this line at the start of your code, probably in your index.js file. Any console.log after this statement will be treated as synchronous by the interpreter.
if (process.stdout._handle) process.stdout._handle.setBlocking(true);
You can use this for synchronous logging:
const fs = require('fs')
fs.writeSync(1, 'Sync logging\n')
